Final Renders

Reference Images

For my final capstone project, I wanted closure on a concept I had attempted a few times during my time at school: creating CG artwork at a miniature scale. My previous attempts had fallen short, so with the experience and knowledge I had since accumulated, I set out to make a convincing photoreal render mimicking macrophotography.
Code Red by Toby Johnson, David Hollenbeck, Ed Barrera, Brooke Floyd, Victoria de la Hoz, Cole Laberge, Marla Jones
A big issue with those earlier pieces was that I didn't take real-world reference, so the materials and textures never hit the level of realism I was hoping to achieve.
Inspiration
One of my biggest influences for this project was the impressive photorealism Animal Logic achieved in The Lego Movie. I was interested in how they made their characters so lifelike, as if they were actual Legos that kids would play with.
One thing the artists did when researching their approach to these assets was to take photo reference of their scenes using Lego they had built themselves. This gave them a much better understanding of how Lego appears at a macro scale.

I wanted to achieve a look similar to the Legos in The Lego Movie, so I decided to do a focused study on look development and rendering using a minifigure character, recreating macrophotography reference and following real-world processes.
Reference Gathering
My initial idea for the project was to set the scene outdoors. But when thinking about the scope of the project, I realized I would rather focus on the character than on the rest of the environment. Still, I wanted to practice working with the probe lens I was using to capture reference, so I went and shot some photos at a local park.
I needed reference where I had greater control over the lighting, so I moved my environment to a studio setting and captured my reference there.
I kept the lighting setup simple: a single key that filled out one side of the face, making sure the lighting gave the face shape.
I then went about gathering reference for the material properties of my minifigure. I noticed quite a few different materials in the reference: the base plastic, the paint that forms the clothing and facial features, and a metallic paint used for the buttons on the shirt.

In order to shade this accurately, I would need to layer the materials in the correct order. To do so, I researched how Lego manufactures its minifigures, which gave me some crucial information.
Material Layering Workflow
I found a fascinating video that goes through the minifigure manufacturing process step by step. The paint is pressed on with an ink pad in a process called pad printing.


I noticed that not all of the paint is applied at once; it is done in separate layers. This process explained some of the paint interactions I had found, like one layer of paint bleeding through to the next.

I decided to layer my minifigure in the following order, from base to topmost (a quick compositing sketch of this idea follows the list).
1. Base Plastic - what makes up the yellow head and red body
2. Main Paint - all of the beard and facial features, as well as the details in the shirt
3. Metallic Paint - the buttons on the shirt
4. Outline Paint - all of the black paint that acts as an outline for the eyes, mouth, and shirt
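Before building anything, it helped me to think of each paint pass as a straight "over" composite on top of the base plastic. The snippet below is only a minimal sketch of that idea in Python with NumPy; the masks and colors are placeholders, not my actual textures.

```python
import numpy as np

def composite_layers(base, layers):
    """Composite paint layers over a base color, bottom to top.

    base:   (H, W, 3) float array, the base plastic albedo.
    layers: list of (color, mask) tuples in application order, where color
            is an RGB triple and mask is an (H, W) array in [0, 1].
    """
    out = base.copy()
    for color, mask in layers:
        m = mask[..., None]  # broadcast the mask across RGB
        # Straight "over": wherever the mask is on, the new paint covers
        # whatever was printed before it.
        out = out * (1.0 - m) + np.asarray(color, dtype=float) * m
    return out

# Placeholder 4x4 example: red torso plastic, then main paint, metallic
# paint, and finally the black outline.
h = w = 4
base_plastic  = np.ones((h, w, 3)) * np.array([0.8, 0.05, 0.05])
main_mask     = np.zeros((h, w)); main_mask[1:3, 1:3] = 1.0
metallic_mask = np.zeros((h, w)); metallic_mask[2, 2] = 1.0
outline_mask  = np.zeros((h, w)); outline_mask[1, 1] = 1.0

albedo = composite_layers(base_plastic, [
    ([0.90, 0.80, 0.60], main_mask),      # main paint
    ([0.70, 0.70, 0.75], metallic_mask),  # metallic paint
    ([0.02, 0.02, 0.02], outline_mask),   # outline paint
])
print(albedo[0, 0], albedo[1, 1], albedo[2, 2])
```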
Initial Passes
The model for the minifigure was done by Timothee Maron for FlippedNormals. I purchased it to ensure my model had the highest fidelity and closeness to the real thing. Here is the link for those interested.
The first thing I did was extract the design of the head and body from the reference I took. I wanted the designs to be exact, so instead of recreating the shapes in Substance Painter, I created masks using the UVs of my model. Using Photoshop, I color-selected each paint color and assigned it to the layers described above.

I mainly used the color selection tool for the more complex shapes like the black outline and tan lines.
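For anyone who would rather script this step than use Photoshop's color selection, a rough equivalent is to key each paint color by its distance in RGB. This is a sketch only, assuming Pillow and NumPy are available; the file name, target color, and tolerance are hypothetical.

```python
import numpy as np
from PIL import Image

def color_key_mask(image_path, target_rgb, tolerance=30.0):
    """Build a white-on-black mask of every pixel close to target_rgb.

    Roughly mirrors a color-range selection: a pixel is selected when its
    Euclidean distance to the target color is within the tolerance.
    """
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    dist = np.linalg.norm(img - np.asarray(target_rgb, dtype=np.float32), axis=-1)
    mask = (dist <= tolerance).astype(np.uint8) * 255
    return Image.fromarray(mask, mode="L")

# Hypothetical usage: pull the black outline paint out of a flattened
# reference image and save it as a mask.
# color_key_mask("reference_flat.png", target_rgb=(20, 20, 20)).save("outline_mask.png")
```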

I then took each mask and placed it on my UV map so it would line up correctly in Substance.

Once I had masks for each layer I needed, I went into Substance Painter and began to texture, creating folders to isolate the different layers of paint.

I imported the model with the parts exploded to avoid any mesh map issues.

This was my first pass, just getting the larger paint shapes into place. The color wasn't coming through correctly from Photoshop, and everything had a desaturated, washed-out look. I realized it would be better to just export the paint layers as masks so that I would have more control.

I also exported my maps into RenderMan and did a test render, and the results looked much the same.
I knew I had a lot of work to do, and that I would have to look further into the material properties of Lego to get a more realistic look.
For the roughness map, my textures would need a lot of small scratches and wear damage. Smaller items have much subtler edge wear, but even the slightest damage still leaves visible marks, which was apparent in my reference. I also took a deep look at the color and spent a good amount of time getting those values to where I wanted them.
I was starting to feel better about the roughness breakup in the body. My attention shifted to softening the facial features to better match the reference. I also added very subtle height variation to the paint areas to give them a raised feel.
I went in and added much finer detail to my roughness and diffuse maps to improve the look. Once I was ready to export my texture maps, I isolated each layer and rendered the maps at 8K resolution so they would hold up even at very close distances. I was ready to build my layered shader.
PxrLayerSurface Shader 
The obvious solution for keeping each paint layer separate from the others was the PxrLayerSurface shader in RenderMan. It requires a mask and a PxrLayer input for each layer, and each PxrLayer has its own shader parameters, similar to a PxrSurface.

Node network for the minifigure material.

Each layer uses a diffuse and roughness map. The mask for each paint layer is put directly underneath its PxrLayer. The displacement map is applied to the whole material in a separate node. 
I now had my shader network set up to layer the materials from bottom to top, as I had seen in the manufacturing video. The separation meant that each layer could have its own properties, independent of the others, like specular face color in the metallic paint or subsurface scattering in the base plastic.
In order to maintain physical accuracy in my minifigure, I wanted to make sure any available measured values, like IOR, were used. Lego minifigures are made of ABS plastic, which has an IOR (index of refraction) of 1.460. Once I had the base material set, I imported my textures from Substance Painter.
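As an aside, the IOR maps directly to how reflective a dielectric is facing the camera through the normal-incidence Fresnel term, F0 = ((n - 1) / (n + 1))^2. A quick check with the value above:

```python
# Normal-incidence reflectance (F0) of a dielectric, derived from its IOR.
n = 1.460  # IOR used for the ABS base plastic
f0 = ((n - 1.0) / (n + 1.0)) ** 2
print(f"F0 = {f0:.4f}")  # ~0.0350, i.e. roughly 3.5% reflectance facing the camera
```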
Here are my texture maps on the model. I also included the masks I created to demonstrate the control I had within the shader network to make adjustments after texturing. 
Subsurface Parameters
A major parameter that had to be dialed in accurately was subsurface scattering. An accurate subsurface depth and color would give me the soft plastic feel I was missing. To get there, I observed how light scattered through my minifigure and determined that the head and legs scattered the most, while the body barely scattered at all.
To differentiate the values, I would need to mask out each body part and assign it a different scattering distance based on my approximation. The subsurface color and mean free path color were the same across the pieces, so those stayed constant.
My instructor introduced me to a RenderMan node called PxrLayeredBlend, which lets you do exactly that, but with color information. So I made an ID mask in Substance Painter with a separate RGB value for each body part, then used those colors in the PxrLayeredBlend to drive the values.

Each color could be a separate value for scattering distance in the plastic.

The Mean Free Path distance could be adjusted by changing the value of the color in the blend. Thanks Caleb!
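The underlying idea is that each channel of the ID mask selects one body part, so each channel can drive its own scattering distance. Outside of RenderMan, the same math looks roughly like the sketch below; which channel maps to which part, and the helper itself, are assumptions for illustration, and the distances shown are the ones I eventually landed on through wedging.

```python
import numpy as np

def per_part_mfp(id_mask, head_dist, body_dist, legs_dist):
    """Turn an RGB ID mask into a per-pixel scattering (Mean Free Path) distance.

    id_mask: (H, W, 3) float array where, in this sketch, R marks the head,
             G the body, and B the legs (each channel roughly 0 or 1).
    """
    r, g, b = id_mask[..., 0], id_mask[..., 1], id_mask[..., 2]
    return r * head_dist + g * body_dist + b * legs_dist

# Tiny dummy mask just to show the mapping; the real distances below were
# found through wedging.
dummy = np.zeros((2, 2, 3))
dummy[0, 0, 0] = 1.0  # one head pixel
dummy[1, 1, 2] = 1.0  # one legs pixel
print(per_part_mfp(dummy, head_dist=30.0, body_dist=0.125, legs_dist=1.0))
```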

Unfortunately, scattering depth for real-world plastic is not a value that is readily available, so next came the process of iterating and visually testing to see which settings looked most like my reference.
To make determining values simpler, I used a process called wedging, in which you render multiple versions of the same image while incrementing the value you want to test by a set amount to see the difference. It is great for iterating quickly.
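As a rough illustration, a wedge can be as simple as a small batch loop in Maya's Python; everything here is hypothetical (the attribute path, the value list, and using cmds.render() as a stand-in for however you actually launch the frame), but it shows the pattern: one parameter steps through a list while everything else stays fixed.

```python
import maya.cmds as cmds

# Hypothetical wedge: step a single scattering-distance attribute through a
# list of values and render a frame for each one so they can be compared.
# "myLayeredBlend.mfpDistanceHead" is a made-up attribute path; substitute
# whatever parameter you are actually testing.
wedge_values = [5, 10, 20, 30, 40, 60]

for value in wedge_values:
    cmds.setAttr("myLayeredBlend.mfpDistanceHead", value)
    # Tag each output so the renders are identifiable by their wedge value.
    cmds.setAttr("defaultRenderGlobals.imageFilePrefix",
                 "wedge_mfp_{:g}".format(value), type="string")
    cmds.render("renderCam")  # stand-in for however you kick off the frame
```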
After wedging a few renders with increasing depth values, I determined that values of 30 for the head, 0.125 for the body, and 1 for the legs were the closest to my reference.

When the distance isn't deep enough, the minifigure looks too hard and fake. When it's too deep, it looks almost gummy, and some artifacting appears. Somewhere in the middle looked the best.

Scene Lighting and Composition
Next came the challenge of matching lighting for the scene.
My setup when shooting my reference was just one large key so I could accurately extract the shapes for the paint. I ended up liking the subtler lighting anyway, so I wanted to recreate it.
In my initial tests, I was unhappy with how flat and fake my renders were coming out. It seemed like the materials were off and weren't interacting with the light the way I wanted.
There were a lot of texturing changes I needed to make as well, but I was more concerned with how the light interacted with the base plastic.
I realized that a lot of the depth and indirect lighting shaping the character in my reference came from the environment I shot it in, a photography studio. So I went back, captured a 360 HDRI of the studio with a Ricoh Theta camera, and used that as the dome light for my scene.

With the HDRI in place, I could lower the exposure of the dome light while still keeping those important ceiling lights that shape the top of the Lego head.

With this implemented, I immediately noticed that my renders were getting closer and closer to looking like my reference, but something was still off. 
I was using the wrong focal length! The probe lens I shot with has a fixed 24mm focal length, so I adjusted my Maya camera to match, and the model and reference lined up nearly perfectly.
I also realized my lights had a slight green tint in my reference, so I adjusted that as well and dropped the exposure of the key a little.
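In Maya these were quick scene tweaks. A minimal sketch, assuming hypothetical node names for the camera and key light:

```python
import maya.cmds as cmds

# "renderCamShape" and "keyLightShape" are placeholder names; substitute
# your own camera shape and key light.

# Match the render camera to the probe lens' fixed 24mm focal length.
cmds.setAttr("renderCamShape.focalLength", 24)

# Pull the key down a touch; the subtle green tint from the reference was
# dialed in on the light's color swatch by eye.
cmds.setAttr("keyLightShape.exposure", -0.5)
```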
Tweaks and Rendering
Once I had my textures imported and my lighting and camera setup looking the way I wanted, I was ready to render. I planned single-frame renders of each reference image I took, as well as some very close-up shots of the body and head. I would also render a turntable so the full minifigure could be viewed.

Before I could do that, I wanted to take one last look at my most recent progress and see what small tweaks I could make before I called the project finished. 

Because I had masks for pretty much every part I needed to adjust, I was able to use the PxrBlend node in RenderMan to isolate the areas where I wanted to modify the color or roughness.
Using a mask of just the minifigure's head, I was able to isolate the roughness map for that area. I wanted to decrease the contrast of my map there, and by blending the head with the rest of the body I had control over both.
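Conceptually, the blend does something like the sketch below: pull the head's roughness values toward a midpoint (lowering contrast) while leaving everything outside the mask untouched. The function, pivot, and values are illustrative assumptions, not my actual maps.

```python
import numpy as np

def lower_roughness_contrast(roughness, head_mask, amount=0.5, pivot=0.5):
    """Reduce roughness contrast inside head_mask; leave the rest untouched.

    roughness: (H, W) float map in [0, 1].
    head_mask: (H, W) float mask, 1.0 on the head, 0.0 elsewhere.
    amount:    how far to pull values toward the pivot (0 = no change).
    """
    flattened = roughness + (pivot - roughness) * amount  # lower-contrast version
    return roughness * (1.0 - head_mask) + flattened * head_mask

# Made-up values just to show the behavior.
rough = np.array([[0.2, 0.8],
                  [0.5, 0.9]])
mask  = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
print(lower_roughness_contrast(rough, mask))
```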
Final Thoughts
This project was extremely helpful in making me proficient at studying reference and breaking down how objects are made. I learned so much about light interaction and color, as well as macrophotography and good practices for it.

In terms of what I could adjust or develop further, I think there's much more I can do to push the softness of the paint in the face, emphasizing things like bleed and height variation between layers.

I look forward to working on more projects that focus on small things; there is so much beauty in the detail revealed in everyday objects where we would never expect it. I would like to go even deeper, maybe two or three times closer than I got in these renders.

Special Thanks:
Caleb Kicklighter, Jeff Nichols, Matthew Justice