Materials Rendering

Bryan Tong

This project expands upon my Path Tracer to include more complicated materials, such as mirror, glass, and microfaceted surfaces, as well as environment lighting and depth-of-field effects from a simulated thin lens.

Part 1: Mirror and Glass Materials

Here we see a sequence of renders of CBspheres.dae with max_ray_depth set to different levels. As we allow rays more bounces, the mirror and glass spheres become increasingly detailed and well-lit, while the room as a whole grows noisier. With 0 and 1 bounces, the surfaces of both the mirror and glass spheres are intuitively black, since no light has yet reflected off of or refracted through them. Refraction barely begins to appear at depth 2, and the glass sphere only reflects and refracts the way real glass would at depth 3. At depth 4, the light and shadow effects cast by the glass ball onto the floor become evident. At depth 5 and above, additional light effects appear on the walls and the image becomes increasingly noisy.
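To make the reflection and refraction behavior concrete, here is a minimal sketch of how a perfect mirror bounce and a Snell's-law refraction can be computed in the local shading frame, where the surface normal is (0, 0, 1). The Vector3 struct and function names are illustrative stand-ins, not the project's actual classes.

    #include <cmath>

    // Minimal stand-in for the renderer's vector type (illustrative only).
    struct Vector3 { double x, y, z; };

    // Perfect mirror: in the local frame with normal (0, 0, 1), reflecting
    // the outgoing direction wo just negates its tangential components.
    Vector3 reflect(const Vector3& wo) {
        return Vector3{-wo.x, -wo.y, wo.z};
    }

    // Snell's-law refraction in the local frame. `ior` is the index of
    // refraction of the glass; returns false on total internal reflection.
    bool refract(const Vector3& wo, Vector3* wi, double ior) {
        bool entering = wo.z > 0.0;               // is the ray arriving from outside?
        double eta = entering ? 1.0 / ior : ior;  // ratio of indices of refraction
        double cos2_t = 1.0 - eta * eta * (1.0 - wo.z * wo.z);
        if (cos2_t < 0.0) return false;           // total internal reflection
        double sign = entering ? -1.0 : 1.0;
        *wi = Vector3{-eta * wo.x, -eta * wo.y, sign * std::sqrt(cos2_t)};
        return true;
    }

A typical glass BSDF then chooses between the reflected and refracted ray probabilistically using a Fresnel term (e.g. Schlick's approximation), which is why the glass sphere only starts looking right once enough bounces are allowed for both events to happen along a path.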

CBspheres.dae with max_ray_depth set to 0
CBspheres.dae with max_ray_depth set to 1
CBspheres.dae with max_ray_depth set to 2
CBspheres.dae with max_ray_depth set to 3
CBspheres.dae with max_ray_depth set to 4
CBspheres.dae with max_ray_depth set to 5
CBspheres.dae with max_ray_depth set to 100
CBspheres.dae rendered with the same settings as the staff reference

Part 2: Microfacet Materials

We can see that as alpha increases, the surface becomes rougher: the sharp, chrome-like mirror reflections fade, and the reflections become more diffuse while remaining metallic. Interestingly, the dragon reads as properly metallic at every one of these alpha levels, yet each change in alpha produces a vastly different look.
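The roughness parameter alpha enters through the normal distribution function, which controls how tightly the microfacet normals cluster around the macro surface normal. As a reference point, here is a sketch of the Beckmann NDF evaluated in the local frame; the function name is illustrative, not the project's actual code.

    #include <cmath>

    // Beckmann normal distribution function, evaluated in the local frame
    // where the half vector h has z-component cos(theta_h). Small alpha
    // concentrates microfacet normals near the macro normal (mirror-like
    // highlights); large alpha spreads them out (rougher, softer look).
    double beckmann_D(double cos_theta_h, double alpha) {
        constexpr double kPi = 3.14159265358979323846;
        double cos2 = cos_theta_h * cos_theta_h;
        double tan2 = (1.0 - cos2) / cos2;   // tan^2(theta_h)
        double a2 = alpha * alpha;
        return std::exp(-tan2 / a2) / (kPi * a2 * cos2 * cos2);
    }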

The differences between cosine hemisphere sampling and my importance sampling method are very difficult to see. At most, the importance-sampled render looks a tiny bit harsher, but the lack of clear differences is expected: as the spec mentions, both converge to the correct result. My importance sampling method is simply more efficient, and thus faster, at rendering.
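The efficiency gain comes from drawing the microfacet half vector from a distribution proportional to the Beckmann NDF instead of from a cosine-weighted hemisphere, so samples land where the BRDF is large. Here is a sketch of that sampling step, assuming the standard inversion formulas for the Beckmann distribution (names are illustrative, not the project's code).

    #include <cmath>

    struct Vector3 { double x, y, z; };

    // Sample a half vector h whose spherical angles follow the Beckmann
    // distribution for roughness alpha, via the inversion method.
    // u1 and u2 are independent uniform random numbers in [0, 1).
    Vector3 sample_beckmann_half_vector(double alpha, double u1, double u2) {
        constexpr double kPi = 3.14159265358979323846;
        double theta_h = std::atan(std::sqrt(-alpha * alpha * std::log(1.0 - u1)));
        double phi_h   = 2.0 * kPi * u2;
        double sin_t = std::sin(theta_h), cos_t = std::cos(theta_h);
        return Vector3{sin_t * std::cos(phi_h), sin_t * std::sin(phi_h), cos_t};
    }

The incoming direction is then the outgoing direction reflected about h, and the half-vector PDF is converted to a solid-angle PDF by dividing by 4·(wi·h), which keeps the estimator unbiased while reducing variance.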

In the last image, we see a very speckled, blue, water-like dragon. For this render I treated liquid water (H2O) as a conductor, using the eta and k values given on the website referenced in the spec.
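Those eta and k values feed the air-to-conductor Fresnel term, evaluated once per color channel. Below is a minimal sketch using the usual approximate formulas; the function name is an illustrative assumption.

    #include <cmath>

    // Approximate air-to-conductor Fresnel reflectance for one color channel.
    // eta and k are the real and imaginary parts of the conductor's index of
    // refraction at that wavelength; cos_i is the cosine of the incident angle.
    double fresnel_conductor(double cos_i, double eta, double k) {
        double cos2 = cos_i * cos_i;
        double ek = eta * eta + k * k;
        double rs = (ek - 2.0 * eta * cos_i + cos2) / (ek + 2.0 * eta * cos_i + cos2);
        double rp = (ek * cos2 - 2.0 * eta * cos_i + 1.0) / (ek * cos2 + 2.0 * eta * cos_i + 1.0);
        return 0.5 * (rs + rp);  // average of s- and p-polarized reflectance
    }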

alpha = 0.005
alpha = 0.05
alpha = 0.25
alpha = 0.5
cosine hemisphere sampling
my importance sampling
Dragon with water as the conductor material

Part 3: Environment Light

Environment lighting models the scene as being surrounded by an infinitely far away light source: incident radiance from every direction is looked up in an environment map. We estimate this lighting either by sampling directions uniformly over the sphere or by importance sampling the environment map so that brighter regions are sampled more often.
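For uniform sampling, each sampled direction is converted to spherical coordinates and then to texel coordinates in the equirectangular environment map; importance sampling instead builds marginal and conditional distributions over the map's luminance so that bright texels are chosen more often. Here is a small sketch of the direction-to-texel conversion, with assumed conventions (y up, phi wrapping around the y axis); renderers differ in the exact convention they use.

    #include <cmath>

    struct Vector3 { double x, y, z; };

    // Map a unit world-space direction to (u, v) in [0, 1)^2 for an
    // equirectangular (lat-long) environment map lookup.
    void dir_to_uv(const Vector3& d, double* u, double* v) {
        constexpr double kPi = 3.14159265358979323846;
        double y = std::fmax(-1.0, std::fmin(1.0, d.y));
        double theta = std::acos(y);            // polar angle in [0, pi]
        double phi   = std::atan2(d.z, d.x);    // azimuth in (-pi, pi]
        if (phi < 0.0) phi += 2.0 * kPi;
        *u = phi / (2.0 * kPi);
        *v = theta / kPi;
    }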

We can see that the save_probability_debug() output matches the reference in the spec; thus, the sampling distributions appear to be working. Onwards to comparisons!

If we inspect each bunny closely, we can see that the uniformly sampled renders are noisier than their importance-sampled counterparts for both bunny_unlit and the copper microfacet bunny. This intuitively makes sense and shows that importance sampling is indeed better, and worth all of the math necessary to implement it.

Output of the probability debug function
bunny_unlit.dae with uniform sampling
bunny_unlit.dae with importance sampling
microfacet_cu with uniform sampling
microfacet_cu with importance sampling

Part 4: Depth of Field

A pinhole camera model keeps everything in the photo in focus. This works differently from human eyes and real cameras, which we will model as lenses with finite apertures. A real-life example on a DSLR: shooting with a wide-aperture prime lens produces a very shallow depth of field. That is the effect the thin lens implemented in this part of the project simulates.
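Concretely, instead of shooting every camera ray through a single pinhole, the thin-lens model samples a point on the lens disk and bends the ray so that it still passes through the corresponding point on the plane of focus; only geometry on that plane stays sharp. Below is a hedged sketch in camera space, with assumed names and conventions (camera looking down -z); it is not the project's actual camera code.

    #include <cmath>

    struct Vector3 { double x, y, z; };

    // Generate a camera-space thin-lens ray.
    // pinhole_dir:    normalized direction the pinhole camera would have used
    // lens_radius:    radius of the lens aperture
    // focal_distance: distance to the plane of focus along -z
    // u1, u2:         independent uniform random numbers in [0, 1)
    void thin_lens_ray(const Vector3& pinhole_dir, double lens_radius,
                       double focal_distance, double u1, double u2,
                       Vector3* origin, Vector3* direction) {
        constexpr double kPi = 3.14159265358979323846;

        // Sample a point uniformly on the lens disk (the z = 0 plane).
        double r = lens_radius * std::sqrt(u1);
        double phi = 2.0 * kPi * u2;
        *origin = Vector3{r * std::cos(phi), r * std::sin(phi), 0.0};

        // Point where the original pinhole ray hits the plane of focus.
        double t = focal_distance / -pinhole_dir.z;
        Vector3 p_focus{t * pinhole_dir.x, t * pinhole_dir.y, -focal_distance};

        // The new ray goes from the lens sample through that focus point.
        Vector3 d{p_focus.x - origin->x, p_focus.y - origin->y, p_focus.z - origin->z};
        double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        *direction = Vector3{d.x / len, d.y / len, d.z / len};
    }

Widening the aperture (a larger lens_radius) spreads the sampled ray origins further apart, so out-of-focus points blur more, which is exactly the trend visible in the aperture comparison below.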

In the left column of images, we can observe a "focus stack" at varying focal distances. On the right, a comparison of different aperture sizes.

focal distance = 4.3
aperture = 0.17
focal distance = 4.5
aperture = 0.25
focal distance = 4.7
aperture = 0.35
focal distance = 4.9
aperture = 0.7