Ray Tracing in C++

Glenn Wysen - Spring 2020

spheres
Fig.1 - A simple scene built with ray tracing.

Overview

The main goal of this project was to understand how light propagates through a scene and how to ray-trace that scene to compute illuminance. I started by implementing a basic ray generation function along with an intersection algorithm that determines whether a given ray hits a given object. This works, but it is very computationally costly, especially when a scene contains hundreds of thousands or even millions of objects that must be tested against each ray. That cost is greatly reduced by my implementation of bounding volume hierarchies (BVHs): a BVH lets us discard objects a ray can never interact with, resulting in a drastic speedup. The next thing to implement was lighting, which I did with both uniform random sampling and importance sampling of light sources for each pixel. Lastly, I implemented adaptive sampling, so that areas needing fewer samples per pixel save computation by being sampled only a small number of times compared to more complex parts of the image.

Ray Generation and Intersection

My ray generation algorithm randomly samples a given number of rays through each pixel. Each ray is generated by transforming image space (x, y coordinates) into camera space (x, y, z = -1 coordinates) and then into world space (unconstrained x, y, z coordinates), where I can evaluate the global illumination at that point. After doing this for several rays through different parts of the pixel, I average the samples and write the result to the buffer.
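
As a concrete illustration, here is a minimal sketch of that camera-ray generation step. The Vec3, Ray, and Camera types and the field-of-view parameters are hypothetical stand-ins for the project's actual classes, not its real code.

    #include <cmath>

    struct Vec3 {
        double x, y, z;
        Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    };

    struct Ray { Vec3 origin, dir; };

    struct Camera {
        Vec3 pos;           // camera position in world space
        Vec3 c2w[3];        // columns of the camera-to-world rotation
        double hFov, vFov;  // horizontal/vertical field of view (radians)

        // (x, y) are normalized image coordinates in [0, 1]^2,
        // already jittered within the pixel by the caller.
        Ray generateRay(double x, double y) const {
            // Point on the sensor plane at z = -1 in camera space.
            Vec3 d{ std::tan(hFov * 0.5) * (2.0 * x - 1.0),
                    std::tan(vFov * 0.5) * (2.0 * y - 1.0),
                    -1.0 };
            // Rotate into world space: linear combination of the columns.
            Vec3 dir = c2w[0] * d.x + c2w[1] * d.y + c2w[2] * d.z;
            return {pos, dir};
        }
    };

The caller averages the radiance returned along several such rays per pixel before writing to the buffer.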

The triangle intersection function I wrote first finds the plane the triangle lies on using the cross product of two edges. From there it is easy to find where the ray intersects the plane by setting the two equations equal and solving for the time variable t. Once the coordinates of that point are known, a barycentric triangle test determines whether it lies inside the triangle being tested.
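
A sketch of that plane-then-barycentric test might look like the following, reusing the Vec3 and Ray types from the sketch above (the parallel-ray epsilon is an arbitrary choice of mine):

    #include <cmath>

    double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

    // Returns true and fills t if the ray hits triangle (p0, p1, p2).
    bool intersectTriangle(const Ray& r, const Vec3& p0, const Vec3& p1,
                           const Vec3& p2, double& t) {
        // Plane normal from the cross product of two edges.
        Vec3 n = cross(p1 - p0, p2 - p0);
        double denom = dot(n, r.dir);
        if (std::fabs(denom) < 1e-9) return false;  // ray parallel to the plane
        // Solve n . (o + t*d - p0) = 0 for the time variable t.
        t = dot(n, p0 - r.origin) / denom;
        if (t < 0.0) return false;                  // plane is behind the ray
        Vec3 p = r.origin + r.dir * t;
        // Inside-outside test: p is in the triangle iff it lies on the same
        // side of all three edges (all barycentric coordinates >= 0).
        return dot(n, cross(p1 - p0, p - p0)) >= 0 &&
               dot(n, cross(p2 - p1, p - p1)) >= 0 &&
               dot(n, cross(p0 - p2, p - p2)) >= 0;
    }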

empty room
Fig.2 - A simple empty room rendered using only triangle ray intersections. The color comes from the angle of the ray relative to the camera.
room with spheres
Fig.3 - The same room with two ray traced spheres added.

Bounding Volume Hierarchy

My BVH traversal algorithm works by first testing whether a given ray intersects the current node's bounding box. If it does and the current node is a leaf, I test each primitive in that leaf for intersection with the ray. If the current node is not a leaf, I recursively call the same algorithm on its left and right children. This process continues down the tree until every node that needs to be tested has been tested. To split the boxes during construction, I find the axis along which the current node's contents vary the most, create a plane halfway through that axis, and put everything on one side of the plane in the left child and everything on the other side in the right child. One interesting bug I ran into: when a leaf node contained only a single axis-aligned triangle, the bounding box intersection function would never return a hit because the "bounding box" of the triangle was actually a flat 2D rectangle. I was able to fix this by adding a small delta to the maximum corner's x, y, and z values.
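
In outline, the traversal and the flat-box fix look something like this. The BBox, Primitive, and BVHNode types are hypothetical stand-ins (reusing Vec3 and Ray from the earlier sketches), and the delta is the kind of small constant described above:

    #include <algorithm>
    #include <limits>
    #include <vector>

    struct BBox {
        Vec3 lo, hi;

        // Pad the max corner so a flat (axis-aligned) box still registers hits.
        void pad(double delta = 1e-6) { hi = hi + Vec3{delta, delta, delta}; }

        // Standard slab test against all three axes.
        bool intersect(const Ray& r) const {
            double tmin = 0.0, tmax = std::numeric_limits<double>::infinity();
            auto slab = [&](double o, double d, double mn, double mx) {
                double t0 = (mn - o) / d, t1 = (mx - o) / d;
                if (t0 > t1) std::swap(t0, t1);
                tmin = std::max(tmin, t0);
                tmax = std::min(tmax, t1);
            };
            slab(r.origin.x, r.dir.x, lo.x, hi.x);
            slab(r.origin.y, r.dir.y, lo.y, hi.y);
            slab(r.origin.z, r.dir.z, lo.z, hi.z);
            return tmin <= tmax;
        }
    };

    struct Primitive { virtual bool intersect(const Ray& r) const = 0; };

    struct BVHNode {
        BBox box;
        BVHNode *left = nullptr, *right = nullptr;
        std::vector<const Primitive*> prims;   // non-empty only at leaves
        bool isLeaf() const { return left == nullptr && right == nullptr; }
    };

    bool intersectBVH(const Ray& r, const BVHNode* node) {
        if (!node->box.intersect(r)) return false;  // prune the whole subtree
        if (node->isLeaf()) {
            bool hit = false;
            for (const Primitive* p : node->prims)  // test each primitive
                hit = p->intersect(r) || hit;
            return hit;
        }
        // Interior node: recurse into both children.
        bool hitL = intersectBVH(r, node->left);
        bool hitR = intersectBVH(r, node->right);
        return hitL || hitR;
    }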

Timing-wise, Fig. 4 took 98.4 seconds to render without any BVH acceleration structure; after the BVH was implemented, it took only 0.2369 seconds. Fig. 5 took so long that I stopped the program after ten minutes rather than wait before implementing BVH acceleration, but once the acceleration was in place it rendered in only 2.3352 seconds.

cow
Fig.4 - A cow mesh that was rendered in only 0.2 seconds using BVH acceleration.
lucy
Fig.5 - A complex mesh with hundreds of thousands of triangles that was rendered in under 5 seconds.

Direct Illumination

The direct lighting function I wrote has two implementations. The first method shoots a ray in a random direction within a uniform hemisphere around the point being sampled and records the light coming in from that direction; I then average all the samples to get the final spectrum returned for that point. The result of this implementation can be seen in Fig.6. The other form of sampling shoots rays toward the light sources in the scene instead of over a random hemisphere. This technique leads to less noise (illustrated in Fig.7) because the samples are spent in directions where light actually arrives from.
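
A hedged sketch of the light-sampling estimator follows; the Light interface and the occlusion callback are hypothetical names of mine, and the BRDF factor is omitted for brevity (it builds on the Vec3/Ray/dot helpers from the earlier sketches):

    #include <functional>

    struct Light {
        // Sample a direction wi toward the light from point p; fills the
        // distance to the light and the pdf of the sample, returns radiance.
        virtual Vec3 sampleL(const Vec3& p, Vec3* wi,
                             double* dist, double* pdf) const = 0;
        virtual ~Light() = default;
    };

    Vec3 directLight(const Vec3& p, const Vec3& n, const Light& light,
                     const std::function<bool(const Ray&, double)>& occluded,
                     int nSamples) {
        Vec3 sum{0.0, 0.0, 0.0};
        for (int i = 0; i < nSamples; ++i) {
            Vec3 wi; double dist, pdf;
            Vec3 Li = light.sampleL(p, &wi, &dist, &pdf);
            double cosTheta = dot(wi, n);
            if (cosTheta <= 0.0 || pdf <= 0.0) continue; // light below horizon
            Ray shadow{p + wi * 1e-4, wi};               // offset avoids self-hit
            if (occluded(shadow, dist)) continue;        // point is in shadow
            // Monte Carlo estimate: Li * cos(theta) / pdf for this sample.
            sum = sum + Li * (cosTheta / pdf);
        }
        return sum * (1.0 / nSamples);
    }

Sampling the light directly concentrates every sample on directions that can actually contribute, which is why Fig.7 is so much cleaner than Fig.6 at the same sample count.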

spheres hemi sampled
Fig.6 - Scene rendered using random hemisphere sampling.
spheres light sampled
Fig.7 - The same scene now using light source sampling.

The next set of four figures shows the difference in noise levels in soft shadows when rendering with 1, 4, 16, and 64 light rays respectively and only one sample per pixel. It can clearly be seen how the shadows in the image go from very noisy in Fig.8 to smooth in Fig.11 as the number of light rays increases.

spheres 1
Fig.8 - Scene rendered using 1 light ray.
spheres 4
Fig.9 - Scene rendered using 4 light rays.
spheres 16
Fig.10 - Scene rendered using 16 light rays.
spheres 64
Fig.11 - Scene rendered using 64 light rays.

Global Illumination

My indirect lighting algorithm works by first calculating the one-bounce illumination at the given point. Then, if the ray hasn't exceeded a maximum depth parameter, a recursive contribution is added with a probability determined by Russian roulette. Inside the recursion I trace a new ray out from the intersection point in a random direction (calculated earlier when getting the one-bounce illumination) but with the depth reduced by one; the depth reduction ensures that a ray will not bounce around the scene forever. That ray (and its intersection with the next object in the scene) is then passed back into the radiance function, and the resulting radiance is added to the one-bounce radiance from earlier.
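
A sketch of that recursion with Russian roulette is below. Helper names like intersectScene, oneBounceRadiance, and sampleHemisphere are hypothetical (declared but not defined here), the survival probability is an arbitrary example value, and the BSDF/pdf factors of the full estimator are elided for brevity:

    #include <random>

    struct Intersection { Vec3 p, n; };

    // Assumed helpers from the rest of the renderer (declarations only).
    bool intersectScene(const Ray& r, Intersection* isect);
    Vec3 oneBounceRadiance(const Intersection& isect);
    Vec3 sampleHemisphere(const Vec3& n);

    bool coinFlip(double p) {
        static std::mt19937 rng{std::random_device{}()};
        return std::uniform_real_distribution<double>(0.0, 1.0)(rng) < p;
    }

    Vec3 radiance(const Ray& r, int depth) {
        Intersection isect;
        if (!intersectScene(r, &isect)) return Vec3{0.0, 0.0, 0.0};

        Vec3 L = oneBounceRadiance(isect);        // direct light at the hit
        const double continueProb = 0.7;          // Russian-roulette survival
        if (depth > 0 && coinFlip(continueProb)) {
            Vec3 wi = sampleHemisphere(isect.n);  // random bounce direction
            Ray bounce{isect.p + wi * 1e-4, wi};  // offset avoids self-hit
            // Divide by the survival probability to keep the estimate unbiased;
            // the reduced depth guarantees the recursion terminates.
            L = L + radiance(bounce, depth - 1) * (1.0 / continueProb);
        }
        return L;
    }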

direct lighting
Fig.12 - Scene rendered using only direct illumination.
indirect lighting
Fig.13 - Scene rendered using only indirect illumination.
global lighting
Fig.14 - Scene rendered using global illumination by combining direct and indirect.

Adaptive Sampling

Adaptive sampling is implemented in the general pixel tracing function by keeping running totals of the illuminance and the squared illuminance across samples. A check is then made to see whether the illuminance has converged by testing whether its variation falls below a certain threshold (in this project I used 0.05 times the mean of the samples). Once this condition is met, the program stops sampling and moves on to the next pixel.
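
One plausible form of that convergence check, built from the two running sums, is sketched below. The 95% confidence interval via the 1.96 z-score is my assumption; the write-up only specifies the 0.05 * mean threshold:

    #include <cmath>

    // s1 = sum of illuminance samples, s2 = sum of squared samples, n = count.
    bool hasConverged(double s1, double s2, int n, double maxTol = 0.05) {
        if (n < 2) return false;
        double mean = s1 / n;
        double var  = (s2 - s1 * s1 / n) / (n - 1); // unbiased sample variance
        double I    = 1.96 * std::sqrt(var / n);    // 95% confidence half-width
        return I <= maxTol * mean;                  // stop once estimate is tight
    }

In the pixel loop, each new sample x would update s1 += x and s2 += x * x, with the check run every few samples so the early-out itself stays cheap.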

The sample rate image (Fig.15) shows how adaptive sampling varies depending on which part of the image is being rendered. Red represents a high sample rate, whereas blue indicates very few samples were taken for that pixel. The rates are computed as the ratio between the actual number of samples taken at a given pixel and the global maximum number of samples allowed.

adaptive sampling
Fig.15 - The sample rate image for the bunny scene used throughout the project.