Building a Rasterizer

Glenn Wysen - Spring 2020

Fig.1 - A lion built using only triangles of different colors.

Overview

This project was primarily focused on how to translate real-world geometry onto a screen made up of square pixels. It covers basic rasterization of triangles and then goes in depth on how to make those triangles look more like their non-digital counterparts. Techniques used for this include supersampling rasterized triangles, sampling from texture space, and using mipmaps to counteract moiré. Additionally, basic transformations were used to manipulate shapes, and Barycentric coordinates were used to interpolate colors smoothly across triangles. I found it interesting how effective bilinear interpolation is at smoothing images with little computational or memory cost.

Rasterizing Single-Color Triangles

Triangles are rasterized in my project by iterating over the smallest box that contains the triangle and checking, for each point, whether it lies inside the triangle. The inside test I used is based on Christer Ericson's Real-Time Collision Detection, with some performance optimization from moving certain calculations outside of the for loops. The box is defined by taking the floor of the triangle's minimum x and y coordinates for the upper left corner and the ceiling of its maximum x and y coordinates for the lower right. This kept computation to a minimum while still covering the entire triangle.
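As a rough illustration of this approach, here is a minimal C++ sketch. It is not the project's actual code: the Color struct and fill_pixel are hypothetical stand-ins, and the inside test shown is a standard edge-function formulation rather than the book's exact derivation.

    #include <algorithm>
    #include <cmath>

    struct Color { float r, g, b; };                        // minimal stand-in for the project's color type

    void fill_pixel(int x, int y, const Color& c);          // assumed framebuffer write, not defined here

    // Signed area of the parallelogram spanned by edge (x0,y0)->(x1,y1) and the vector
    // from (x0,y0) to the sample point; its sign tells which side of the edge the point is on.
    static float edge(float x0, float y0, float x1, float y1, float px, float py) {
        return (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0);
    }

    void rasterize_triangle(float x0, float y0, float x1, float y1,
                            float x2, float y2, const Color& color) {
        // Smallest pixel-aligned box containing the triangle.
        int xmin = (int)std::floor(std::min({x0, x1, x2}));
        int xmax = (int)std::ceil (std::max({x0, x1, x2}));
        int ymin = (int)std::floor(std::min({y0, y1, y2}));
        int ymax = (int)std::ceil (std::max({y0, y1, y2}));

        for (int y = ymin; y < ymax; ++y) {
            for (int x = xmin; x < xmax; ++x) {
                float px = x + 0.5f, py = y + 0.5f;          // sample the pixel center
                float e0 = edge(x0, y0, x1, y1, px, py);
                float e1 = edge(x1, y1, x2, y2, px, py);
                float e2 = edge(x2, y2, x0, y0, px, py);
                // Accept either winding order by requiring all three signs to agree.
                if ((e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0))
                    fill_pixel(x, y, color);
            }
        }
    }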

Fig.2 - A collection of triangles, with a highlighted block showing an apparent break in a triangle caused by sampling only the center of each pixel.

Antialiasing Triangles

The method I used to smooth out jaggies and antialias the triangles was to supersample within each pixel at different rates. This was implemented with sampling rates of 1 (the default, i.e. no supersampling), 4, 9, and 16 samples per pixel, although the algorithm I wrote should work for any square number. Each sample (laid out as shown in Fig.3) was rasterized with the method above and stored in an intermediate supersample buffer. After collecting all the supersamples, I collapsed each pixel's supersamples to a single color value by averaging them in the supersample buffer and writing the averaged value to the framebuffer.
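A sketch of that downsampling (resolve) step is below, reusing the Color stand-in from the earlier sketch and assuming a buffer layout in which each pixel's sample_rate supersamples are stored contiguously in row-major order; the names and layout are illustrative, not the project's actual data structures.

    #include <vector>

    // Average each pixel's supersamples into a single framebuffer value.
    void resolve_to_framebuffer(const std::vector<Color>& sample_buffer,
                                std::vector<Color>& framebuffer,
                                int width, int height, int sample_rate) {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                Color avg{0.0f, 0.0f, 0.0f};
                for (int s = 0; s < sample_rate; ++s) {
                    const Color& c = sample_buffer[(y * width + x) * sample_rate + s];
                    avg.r += c.r; avg.g += c.g; avg.b += c.b;
                }
                avg.r /= sample_rate; avg.g /= sample_rate; avg.b /= sample_rate;
                framebuffer[y * width + x] = avg;            // one averaged color per pixel
            }
        }
    }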

Fig.3 - A visual representation of supersampling. The corner of the triangle is sampled at a rate of 4 samples per pixel.
Fig.4 - The original image with a sample rate of 1. This is the baseline that higher sampling rates are compared against.
Fig.5 - The original image with a sample rate of 4. Notice how the gaps in the triangle begin to disappear.
Fig.6 - The original image with a sample rate of 16. In this final image the corners of the triangle appear much sharper.

Barycentric Coordinates

Barycentric coordinates are a way to represent points inside a triangle based on how close they are to each vertex. It can be useful to think of the α, β, and γ values as weights that correspond to how much each vertex "pulls" on the sample point; with this interpretation, the three weights always sum to one. As seen in Fig.7, points close to a vertex are weighted heavily toward that vertex, resulting in red, green, and blue corners. It may also help to look at the point halfway between the red vertex and the blue vertex, where the weights are (α = 0.5, β = 0, γ = 0.5), resulting in a purple color (red and blue in equal parts, and no green).
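The weights themselves can be computed from signed-area ratios. Below is an illustrative helper (reusing the Color stand-in from the earlier sketch; the names are hypothetical) that computes (α, β, γ) for a sample point and blends the three vertex colors with those weights.

    // Barycentric weights of (px, py) with respect to triangle (x0,y0), (x1,y1), (x2,y2),
    // used to interpolate the vertex colors c0, c1, c2.
    Color interpolate_color(float px, float py,
                            float x0, float y0, float x1, float y1, float x2, float y2,
                            const Color& c0, const Color& c1, const Color& c2) {
        float denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2);
        float alpha = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / denom;
        float beta  = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / denom;
        float gamma = 1.0f - alpha - beta;                   // the three weights sum to one
        return Color{alpha * c0.r + beta * c1.r + gamma * c2.r,
                     alpha * c0.g + beta * c1.g + gamma * c2.g,
                     alpha * c0.b + beta * c1.b + gamma * c2.b};
    }

At the midpoint of the red-blue edge this returns exactly half red plus half blue, matching the purple color described above.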

Fig.7 - Example of a single large triangle with red, green, and blue vertices.
Fig.8 - Color wheel generated from Barycentric-interpolated triangles.

Level Sampling with Mipmaps for Texture Mapping

First, what are mipmaps? Mipmaps are smaller versions of an image texture representing different levels of detail. They are stored as a series of decreasing sizes, each level half the width and height of the previous one. For example, if the original texture is 256x256 px, the next mipmap level is 128x128, the one after that 64x64, and so on. Level sampling with mipmaps is a way of choosing different mipmap levels for different areas of an image. This is done by taking one-pixel steps in the x and y directions (dx and dy) in the original image and mapping those points into texture space. Once this is done, we can compare the lengths of the resulting texture-space vectors (du and dv) and take the maximum of the two. The base-2 log of that maximum gives a good approximation of which mipmap level to use for that area of the image.
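A sketch of that level calculation is below, assuming the (du, dv) differences have already been scaled to texel units; the function name and clamp range are illustrative.

    #include <algorithm>
    #include <cmath>

    // Approximate the mipmap level from the texture-space footprint of one-pixel steps.
    // (du_dx, dv_dx) is the change in (u, v) for a step in x; (du_dy, dv_dy) for a step in y.
    float mipmap_level(float du_dx, float dv_dx, float du_dy, float dv_dy, int max_level) {
        float len_x = std::sqrt(du_dx * du_dx + dv_dx * dv_dx);   // texel footprint of a step in x
        float len_y = std::sqrt(du_dy * du_dy + dv_dy * dv_dy);   // texel footprint of a step in y
        float level = std::log2(std::max(len_x, len_y));          // log2 of the larger footprint
        return std::clamp(level, 0.0f, (float)max_level);         // keep within available levels
    }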

When comparing all of the antialiasing techniques implemented in this project, several things become clear. First, supersampling is consistently effective at reducing jaggies and producing a better-looking image; however, it is very memory intensive (especially in the 16-samples-per-pixel case). Second, sampling from different mipmap levels provides a great deal of antialiasing power, especially with linear interpolation between mipmap levels. Last, bilinear pixel sampling is almost as effective as varying the mipmap level, and when combined with linear interpolation between mipmap levels (the combination is commonly called trilinear filtering) it makes for a very smooth image with strong antialiasing. Below you can see the difference between different combinations of level sampling and pixel sampling on an embroidered Cal logo on my friend Rafael's visor.
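Before the comparison images, here is a hedged sketch of bilinear and trilinear sampling. It reuses the Color stand-in from the earlier sketches; texel() is a hypothetical accessor for a single texel at a given mipmap level, and border clamping is omitted for brevity.

    #include <cmath>

    Color texel(int u, int v, int level);                    // hypothetical accessor, not defined here

    // Linear blend between two colors.
    Color lerp(float t, const Color& a, const Color& b) {
        return Color{a.r + t * (b.r - a.r), a.g + t * (b.g - a.g), a.b + t * (b.b - a.b)};
    }

    // Bilinear sampling: blend the four texels surrounding (u, v), given in texel
    // coordinates for mipmap level `level`, assuming texel centers at half-integer positions.
    Color sample_bilinear(float u, float v, int level) {
        int u0 = (int)std::floor(u - 0.5f), v0 = (int)std::floor(v - 0.5f);
        float s = (u - 0.5f) - u0;                           // horizontal blend factor
        float t = (v - 0.5f) - v0;                           // vertical blend factor
        Color top = lerp(s, texel(u0, v0,     level), texel(u0 + 1, v0,     level));
        Color bot = lerp(s, texel(u0, v0 + 1, level), texel(u0 + 1, v0 + 1, level));
        return lerp(t, top, bot);
    }

    // Trilinear filtering: bilinearly sample the two nearest mipmap levels and
    // blend them using the fractional part of the continuous level estimate.
    Color sample_trilinear(float u, float v, float level) {
        int lo = (int)std::floor(level);
        return lerp(level - lo, sample_bilinear(u, v, lo), sample_bilinear(u, v, lo + 1));
    }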

Fig.10 - Nearest-pixel sampling with level 0 mipmap.
Fig.11 - Bilinear sampling with level 0 mipmap.
Fig.12 - Nearest-pixel sampling with linearly interpolated mipmap levels.
Fig.13 - Bilinear sampling with linearly interpolated mipmap levels.