For the layman – How 3D Rendering Works. Part 8: Rendering

[ about a 20 min. read ]
Part 1: 3D models
Part 2: Materials & Shaders
Part 3: Textures
Part 4: Projection mapping & UV mapping
Part 5: Camera
Part 6: Composition
Part 7: Lighting

In the previous article we discussed lighting, specifically the techniques and methods you have available to light your scene. But what exactly is light in a 3D scene? How does 3D render software actually go from a lit scene to a final image? That process is what we call rendering.
What rendering precisely is, and which technique it uses to generate an image, is what this article is about.

1. What is Rendering?

Rendering in general is the process of generating an image by means of a computer program. The generated image is called the render. Rendering is used in many different applications, like video games, video editing, simulations, and architecture & product visualizations. The rendering process, and its outcome, is not exactly the same for each application.

In this article we focus on rendering for the purpose of generating a photorealistic architecture or product visualization. These are so-called pre-rendered applications, which basically means: not real-time (real-time rendering is what video games do, for example). The main technique used for generating such images is called ray tracing. There are other methods out there, but ray tracing creates the most photorealistic result, and it is also the technique used by the software I use for my renders.

2. What is Ray Tracing?

Ray tracing is a bit technical, but bear with me. Ray tracing is a rendering algorithm that calculates how a 3D scene is lit. It does this by shooting view rays from a camera through each pixel of the image that the camera will be rendering. For each view ray it is determined whether it hits an object or not. When it hits an object, the view ray can either reflect or refract (depending on the object's material and shader attributes), or trace its path to the light sources in the scene to determine whether that point is in shadow or not. This process roughly looks like the images below.

Illuminated: A view ray is shot from the camera through a pixel of an image. It hits the sphere. From the sphere a shadow ray is traced to a light source to determine if the sphere is in shadow or not.

In Shadow: A view ray is shot from the camera through a different pixel of the image. It hits the ground. From the ground a shadow ray is traced to a light source to determine if the ground is in shadow or not. The shadow ray hits the sphere on its way to the light source, so the ground is in shadow.

Reflection: A view ray is shot from the camera through a different pixel of the image. It hits the surface of the sphere at a certain angle. The sphere has a reflective material. The view ray is reflected off the surface of the material and hits a different object. From that object shadow rays are again traced to a light source to see if the object is in shadow or not. The lighting value from that point is traced back to the pixel in the image to influence the final color of that pixel.

Refraction: Consider the same view ray from the reflection example. It hits the surface of the sphere. The sphere has a refractive material, like glass. The view ray is refracted, passes through the glass sphere and hits another sphere. From that object shadow rays are again traced to a light source to see if the object is in shadow or not. The lighting value from that point is traced back to the pixel in the image to influence the final color of that pixel.
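For the curious, the reflection and refraction directions in these two examples come down to a little vector math. Below is a minimal sketch in Python of how a renderer might compute them; the function names and arguments are just illustrative, not taken from any particular render software:

    import math

    def reflect(d, n):
        # Mirror the incoming direction d about the surface normal n
        # (both unit vectors): r = d - 2 * (d . n) * n
        dot = sum(a * b for a, b in zip(d, n))
        return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

    def refract(d, n, ior_ratio):
        # Bend the incoming direction d through the surface using Snell's law.
        # ior_ratio is n1/n2, e.g. roughly 1/1.5 when entering glass from air.
        cos_i = -sum(a * b for a, b in zip(d, n))
        sin2_t = ior_ratio ** 2 * (1.0 - cos_i ** 2)
        if sin2_t > 1.0:
            return None  # total internal reflection: no refracted ray exists
        cos_t = math.sqrt(1.0 - sin2_t)
        return tuple(ior_ratio * a + (ior_ratio * cos_i - cos_t) * b
                     for a, b in zip(d, n))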

How often a view ray bounces, reflects or refracts in a scene before the algorithm stops depends on predetermined values that we can set in the software's render settings.

The final color of the pixel is a combination of the lighting values that are returned for the shadow, reflection and refraction rays. These all depend on the material and shader attributes of the objects in the scene, and the lighting attributes.

This ray tracing process is done for each pixel to create an image. Now, in modern rendering software there are more aspects than just shadow, reflection and refraction rays, but the above process describes the basics of ray tracing.
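To make this a bit more concrete, here is a heavily simplified ray tracer sketch in Python that follows the process above: one sphere, one light, one view ray per pixel and a shading step at each hit. All the names and numbers are made up for illustration; real render software is of course far more involved:

    import math

    WIDTH, HEIGHT = 80, 60                      # a tiny image: one view ray per pixel
    SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0  # sphere center and radius
    LIGHT = (5.0, 5.0, 0.0)                     # a single point light

    def normalize(v):
        length = math.sqrt(sum(a * a for a in v))
        return tuple(a / length for a in v)

    def hit_sphere(origin, direction):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
        oc = tuple(o - c for o, c in zip(origin, SPHERE_C))
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - SPHERE_R ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None          # the view ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None

    image = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            # Shoot a view ray from the camera (at the origin) through this pixel.
            u = (x / WIDTH - 0.5) * 2.0
            v = (0.5 - y / HEIGHT) * 2.0
            direction = normalize((u, v, -1.0))
            t = hit_sphere((0.0, 0.0, 0.0), direction)
            if t is None:
                row.append(0.0)  # no hit: this pixel shows the background
                continue
            point = tuple(t * d for d in direction)
            normal = normalize(tuple(p - c for p, c in zip(point, SPHERE_C)))
            # Shadow ray: in a real scene we would test whether any other object
            # blocks the path from this point to the light. With only one sphere
            # there is nothing to block it, so we just shade by the angle between
            # the surface normal and the direction to the light.
            to_light = normalize(tuple(l - p for l, p in zip(LIGHT, point)))
            row.append(max(0.0, sum(n * l for n, l in zip(normal, to_light))))
        image.append(row)

Even this toy example shows the basic loop: a view ray per pixel, an intersection test, and a lighting calculation at the hit point.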

3. Samples

The number of samples is the number of rays calculated per pixel. The more samples, the better and smoother the final image. But more samples also means longer render times. The comparisons below show how image quality improves as the samples increase from 1 to 10, and from 10 to 100.
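In code, sampling simply means shooting several slightly offset (jittered) view rays per pixel and averaging the results. A rough sketch, where trace_ray stands in for a full ray tracer like the one sketched earlier:

    import random

    def render_pixel(x, y, samples, trace_ray):
        # Average `samples` slightly jittered view rays for pixel (x, y).
        total = 0.0
        for _ in range(samples):
            # Offset each ray randomly within the pixel to smooth out noise.
            total += trace_ray(x + random.random(), y + random.random())
        return total / samples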

4. Render time

One of the most important things to consider with rendering is the total time it takes to render an image, also called the render time. The render time is influenced by a lot of things, among others:

  • Number of polygons in a scene
  • Number of lights in a scene
  • Material & shader properties, like reflectivity and refractivity
  • Number of view ray bounces allowed
  • Global illumination ray bounces
  • The shadow quality
  • Texture sizes
  • Image resolution (total number of pixels)

Render times for a single image can normally range from a few seconds to several hours. In some modern animated movies, render times for a single frame can even last up to 24 hours or more! For a 90-minute feature film at 24 frames per second, you have 129,600 frames to render. At 24 hours per frame, that means 355 years of render time!
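That estimate is easy to check with a few lines of arithmetic:

    frames = 90 * 60 * 24    # 90 minutes at 24 frames per second = 129,600 frames
    hours = frames * 24      # 24 hours of render time per frame
    years = hours / 24 / 365
    print(frames, round(years))  # 129600 frames, ~355 years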

Now, these huge Hollywood animation studios have large so-called render farms at their disposal: basically hundreds of computers linked together to provide huge amounts of processing power. Freelancers and smaller studios don't have those resources at the tips of their fingers. So you always need to stay aware of your render times to be able to produce a piece of artwork in a reasonable time.

5. Making the actual render / Render passes

So now we know how ray trace rendering works. We can tweak all the relevant quality settings, like image resolution, number of ray bounces, etc., and we can perform the actual render. The software starts rendering and we end up with a final render. But render software does more than simply output a final color image. It renders all the attributes necessary to build up the final image separately, and you can output all these separate channels, and more, if you want.

These separate channel outputs are often called render passes. They represent different information channels of the render; each pass only holds the specific data of that pass. For example, below you can see the color, reflectivity, refractivity and shadow passes for the binoculars render:

Basic color

Reflection pass

Refraction pass

Shadow pass

These channels, amongst others produced during rendering, are combined by the render software into the final unedited render of the binoculars that you see below:
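To give an idea of how such passes relate to the final image, here is a simplified compositing sketch in Python (using NumPy). Real render software combines its passes with more sophisticated formulas; the naive combination below, with random arrays standing in for real pass images, is only meant to show the idea of per-pixel channels being merged:

    import numpy as np

    # Random arrays standing in for real pass images, shape (height, width, 3).
    color = np.random.rand(4, 4, 3)
    reflection = np.random.rand(4, 4, 3)
    refraction = np.random.rand(4, 4, 3)
    shadow = np.random.rand(4, 4, 3)   # 1 = fully lit, 0 = fully in shadow

    # A very naive combine: darken the base color by the shadow pass, then
    # add the reflected and refracted contributions on top.
    final = np.clip(color * shadow + reflection + refraction, 0.0, 1.0)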

But you can also output other information channels. These can be very useful during further editing of your final render in post-processing software. You can see some examples below:

Object IDs / Clown pass

Depth pass

Ambient Occlusion pass

Surface Normals pass

I am not going into detail on what these last four passes are and how to use them; that will be something for future articles. For now they just serve to demonstrate other render passes that can be output during rendering. Each pass can possibly serve a valuable purpose in post-processing, which will be the topic of my next and final article in this "For the layman" series.

As always, let me know if you have any questions or comments below.
