Correct modelling of lighting is one of the most computationally difficult problems in graphics, and with Maxwell, our objective was to make an enormous leap forward in the GPU's capability to perform near-photoreal lighting calculations in real time.
In the real world, all objects are lit by a combination of direct light (photons that travel straight from a light source to an object) and indirect light (photons that leave the light source, bounce off one object, and then strike a second object, indirectly illuminating it). “Global illumination” (GI) is the term for lighting systems that model this effect. Without indirect lighting, scenes can look harsh and artificial. However, while direct lighting is fairly simple to compute, indirect lighting computations are highly complex and computationally heavy.
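To see why direct light is the easy half, here is a minimal sketch of the classic Lambertian direct-lighting calculation: the light received scales with the cosine of the angle between the surface normal and the direction to the light. All names here are illustrative, not taken from any real renderer.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert_direct(normal, to_light, light_color, albedo):
    """Direct lighting at a surface point: reflected color scales with
    the cosine term max(0, N . L) between normal and light direction."""
    n = normalize(normal)
    l = normalize(to_light)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(a * c * ndotl for a, c in zip(albedo, light_color))

# A surface facing the light head-on receives full intensity;
# a surface facing away from it receives nothing.
facing = lambert_direct((0, 1, 0), (0, 1, 0), (1.0, 1.0, 1.0), (0.8, 0.2, 0.2))
away = lambert_direct((0, 1, 0), (0, -1, 0), (1.0, 1.0, 1.0), (0.8, 0.2, 0.2))
```

This single dot product per light is cheap; the hard part of GI is that indirect light requires following photons through additional bounces.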
Scene rendered with direct lighting only.
The same scene with GI enabled; note the indirect light and specular reflections on the floor.
Because it’s a computationally expensive lighting technique (particularly in highly detailed scenes), GI has primarily been used to render complex CG scenes for movies on offline GPU rendering farms. While some forms of GI have been used in many of today’s most popular games, their implementations have relied on pre-computed lighting. These “prebaked” techniques are used for performance reasons; however, they require additional artwork, as the desired lighting effects must be computed beforehand. Because prebaked lighting is not dynamic, it’s often difficult or impossible to update the indirect lighting when in-game changes occur, for instance when an additional light source is added or something in the scene moves or is destroyed. Prebaked indirect lighting models the static objects of the scene, but doesn’t properly apply to animated characters or moving objects.
In 2011, NVIDIA engineers developed and demonstrated an innovative new approach to computing a fast, approximate form of global illumination dynamically in real time on the GPU. This GI technology uses a voxel grid to store scene and lighting information, and a novel voxel cone tracing process to gather indirect lighting from the voxel grid. NVIDIA’s Cyril Crassin describes the technique in his paper on the topic, and a video from GTC 2012 is available here. Epic’s ‘Elemental’ Unreal Engine 4 tech demo from 2012 used a similar technique.
Since that time, NVIDIA has been working on the next generation of this technology—VXGI—that combines new software algorithms and special hardware acceleration in the Maxwell architecture.
The Cornell Box
In 1984, researchers at Cornell University developed a simple scene to use as an experiment for rendering graphics. The idea was to keep the scene simple so their computer-generated images could easily be compared to photographs.
The above image was generated using our state-of-the-art Iray rendering technology, which uses the power of many GPUs running in parallel in the cloud to photorealistically render in near real time. This image is very close to what this Cornell box would look like in real life.
VXGI’s goal, however, is to run in real time, and full ray tracing of the scene is too computationally intensive, so approximations are required.
The image above shows an opacity voxel view of the same scene. Each cube is a coarse representation of the underlying geometry, which we can use to speed up lighting calculations.
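Conceptually, building the opacity grid means binning the scene's surfaces into a uniform 3D grid and marking occupied cells. Real VXGI voxelizes triangles on the GPU into a 3D texture; the sketch below is a deliberately simplified CPU version that bins surface sample points, with made-up names throughout.

```python
def voxelize(points, grid_size, scene_min, scene_max):
    """Mark each grid cell that contains at least one surface sample as
    fully opaque. Returns a sparse dict: voxel index -> opacity."""
    opacity = {}
    extent = tuple(mx - mn for mn, mx in zip(scene_min, scene_max))
    for p in points:
        idx = tuple(
            min(grid_size - 1, int((c - mn) / e * grid_size))
            for c, mn, e in zip(p, scene_min, extent)
        )
        opacity[idx] = 1.0
    return opacity

# Two nearby surface samples fall into the same cell of an 8x8x8 grid,
# producing a single opaque voxel:
grid = voxelize([(0.1, 0.1, 0.1), (0.12, 0.11, 0.1)], 8, (0, 0, 0), (1, 1, 1))
```

The coarseness is the point: lighting queries against a small grid of cubes are far cheaper than queries against the full triangle mesh.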
The emissive voxel view is where the magic happens. Imagine that each cube (voxel) is being directly lit by whichever direct lights are in the scene. We can correctly calculate the color and intensity of the light hitting each voxel using existing, very fast direct lighting techniques. What’s new, though, is that in the next stage these illuminated voxels will act as sources of light, generating indirect lighting.
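A minimal sketch of this light-injection step, assuming each voxel carries a representative position, normal, and albedo (the function and data layout here are illustrative, not VXGI's actual API):

```python
def inject_direct_light(voxels, light_pos, light_color):
    """voxels: dict mapping voxel index -> (center, normal, albedo).
    Shades each voxel with simple Lambertian direct lighting plus
    inverse-square falloff, and stores the result as the radiance
    that voxel will emit in the indirect-lighting pass."""
    emissive = {}
    for idx, (center, normal, albedo) in voxels.items():
        to_light = tuple(lp - c for lp, c in zip(light_pos, center))
        dist2 = sum(c * c for c in to_light) or 1.0  # avoid divide-by-zero
        inv_len = dist2 ** -0.5
        ndotl = max(0.0, sum(n * t * inv_len for n, t in zip(normal, to_light)))
        emissive[idx] = tuple(a * lc * ndotl / dist2
                              for a, lc in zip(albedo, light_color))
    return emissive

# A red voxel at the origin, facing up toward a white light two units above,
# stores a quarter of the light's intensity (cosine term 1, distance^2 = 4):
voxels = {(0, 0, 0): ((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0))}
emissive = inject_direct_light(voxels, (0.0, 2.0, 0.0), (1.0, 1.0, 1.0))
```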
The colors in this image can be a bit confusing. Start by focusing on the walls. The wall that was red now has a pale green glow, and the green wall has a pale red glow. That is caused by the fact that each wall is now taking on some of the color emitted from the opposite side of the room. The red wall is also gently illuminating the ball nearest it, and so on. Effectively, we are viewing the light reflected from the surfaces hit by the original direct light sources.
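The gathering step can be sketched as a single cone trace: march away from a surface through the voxel grid, accumulating the radiance stored in emissive voxels while attenuating by opacity, front to back. Real VXGI traces a handful of wide cones per pixel against a mipmapped 3D texture, sampling coarser mip levels as the cone widens; this single-cone, single-resolution version (with invented names) only illustrates the accumulation loop.

```python
def cone_trace(emissive, opacity, origin, direction, grid_size, steps=16):
    """March through the grid one voxel-sized step at a time.
    `direction` is the per-step offset in voxel units."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light that still reaches the origin
    pos = list(origin)
    for _ in range(steps):
        pos = [p + d for p, d in zip(pos, direction)]
        idx = tuple(int(p) for p in pos)
        if any(i < 0 or i >= grid_size for i in idx):
            break  # left the grid
        alpha = opacity.get(idx, 0.0)
        e = emissive.get(idx, (0.0, 0.0, 0.0))
        for k in range(3):
            color[k] += transmittance * alpha * e[k]
        transmittance *= 1.0 - alpha  # opaque voxels occlude what lies behind
        if transmittance <= 0.0:
            break
    return tuple(color)

# Tracing toward a fully opaque red emissive voxel picks up its radiance:
indirect = cone_trace({(3, 0, 0): (1.0, 0.0, 0.0)}, {(3, 0, 0): 1.0},
                      (0.5, 0.5, 0.5), (1, 0, 0), 8)
```

Because the walls and balls all emit their direct-lit color this way, each surface ends up tinted by whatever its cones can see, which is exactly the color bleeding described above.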
When we add all of the light together, we can generate incredibly realistic scenes. The key point, though, is that by using VXGI we can do all of this in real time.
The final result is shown in our Moon Landing tech demo, which debunks decades-old conspiracy theories: