
NVIDIA's next-gen gaming graphics card, GeForce GTX 980 revealed!

By Vijay Anand - 19 Sep 2014

Voxel Global Illumination (VXGI)

** Updated on 28 September - More details and a video added to the "Debunking a myth" section.

The problem - recreating realism is expensive

These days, the realism of any game comes down to how closely an in-game scene can match what one observes in real life. While accurate structural representation of objects and view-dependent level of detail are handled fairly well dynamically and on-the-fly, it is accurate lighting and shading within the game that makes or breaks the experience.

In the real world, everything we experience and observe is lit by a combination of direct and indirect lighting. Representing the former accurately has been attainable for some time now, given the graphics horsepower we're endowed with, along with proper object representation, material properties and more.

However, capturing the effects of indirect lighting - as defined by NVIDIA: photons that travel from the light source, hit one object and bounce off it, then hit a second object, thus indirectly illuminating that object - to complement direct lighting has proven to be the real challenge, as it's computationally very intensive. And without indirect lighting factored into the mix, in-game scenes can look harsh or even artificial.

Here's a perfect example with a scene rendered in direct lighting only.

And this is the same scene with global illumination enabled to capture indirect light sources to more accurately represent the real world.

To overcome that limitation, you might be familiar with the term "global illumination" - a lighting system that models indirect lighting. Even so, most games employ pre-computed lighting, screen-space effects (such as reflections and ambient occlusion), virtual point lights, and other tweaks/post-processing alongside purpose-made artwork to reproduce the intended lighting effects. These pre-baked techniques are used primarily for performance reasons.

The downside of pre-computed and assisted lighting techniques is that they aren't dynamic: it's impossible to update indirect lighting characteristics when major in-game changes occur, such as when a point light source is shot out or when objects and scenes are deformed or destroyed. As such, they're suitable for static areas of a scene, but not for animated characters and objects. Given that games are increasingly designed around dynamic terrain and levels that respond to actual player intervention, real-time global illumination is needed to keep pace with the realism expected in-game as game engines, games and hardware progress year after year.

 

The solution - VXGI acceleration

NVIDIA engineers actually came up with a fast approximate method to compute global illumination dynamically back in 2011. It's still computationally intensive, however, so it takes new software algorithms and special hardware acceleration built into the second-generation Maxwell architecture to ensure that dynamic global illumination does indeed take off this time round.

VXGI is short for Voxel Global Illumination. Since indirect lighting is often unfocused, and the first few bounces of photons from the originating light source carry the most energy, VXGI was designed around these two traits to best represent indirect lighting in real time.

VXGI is executed in a three-step process implementing the Voxel Cone Tracing technique, which we'll briefly sum up here:-

Step 1: Voxelization

While a "pixel" represents 2D point in space, a "voxel" represents a small cube of 3D space. Since realism of a scene is paramount to how light reflects off objects, which is indirect lighting, it's important to capture this information in all three dimensions. Just like how "rasterization" determines the value of a scene in 2D space for every pixel, "voxelization" is the equivalent of that as it determines the value of a scene at every voxel.

Using VXGI, two pieces of information are captured at each voxel - the fraction of the voxel that contains an actual object, and the properties of light (such as direction and intensity) coming from it, including indirect light bouncing off it. The resulting voxel coverage calculation is represented in a visualization such as the following, which shows how a rasterized image appears when voxelized.

This is a summary of Voxelization.

On the left is a simple scene, while on the right is a visualization of the voxelized result. Empty voxels aren't drawn; fully covered voxels appear in red, while partially covered voxels take a shade between blue (minimal coverage, such as the edge of an intersection that doesn't fully cover a voxel) and red (full coverage).
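To make the idea concrete, here's a minimal Python sketch of voxelization - an illustrative toy, not NVIDIA's implementation - that estimates each voxel's fractional coverage by supersampling a hypothetical scene consisting of a single solid sphere. The grid size, sample count and scene are all assumptions chosen for clarity:

```python
import numpy as np

GRID = 8      # voxels per axis (hypothetical, coarse for clarity)
SAMPLES = 4   # supersample points per voxel axis (4^3 = 64 per voxel)

def inside_scene(p):
    """The 'scene': a single solid sphere centered in the unit cube."""
    return np.linalg.norm(p - 0.5) <= 0.35

def voxelize():
    """Return a GRID^3 array of fractional coverage values in [0, 1]."""
    coverage = np.zeros((GRID, GRID, GRID))
    offsets = (np.arange(SAMPLES) + 0.5) / SAMPLES  # positions within one voxel
    for ix in range(GRID):
        for iy in range(GRID):
            for iz in range(GRID):
                hits = sum(inside_scene(np.array([ix + ox, iy + oy, iz + oz]) / GRID)
                           for ox in offsets for oy in offsets for oz in offsets)
                coverage[ix, iy, iz] = hits / SAMPLES**3
    return coverage

cov = voxelize()
print("empty voxels (not drawn):", int(np.sum(cov == 0)))
print("partial voxels (blue-to-red shades):", int(np.sum((cov > 0) & (cov < 1))))
print("full voxels (red):", int(np.sum(cov == 1)))
```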

Since fractional coverage in each voxel needs to be determined with high accuracy for the voxelized 3D grid to properly represent the original 3D object, NVIDIA came up with a hardware feature called "Conservative Raster", in which a pixel is considered covered if any part of the pixel footprint is covered by the object - the object doesn't have to cover the pixel center for the coverage sample to register.
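The effect of that rule is easy to see in two dimensions. Below is a small, hypothetical Python sketch comparing the standard center-sample coverage test against an approximation of conservative coverage - supersampling the pixel footprint stands in for the hardware's analytic footprint test. A thin sliver triangle slips past most pixel centers but is still caught conservatively:

```python
# A thin sliver triangle laid across an 8x8 pixel grid (made-up geometry).
TRI = [(0.2, 0.3), (7.8, 0.6), (7.8, 0.9)]

def in_triangle(p, tri):
    """Point-in-triangle test via the signs of edge cross products."""
    def edge(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1 = edge(tri[0], tri[1], p)
    d2 = edge(tri[1], tri[2], p)
    d3 = edge(tri[2], tri[0], p)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

S = 8  # subsamples per axis, approximating the full pixel footprint
standard = conservative = 0
for px in range(8):
    for py in range(8):
        # Standard raster: covered only if the pixel *center* is inside.
        standard += in_triangle((px + 0.5, py + 0.5), TRI)
        # Conservative raster (approximated): covered if *any* part of
        # the pixel footprint touches the triangle.
        conservative += any(in_triangle((px + (i + 0.5) / S, py + (j + 0.5) / S), TRI)
                            for i in range(S) for j in range(S))

print("pixels covered, standard raster:    ", standard)
print("pixels covered, conservative raster:", conservative)
```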

 

Step 2: Light Injection

This stage calculates the amount of direct light reflected by the voxels' physical geometry, factoring in each material's opacity, emissive and reflective properties.

Different light sources striking various materials result in differing levels of reflected light.

For example, the material on the left is solid, while the one on the right is a mirror. On top of that, differing light sources also affect the amount, and even the color, of reflected light.

In this stage, the same scene needs to be analyzed from several viewpoints, such as each face of the voxel cube, and under different light sources to determine coverage and lighting levels for each voxel. This is known as multi-projection, and NVIDIA added a hardware feature called "Viewport Multicast" to reduce geometry shader overheads and speed it up.

In this example, the direct light source is indicated by the yellow dot, which casts light onto the white walls and some surfaces of the red and green boxes. Each surface then reflects a certain amount of light based on its color and material properties.
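As a rough sketch of what light injection computes per voxel, this Python snippet applies a simple Lambertian model: reflected light scales with the surface's albedo (color) and its angle to the light, and falls off with the square of the distance. The voxel records, material values and light setup are illustrative assumptions, not NVIDIA's data structures:

```python
import numpy as np

LIGHT_POS = np.array([5.0, 9.0, 5.0])  # the "yellow dot" point light
LIGHT_INTENSITY = 100.0

# Hypothetical occupied voxels: position, surface normal, albedo (RGB).
voxels = [
    {"pos": np.array([5.0, 0.0, 5.0]), "normal": np.array([0.0, 1.0, 0.0]),
     "albedo": np.array([0.9, 0.9, 0.9])},   # white floor
    {"pos": np.array([3.0, 1.0, 5.0]), "normal": np.array([1.0, 0.0, 0.0]),
     "albedo": np.array([0.8, 0.1, 0.1])},   # face of the red box
]

def inject_light(voxel):
    """Direct light reflected by one voxel (Lambertian, inverse-square falloff)."""
    to_light = LIGHT_POS - voxel["pos"]
    dist = np.linalg.norm(to_light)
    n_dot_l = max(0.0, float(np.dot(voxel["normal"], to_light / dist)))
    irradiance = LIGHT_INTENSITY * n_dot_l / dist**2
    # Colored surfaces reflect colored light - which is why a red box
    # tints the indirect light it bounces onto nearby geometry.
    return voxel["albedo"] * irradiance

for v in voxels:
    print(v["albedo"], "reflects", np.round(inject_light(v), 3))
```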

 

Step 3: Final Gather

The amount of indirect light gathered in this scene after VXGI-based computation.

The last stage rasterizes the scene using the final, more accurate voxel data structure in its lighting calculations, along with other structures such as shadow maps and more.

VXGI approaches the final calculation of indirect lighting with cone tracing - an approximation of the secondary rays used in the traditionally far more computationally intensive ray tracing method - for a realistic approximation of global illumination.

This graphical representation of cone tracing captures the essence of how it reduces the complexity of secondary rays and their related calculations compared to traditional ray tracing techniques.

Calculating real-time reflections on a glossy curved surface is most punishing with traditional ray tracing, as hundreds of thousands of scattered secondary rays need to be computed for each ray that bounces off the surface. Cone tracing replaces all of that with a handful of voxel cones traced through the voxel grid.

The same approach can be used with fewer cones for specular lighting too. The algorithm is scalable to the scene at hand, whether image quality or performance takes precedence. As such, cone tracing enables global illumination to be computed at high frame rates in real time, rendering glossy, metallic and curved surfaces and much more.

A variety of voxel cones can be used to help reproduce differing forms of diffuse and specular lighting.
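For a feel of how cone tracing cuts the work down, here's a heavily simplified, one-dimensional Python sketch built on assumed data: the voxel grid is pre-filtered into mip levels, and each cone is marched with a sample footprint that widens with distance, reading ever-coarser mips and compositing radiance front to back. A wide cone stands in for diffuse gathering and a narrow one for glossy reflection; none of this reflects NVIDIA's actual code:

```python
import numpy as np

# A 1D stand-in for the voxel grid: per-voxel radiance and opacity,
# with a bright patch of geometry placed at voxels 40-43.
BASE_RADIANCE = np.zeros(64); BASE_RADIANCE[40:44] = 5.0
BASE_OPACITY = np.zeros(64); BASE_OPACITY[40:44] = 0.8

def build_mips(base):
    """Pre-filter: each mip level averages pairs of cells from the level below."""
    mips = [base]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append(0.5 * (prev[0::2] + prev[1::2]))
    return mips

RAD_MIPS, OPA_MIPS = build_mips(BASE_RADIANCE), build_mips(BASE_OPACITY)

def trace_cone(half_angle, max_dist=64.0):
    """March one cone from the origin; a wider footprint reads a coarser mip."""
    color, occlusion, t = 0.0, 0.0, 1.0
    while t < max_dist and occlusion < 0.99:
        diameter = max(1.0, 2.0 * t * np.tan(half_angle))
        level = min(int(np.log2(diameter)), len(RAD_MIPS) - 1)
        idx = min(int(t / 2**level), len(RAD_MIPS[level]) - 1)
        alpha = OPA_MIPS[level][idx]
        # Front-to-back compositing: nearer samples shadow farther ones.
        color += (1.0 - occlusion) * alpha * RAD_MIPS[level][idx]
        occlusion += (1.0 - occlusion) * alpha
        t += diameter * 0.5  # step size grows with the cone footprint
    return color

print("wide (diffuse) cone :", round(float(trace_cone(np.radians(30))), 3))
print("narrow (glossy) cone:", round(float(trace_cone(np.radians(5))), 3))
```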

 

The result and debunking a myth

According to NVIDIA, thanks to these added hardware functions, the GeForce GTX 980 achieves a 3x speed-up in the voxelization process on a popular global illumination test scene (as opposed to running with these features disabled on the same hardware).

To put VXGI into an actual use case scenario - and as the perfect demonstration of its capability - NVIDIA engineers attempted to digitally recreate a scene from the Apollo 11 moon landing mission, answering why and how certain traits were seen in the photograph taken on the moon - the very same photograph that has been subjected to a number of conspiracy theories about how the entire moon landing might have been staged.

Based on schematics of the Moon Lander, the photograph itself, and knowledge of the materials and properties that needed to be considered to model the scene virtually as it was 45 years ago on the day of the landing - including the sun's position, the moon's atmosphere and much more - NVIDIA successfully recreated this scene faithfully using VXGI accelerated on the second-generation Maxwell architecture. The NVIDIA demo team built the scene in the Unreal Engine 4 game engine, with real-time global illumination updating as the point of view changes - you can zoom in and out and rotate around and about the Lander at will.

This is the scene that was faithfully recreated by NVIDIA's engineers with full details on all materials, attributes and lighting information.

To prove that it's not just a pretty photo, here's a look at the scene's voxel data.

Two of the most discussed and debated aspects of this scene are the bright spot of light near Buzz Aldrin and how well-lit he appears, as well as the photo not showing any stars in the sky. The demo team's recreation conclusively debunked both. It turns out Neil Armstrong's suit was reflecting quite a bit of light, which further illuminated Aldrin as he climbed down the ladder. The star-less sky, meanwhile, is easily accounted for by the camera exposure used to capture what was taking place on the brightly lit lunar surface; when the demo team digitally raised the exposure setting for the scene, they actually found the stars!
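The exposure argument boils down to simple arithmetic, as this toy Python sketch with made-up radiance values shows: an exposure chosen for the sunlit surface rounds a faint star down to black, while a much higher exposure reveals the star and saturates the surface - just like the demo's exposure adjustment:

```python
def expose(radiance, exposure):
    """Toy camera model: scale by exposure, quantize and clip to 8-bit."""
    return min(255, int(radiance * exposure * 255))

SUNLIT_SURFACE = 1.0     # hypothetical relative radiance values
FAINT_STAR = 0.0005

for exposure in (1.0, 400.0):  # set for the surface vs. set for the stars
    print(f"exposure {exposure:6.1f}: surface = {expose(SUNLIT_SURFACE, exposure):3d},"
          f" star = {expose(FAINT_STAR, exposure):3d}")
# exposure 1.0: the star quantizes to 0 (invisible) while the surface reads 255.
# exposure 400.0: the star shows up at 51 while the surface clips at 255.
```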

If you would like to hear more technical details and experience how it was demonstrated to us, check out this video clip of Tony Tamasi, NVIDIA's senior vice president of content and technology, as he explains to the tech media why VXGI and debunking the Apollo 11 landing myths are both major accomplishments:-

With that, NVIDIA has successfully explained a number of anomalies seen in the famous photograph and answered why certain reflected light sources appear the way they were captured. In short, it's a true leap in real-time global illumination, handled effortlessly on a single GPU.

More information and the impetus for this project from the demo team can be found here.

 

VXGI support and the reality

While all this certainly sounds good in theory, it will be some time before leading games start to tap into the benefits of VXGI. First and foremost, game engines have to be designed with it in mind, and no engine currently supports it out of the box. NVIDIA is, however, working closely with major game engine developers to add this support and help usher in the next stage of game realism.

Unreal Engine 4 is closest to having this support, as Epic and NVIDIA are working together to create a variant of the UE4 engine that supports VXGI. This doesn't necessarily mean all future UE4-based titles will have VXGI baked in, as it depends on which fork a game developer implements. The VXGI-enabled edition obviously seems better for all parties involved, but it will require newer gaming hardware like the GeForce GTX 980 to realize its usefulness, since the second-generation Maxwell architecture has hardware acceleration for it.
