What you need to know about ray tracing and NVIDIA's Turing architecture

By Koh Wanzi - 14 Sep 2018

What is ray tracing?

NVIDIA GeForce RTX 2080. (Image Source: NVIDIA)

NVIDIA is pitching its new Turing cards as a paradigm shift. They feature dedicated RT cores for real-time ray tracing, a first for consumer graphics, and the company is claiming massive performance and visual improvements if developers code their games for the new hardware.

But what is ray tracing really? It sounds like some esoteric graphics rendering technique, but you’ve probably already enjoyed the results of it on the big screen. Modern movies use ray tracing to generate or bolster special effects, which is how you get photorealistic shadows, reflections, and refractions. It’s how filmmakers create blazing conflagrations and blend them with the real world.

Ray tracing isn't new, but doing it real-time on a consumer GPU is. (Image Source: NVIDIA)

In a nutshell, ray tracing involves following the path of light beams backward from your eye to the objects that the light ray interacts with. However, this requires immense computational power, which is why it’s been restricted mostly to the post-production stages of a movie, where filmmakers can take their time to render scenes and take advantage of render farms.

Before Turing, real-time ray tracing was a pipe dream for games. But if NVIDIA has its way, Turing could pave the way for a new generation of games that can generate photorealistic scenes in real-time.


How do games today do it?

Video games only have a fraction of a second to render a scene, so games today generally rely on something called rasterization instead. This is a method of translating 3D objects onto a 2D screen, and technological advancements mean that it’s actually pretty good.

In rasterization, 3D models of objects are composed of a mesh of virtual triangles or polygons. Each corner of a triangle is referred to as a vertex, which is shared with the vertices of other triangles of different sizes and shapes.

Each vertex retains a wealth of information, including details on color, texture, and its position in space. The triangles that form these 3D models are then converted into pixels on a 2D screen, with each pixel taking on an initial color value from the data in the corresponding vertex.

Next, further processing or “shading” is performed to change the pixel color depending on how lights in the scene hit it. Textures are also applied to the pixel, which factors into the final color applied.
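The vertex-to-pixel step above can be sketched in code. The following is a minimal, illustrative rasterizer (not how a GPU actually implements it in hardware) that covers the pixels inside one screen-space triangle and blends the three vertex colors across them using barycentric weights:

```python
# Illustrative sketch of rasterization: color each pixel covered by a
# triangle by interpolating the triangle's vertex colors.

def edge(ax, ay, bx, by, px, py):
    # Signed-area test: which side of edge (a -> b) the point p lies on.
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize_triangle(verts, colors, width, height):
    """verts: three (x, y) screen-space vertices; colors: three (r, g, b)."""
    (ax, ay), (bx, by), (cx, cy) = verts
    area = edge(ax, ay, bx, by, cx, cy)
    pixels = {}
    for y in range(height):
        for x in range(width):
            # Barycentric weights of pixel (x, y) w.r.t. the triangle.
            w0 = edge(bx, by, cx, cy, x, y) / area
            w1 = edge(cx, cy, ax, ay, x, y) / area
            w2 = edge(ax, ay, bx, by, x, y) / area
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel is inside
                pixels[(x, y)] = tuple(
                    w0 * colors[0][i] + w1 * colors[1][i] + w2 * colors[2][i]
                    for i in range(3)
                )
    return pixels

# A triangle with red, green, and blue corners on a tiny 10 x 10 "screen".
frame = rasterize_triangle([(1, 1), (8, 1), (1, 8)],
                           [(255, 0, 0), (0, 255, 0), (0, 0, 255)], 10, 10)
```

A real pipeline adds depth testing, texture lookups, and per-pixel shading on top of this interpolation step, but the core idea of converting triangles into colored pixels is the same.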

Ray tracing is demanding, but rasterization is no walk in the park either. Millions of polygons could be used for all the object models in just one scene, and each frame could be refreshed up to 90 times per second, which is a lot of pixels to process in a very short time.
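To put "a lot of pixels" into perspective, here is a back-of-envelope calculation (the resolution is an illustrative assumption, not a figure from NVIDIA) for a 1440p display refreshed 90 times per second:

```python
# Illustrative throughput math: pixels a GPU must shade per second at
# an assumed 2560 x 1440 resolution and a 90 Hz refresh rate.
width, height, fps = 2560, 1440, 90
pixels_per_frame = width * height            # 3,686,400 pixels per frame
pixels_per_second = pixels_per_frame * fps   # well over 300 million per second
print(f"{pixels_per_second:,} pixels shaded per second")
```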


Why is ray tracing different?

Ray tracing hews more closely to how light works in the real world. Light rays hit 3D objects and bounce from one object to another before reaching our eyes. Light may also be blocked by objects, creating shadows. Then there is reflection and refraction, both of which can be challenging to simulate in computer graphics.

Ray tracing works backward from your eye, retracing the path of light from your eye to the object. In effect, it traces a light ray’s path through each pixel on a 2D screen back into a 3D model of the scene.
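The per-pixel process described above can be sketched as follows. This is a toy example, not NVIDIA's implementation: for each pixel, a ray is fired from the eye through a virtual screen into a scene containing a single sphere, and the pixel is lit if the ray hits it.

```python
import math

# Minimal backward ray tracing sketch: one ray per pixel, one sphere.

def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def render(width, height):
    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on a virtual screen at z = -1,
            # then trace a ray from the eye through that point.
            u = (x + 0.5) / width * 2 - 1
            v = 1 - (y + 0.5) / height * 2
            direction = (u, v, -1.0)
            t = hit_sphere(eye, direction, sphere_center, sphere_radius)
            # 1.0 (white) where the ray hits the sphere, 0.0 (black) otherwise.
            row.append(1.0 if t is not None else 0.0)
        image.append(row)
    return image

image = render(9, 9)
```

A production ray tracer would go on to shade each hit point, cast secondary rays, and handle many objects, but the eye-to-scene direction of travel is exactly what this sketch shows.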

A look at ray tracing versus rasterization. (Image Source: NVIDIA)

It can also capture reflection, shadows, and refraction by sampling color and lighting information at the intersection point between a light ray and an object. This data helps determine the pixel's color and level of illumination. If a ray bounces off or passes through the surfaces of different objects before reaching the light source, color and lighting information from all those objects contributes to the final pixel color as well.
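The bounce contribution described above can be illustrated with two small helper functions (a simplified sketch, not the full recursive algorithm): one mirrors a ray's direction about a surface normal, and one blends the surface's own color with whatever the bounced ray saw, weighted by an assumed reflectivity value.

```python
# Illustrative sketch of how a bounced ray feeds back into the pixel color.

def reflect(direction, normal):
    # r = d - 2 (d . n) n : mirror the incoming direction about the normal.
    d = sum(direction[i] * normal[i] for i in range(3))
    return tuple(direction[i] - 2 * d * normal[i] for i in range(3))

def shade(surface_color, reflected_color, reflectivity):
    # Blend the surface's own color with the color the bounced ray returned.
    return tuple((1 - reflectivity) * surface_color[i]
                 + reflectivity * reflected_color[i] for i in range(3))

# A ray traveling straight down hits a red floor (normal pointing up)
# that is 30% reflective, and the bounced ray sees a blue sky.
bounced = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))  # points straight up
color = shade((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.3)  # mostly red, a bit blue
```

In a full tracer this blending happens recursively at every bounce, which is where the computational cost comes from.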

These techniques, along with further refinements over the years, are why ray tracing is the technique of choice for movie-making today. It effectively captures the way light works, which is why you get scenes that are truer to life, replete with details like softer shadows and light that interacts more realistically with the environment.


What does this mean for games?

NVIDIA has so far announced 21 games that take advantage of some portion of the new Turing architecture, but only 11 actually utilize the RT cores for real-time ray tracing. Metro Exodus provides an illuminating example: with ray tracing enabled, the rooms are far darker than with it off. In practice, this could give developers more freedom to tweak the atmospheric lighting to create their desired mood, in addition to concealing enemies in the shadows.

Similarly, in Shadow of the Tomb Raider, the shadows blend together far more realistically, and you can distinguish between different gradations of shadow, where the words penumbra and umbra take on real meaning.

Put simply, your games stand to look far better with NVIDIA’s RTX technology. Unfortunately, the Turing cards are pricey, and it’ll be some years before a significant portion of consumers even own a card capable of taking advantage of ray tracing, which is exactly what needs to happen before more developers get on board.
