The NVIDIA Editor’s Day took place at CES 2025 in Las Vegas. (Image: HWZ)

NVIDIA GeForce RTX 50 Series: How Blackwell's Neural Rendering and DLSS 4 are shaping next-gen gaming

At the NVIDIA Editor's Day, we got to see some really cool stuff that the new RTX 50 Series is capable of.
#nvidia #blackwell #geforcertx50series

At the NVIDIA Editor’s Day at CES 2025 in Las Vegas, which took place after CEO Jensen Huang unveiled the next-generation GeForce RTX 50 Series, tech reviewers and members of the media (including yours truly) had the chance to learn more about the Blackwell architecture, features and technology from the folks behind its development and innovation. There were other notable highlights and presentations too, including NVIDIA’s interesting AI blueprint for RTX and how Blackwell can elevate a content creator’s home studio thanks to its AI power.

These will go up in the coming days, but right now I want to share the two key features that define the Blackwell architecture of the RTX 50 Series and the ways it will change how games are rendered and experienced, not just this generation but potentially the direction of the entire industry: Neural Rendering and DLSS 4. NVIDIA’s track record of innovation has brought us technologies like programmable shaders and ray tracing. Still, these new features suggest that we’re now entering an era where machine learning does more than optimise – it actively redefines the graphics pipeline.

Blackwell is very much about power and AI. (Image: NVIDIA)

Neural rendering sits at the core of the green company’s Blackwell vision. For years, graphics rendering followed a predictable path: create detailed assets, optimise them for performance, and render them as faithfully as possible under the hardware constraints of the time. Neural rendering fundamentally rewrites that equation. Rather than relying solely on traditional shaders or fixed processes to render scenes, neural rendering introduces machine learning into the pipeline, enabling dynamic, AI-driven decision-making during rendering.

At the heart of this approach is NVIDIA’s new API called Cooperative Vectors, developed in collaboration with Microsoft. For the first time, developers can directly access Tensor Cores through a compute shader within a graphics API. This isn’t a small feat. Until now, leveraging Tensor Cores was primarily the domain of CUDA workloads, which limited how AI could be applied to graphics. By opening Tensor Core access to the rendering pipeline, NVIDIA is effectively handing developers the keys to a new level of control and creativity.
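To make the idea concrete, here is a toy NumPy sketch of what a neural shader conceptually does: a small trained network evaluated once per pixel. The real thing runs inside a compute shader, with Cooperative Vectors dispatching the matrix maths to Tensor Cores; the network size, inputs, and random weights below are illustrative assumptions, not NVIDIA's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for trained weights: a tiny 2-layer MLP. In a real neural
# shader the weights come from training, and these matrix-vector
# products are what Cooperative Vectors offloads to Tensor Cores.
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def neural_shade(u, v, n_dot_l):
    """Map per-pixel inputs (uv coords, a lighting term) to RGB."""
    x = np.array([u, v, n_dot_l, 1.0])
    h = np.maximum(W1 @ x + b1, 0.0)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid -> RGB in (0, 1)

# "Render" a 4x4 tile by evaluating the network at every pixel.
tile = np.array([[neural_shade(u / 3, v / 3, 0.7)
                  for u in range(4)] for v in range(4)])
print(tile.shape)  # (4, 4, 3): one RGB triple per pixel
```

The point is that shading becomes inference: instead of hand-written lighting formulas, the per-pixel work is a handful of small matrix multiplies, which is exactly the workload Tensor Cores accelerate.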

RTX Neural Shaders is a boon for developers. (Image: NVIDIA)

Lower memory requirements are always a good thing. (Image: NVIDIA)

The potential applications of neural rendering are vast, and NVIDIA has already demonstrated several key use cases. Neural texture compression, for instance, highlights the efficiencies this technology brings to game development. Texture compression has long been a bottleneck, forcing developers to balance fidelity with storage and memory constraints. Neural texture compression offers up to a 7:1 compression ratio over traditional block-compressed formats, significantly reducing memory requirements without sacrificing visual quality. In one demonstration in the room, a scene with standard materials required 47MB of memory, while its neurally compressed counterpart reduced this to just 16MB.
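The arithmetic behind those figures is straightforward. A quick sketch (the 47MB/16MB numbers are from NVIDIA's demo; the per-texture example is an illustrative assumption):

```python
# Figures quoted in NVIDIA's demo scene.
standard_mb = 47   # conventionally compressed materials
neural_mb = 16     # same scene with neural texture compression

scene_ratio = standard_mb / neural_mb
print(f"Scene-level saving: {scene_ratio:.2f}:1")  # 2.94:1

# The up-to-7:1 figure is relative to block-compressed formats.
# Illustrative example (not from the presentation): an 84MB
# block-compressed material set would shrink to roughly 12MB.
block_mb = 84
neural_best_mb = block_mb / 7
print(f"{block_mb}MB -> {neural_best_mb:.0f}MB at 7:1")
```

So the demo scene saw roughly a 3:1 saving in practice, with 7:1 as the best case NVIDIA quotes against block compression.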

Neural materials further illustrate the power of this technology. Traditionally, material shaders can range from a few dozen lines of code for simple real-time assets to hundreds of thousands for film-grade materials. Neural materials condense this complexity into a neural space, representing textures and shaders as latent features. This approach not only reduces the computational overhead but also enables more intricate and lifelike rendering, even for challenging materials of the kind seen in big-budget CGI films. For instance, silver might carry layers of dust and fingerprints, while silk shifts between colours depending on the angle you view it from; reproducing effects like these in real time is very challenging. NVIDIA's demonstration of neural materials showcased how they can bring out subtle details and vibrant, angle-dependent colour shifts that would typically be absent in real-time rendering.

Another significant application of neural rendering is in lighting, where the Neural Radiance Cache comes into play. Unlike traditional methods that rely heavily on precomputed lighting or costly real-time calculations, this technology trains a model in real time using the gamer’s GPU. The cache stores data about how light interacts with the scene, enabling virtually unlimited bounces and significantly improving global illumination quality. It’s not just about brighter lights or deeper shadows – the subtleties of light transport are captured in a way that adds depth and realism to every frame.
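The caching principle can be sketched with a toy hash-grid cache that blends each newly traced radiance sample into a running per-cell estimate. To be clear, NVIDIA's actual Neural Radiance Cache trains a small neural network online rather than storing averages in grid cells; this stand-in only illustrates the query-and-update flow.

```python
from collections import defaultdict

class RadianceCache:
    """Toy radiance cache: a per-cell exponential moving average of
    traced radiance. NVIDIA's Neural Radiance Cache instead trains a
    small neural network on the fly, but the flow is similar: traced
    paths update the cache, and later shading queries read from it."""

    def __init__(self, cell_size=1.0, alpha=0.1):
        self.cell = cell_size
        self.alpha = alpha              # blend weight for new samples
        self.store = defaultdict(float)

    def _key(self, pos):
        return tuple(int(c // self.cell) for c in pos)

    def update(self, pos, radiance):
        k = self._key(pos)
        # Blend the new path-traced sample into the running estimate.
        self.store[k] = (1 - self.alpha) * self.store[k] + self.alpha * radiance

    def query(self, pos):
        return self.store[self._key(pos)]

cache = RadianceCache()
for _ in range(100):                    # repeated samples converge
    cache.update((0.2, 0.5, 0.9), 1.0)
print(round(cache.query((0.3, 0.4, 0.8)), 3))  # same cell, prints 1.0
```

Because queries are cheap once the cache has converged, a path can terminate early into the cache instead of tracing every bounce, which is what makes "virtually unlimited bounces" affordable.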

Games like Half-Life 2 get a new lease of life with RTX Remix. (Image: NVIDIA)

Hair rendering remains an obsession. (Image: NVIDIA)

The advances extend beyond static materials and lighting. Rendering skin, for instance, is a big challenge. Typical representations in games treat objects as impermeable to light, which works for materials like wood or metal. However, with translucent materials, light penetrates the surface, scatters underneath, and exits elsewhere. Borrowing technology from film rendering, such as Disney's subsurface scattering algorithm, NVIDIA has brought this into real time with RTX Skin. In a Half-Life 2 RTX demo on stage, where an NVIDIA rep showed the iconic headcrab enemy with RTX Skin turned on, the white highlights appearing around thinner areas like the legs were immediately noticeable compared with RTX Skin turned off. It's particularly impressive in motion.

Another tough nut NVIDIA has cracked is hair rendering. Hair has long been the bane of real-time graphics, requiring either impractical amounts of geometry or visual shortcuts that break immersion. Blackwell's Linear Swept Sphere primitive offers a concise representation of hair strands, reducing storage requirements without compromising detail. This approach makes it easier to render realistic hair in games, where polygon counts have exploded from thousands to billions over the past two decades. NVIDIA's RTX Mega Geometry furthers this by allowing developers to ray trace highly detailed meshes directly, without resorting to simplified proxy meshes, offering uncompromising detail even in complex ray-traced scenes.
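A linear swept sphere is essentially a sphere swept along a line segment (a capsule, in the constant-radius case), so one strand segment needs only two endpoints and radii instead of a tube of triangles. A minimal point-versus-capsule distance test, assuming a constant radius for simplicity (per-endpoint radii, as in the actual primitive, would interpolate along the segment):

```python
import math

def point_capsule_distance(p, a, b, radius):
    """Distance from point p to the surface of a capsule: a sphere of
    `radius` swept along segment a-b. Negative means p is inside."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    ab2 = sum(c * c for c in ab)
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / ab2))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest) - radius

# A "hair strand" segment from (0,0,0) to (0,1,0) with radius 0.01:
d = point_capsule_distance((0.5, 0.5, 0.0), (0, 0, 0), (0, 1, 0), 0.01)
print(d)  # about 0.49: the point sits ~0.49 units outside the strand
```

The same closest-point-on-segment maths underpins the ray intersection test the hardware accelerates, which is why the primitive is so much cheaper than triangulated hair.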

DLSS 4 is based on a more efficient transformer-based neural network architecture. (Image: NVIDIA)

While neural rendering offers a glimpse into the future of dynamic, AI-driven rendering, DLSS 4 demonstrates what's possible when machine learning directly enhances the player's experience. Since its introduction in 2018 and despite early growing pains, DLSS has evolved into an incredible tool for both developers and gamers. DLSS 4, however, represents a complete overhaul, with NVIDIA adopting a transformer-based neural network architecture. Unlike the convolutional neural networks used in earlier versions of DLSS, transformers dynamically focus computational resources on challenging areas of an image. This scalability allows DLSS 4 models to use four times the compute of previous versions, resulting in sharper, more stable visuals.

DLSS 4’s benefits are immediately apparent in ray reconstruction. Traditional ray tracing, while groundbreaking, has limitations, particularly in complex lighting scenarios. DLSS 4’s transformer model excels at resolving these challenges. In one demo, a chain-link fence occluding a house appeared distorted in earlier versions, but the transformer model resolves it clearly. Another example showed wires in the distance that flickered with older DLSS models but remained stable with DLSS 4. The same holds true for fast-moving objects, like a spinning fan, where ghosting is eliminated.

DLSS has never been this smart before. (Image: NVIDIA)

We can't wait to show you our benchmark results next week when the review embargo is lifted. (Image: NVIDIA)

Super resolution is another area where DLSS 4 shines. By upscaling images with greater fidelity than ever before, it allows games to achieve a level of detail previously reserved for native rendering – without the associated performance hit. This capability is complemented by the introduction of multi-frame generation, which pushes efficiency to new heights. DLSS 4 can now generate three AI-created frames for every traditionally rendered frame, meaning that, combined with super resolution, 15 out of every 16 displayed pixels are generated by DLSS. The result is a rendering process that's not only faster but also less resource-intensive, freeing up headroom for additional graphical effects or higher frame rates.
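The 15-out-of-16 figure comes from two factors multiplying together. Assuming super resolution renders a quarter of the output pixels (e.g. 1080p upscaled to 4K) and multi-frame generation displays one traditionally rendered frame out of every four:

```python
# Pixel accounting behind NVIDIA's "15 out of 16 pixels" figure.
upscale_pixel_fraction = 1 / 4   # assumed: 1080p rendered, 4K displayed
rendered_frame_fraction = 1 / 4  # 1 rendered frame, 3 AI-generated frames

rendered_pixels = upscale_pixel_fraction * rendered_frame_fraction
ai_pixels = 1 - rendered_pixels
print(f"Traditionally rendered: 1 in {round(1 / rendered_pixels)} pixels")
print(f"AI-generated fraction: {ai_pixels}")  # 0.9375, i.e. 15/16
```

In other words, the GPU traditionally shades only one sixteenth of what ends up on screen, and the rest of the work happens on the Tensor Cores.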

The impact of DLSS 4’s efficiency gains was perhaps most striking in NVIDIA’s Cyberpunk 2077 demo on the stage. With DLSS disabled, the game ran at 27fps in 4K. Enabling DLSS 4 not only boosted performance to 250fps but also enhanced image quality. Reflections, textures, and lighting all appeared sharper and more detailed. Gamers no longer need to choose between fidelity and performance anymore.

The technology also integrates seamlessly with NVIDIA's Reflex, which gets an upgrade with Reflex 2. The new version introduces Frame Warp, a technique that samples the latest input just before the finished frame is displayed and re-projects the image to match, delivering near-instant responsiveness. While challenges like disocclusion (areas that weren't previously visible to the camera) remain, NVIDIA's in-painting algorithms help ensure that gaps are filled convincingly. Reflex 2's benefits are particularly noticeable in competitive shooters like Valorant and The Finals, where even a few milliseconds of latency can make the difference between victory and defeat for professional gamers.
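A rough latency model shows why warping with late-sampled input helps. All of the timings below are illustrative assumptions, not NVIDIA's measurements:

```python
# Toy latency model (milliseconds; all figures are assumptions).
frame_time = 16.7    # one frame at ~60fps
render_time = 12.0   # GPU time spent rendering that frame

# Conventional pipeline: input sampled at the start of the frame waits
# through the whole frame before it appears on screen.
latency_conventional = frame_time

# Frame-warp style: the nearly finished frame is re-projected using
# input sampled just before display, hiding most of the render time.
warp_cost = 1.0      # assumed cost of the warp and in-painting pass
latency_warped = frame_time - render_time + warp_cost

print(f"{latency_conventional:.1f}ms vs {latency_warped:.1f}ms")
```

Even in this crude model the input-to-photon delay drops by more than half, which is the kind of margin that matters in competitive shooters.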

NVIDIA App is suddenly a killer tool for GeForce RTX card owners. (Image: NVIDIA)

What makes DLSS 4 especially compelling is its accessibility. At launch, 75 games and applications will support the technology, with more surely to follow. But NVIDIA saved its ultimate surprise for last, revealing that it will soon add a new DLSS Override feature to the NVIDIA App. This allows users to force the newest DLSS model in existing games that don't natively include it, as long as the game already supports DLSS in the first place.

Both neural rendering and DLSS 4 are set to redefine real-time graphics, enabling developers to create richer, more immersive worlds and allowing gamers to experience fantastic performance without compromising on visual quality. Neural rendering introduces a level of dynamism and efficiency that was previously unimaginable, while DLSS 4 finally demonstrates the tangible benefits of AI in delivering both performance and fidelity. Together, they position the RTX 50 Series as a true generational leap forward rather than an incremental upgrade over its predecessors. It's going to be fascinating to see how developers and gamers harness these technologies to create experiences we've yet to imagine.
