Preview: NVIDIA GeForce GTX Titan X performance

The NVIDIA GeForce GTX Titan X is a card to be reckoned with: it features the latest Maxwell GPU with 3,072 CUDA cores and 12GB of GDDR5 video memory. It promises to make short work of the most demanding PC game titles and let gamers immerse themselves in the visual clarity of 4K gaming. Let's take a quick look at what it's capable of!

NVIDIA GeForce GTX Titan X (Image Source: NVIDIA)

The GTX Titan Z was a monster card. Unveiled at NVIDIA's GPU Technology Conference (GTC) last year, it featured a pair of Kepler GPUs and a mind-boggling 12GB of video memory. Now, one year on, a new single-GPU Titan has stomped its way onto the scene. The GeForce GTX Titan X ships with a full implementation of NVIDIA's Maxwell-based GM200 GPU and, like the Titan Z, comes with a whopping 12GB of GDDR5 RAM that should let it run the latest DirectX 12 games at 4K resolutions without running out of video memory. In terms of video connectivity, the card has three DisplayPort ports, an HDMI 2.0 port and a dual-link DVI-I port.

The card has three DisplayPort ports, a HDMI 2.0 port and a dual-link DVI-I port.

The GM200 memory subsystem has a 384-bit memory interface and a 7GHz memory clock, which puts its peak memory bandwidth at 336.5GB/s, 50% higher than the GeForce GTX 980's. Pixel, vertex and geometry shading duties are handled by its 3,072 CUDA cores, while texture filtering is performed by 192 texture units. Its base clock of 1000MHz also works out to a texture filtering rate of 192 Gigatexels/s, besting the GeForce GTX 980 by a good 33%. The GeForce Titan X also packs an impressive 8 billion transistors, yet retains the same 250-watt thermal design power (TDP) as the GeForce GTX Titan Black and the original GeForce GTX Titan from 2013. Holding the line at the first GeForce GTX Titan's TDP is an impressive feat that PC system builders concerned about power consumption and heat output will definitely welcome.
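The headline figures above follow directly from the bus width and clocks. A quick sanity check (note that NVIDIA's quoted 336.5GB/s implies an effective data rate marginally above a flat 7.0GHz, which yields 336GB/s):

```python
# Sanity-check the GM200 spec figures quoted above.
bus_width_bits = 384      # memory interface width
mem_clock_ghz = 7.0       # effective GDDR5 data rate
texture_units = 192
base_clock_mhz = 1000

# Peak memory bandwidth = bus width (in bytes) x effective data rate
bandwidth_gbs = (bus_width_bits / 8) * mem_clock_ghz
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # ~336 GB/s

# Peak texture fill rate = texture units x core clock
texel_rate_gts = texture_units * base_clock_mhz / 1000
print(f"Texture fill rate: {texel_rate_gts:.0f} GT/s")     # 192 GT/s
```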

The GTX Titan X ships with a full implementation of GM200, while its display and video engines are unchanged from the GTX 980's GM204 GPU. (Image Source: NVIDIA)

NVIDIA also designed the GeForce Titan X with overclocking in mind: a 6-phase power supply provides the overvolting headroom needed to hit high overclocks, while an additional 2-phase supply feeds the board's GDDR5 memory. This 6+2 phase design should meet the card's power demands even when overclocked. NVIDIA has also improved the board's cooling design by moving the voltage regulator modules (VRMs) closer to the card's cooling elements and increasing airflow to the board's various components, which should lower temperatures and help with overclocking. In addition, NVIDIA has used polarized capacitors (POSCAPs) and molded inductors to minimize coil whine and unwanted board noise.

Such a heavy-hitting card must be kept cool, and the GeForce Titan X depends on a copper vapor chamber to cool its GM200 GPU. This vapor chamber is combined with a large, dual-slot aluminium heatsink to help dissipate heat from the chip. A blower-style fan then exhausts hot air through the back of the graphics card and out of the system. NVIDIA's latest cooling solutions have been very quiet and the cooler on the new GeForce Titan X was no different. Even at heavy loads, the card was whisper quiet.

The GeForce Titan X also supports the new graphics technologies that debuted with the GeForce GTX 980, and it's worth going over two of the key ones here. Voxel global illumination (VXGI) allows game developers to render dynamic global illumination in real time on the GPU with high performance, creating more realistic lighting scenarios and scenes. VXGI has even been integrated into Epic Games' Unreal Engine 4 and is now in the hands of developers, enabling them to more easily use the technology in upcoming games that the GeForce Titan X is well positioned to take advantage of.

The 12GB of video memory also appears justified when you consider NVIDIA's implementation of VR Direct in the GeForce Titan X, another key new technology that NVIDIA is keen to show off. VR content can quickly gobble up huge amounts of video memory, and the GeForce Titan X's sizable 12GB frame buffer should ensure it has memory to spare. The company has bet on VR being the next leap forward in gaming, and VR Direct's asynchronous time warp feature looks to reduce latency from head rotations, a crucial aspect of a realistic and comfortable VR experience. Another feature, VR SLI, lets users combine two GeForce cards and assigns a dedicated GPU to render each eye's display, a technique NVIDIA says will also reduce latency and increase performance.

Here is a look at how the GeForce Titan X stacks up against other top-end GPUs on the market:

[hwzcompare]

[products=498700,474225,373666,461592]

[width=175]

[caption=NVIDIA GeForce GTX Titan X and competitive SKUs compared]

[showprices=0][/hwzcompare]

It looks like NVIDIA has produced yet another card to take the single-GPU performance crown. Billing it as the “most advanced GPU the world has ever seen”, NVIDIA has clearly placed every confidence in the reference design of its new GPU, and we put the card through its paces in our lab to find out exactly how it stacks up against existing high-performance cards.

Test Setup

These are the specifications of our graphics testbed:

  • Intel Core i7-3960X (3.3GHz)
  • ASUS P9X79 Pro (Intel X79 chipset) Motherboard
  • 4 x 2GB DDR3-1600 G.Skill Ripjaws Memory
  • Seagate 7200.10 200GB SATA hard drive (OS)
  • Western Digital Caviar Black 7200 RPM 1TB SATA hard drive (Benchmarks + Games)
  • Windows 7 Ultimate SP1 64-bit

We were supplied with NVIDIA GeForce drivers version 347.84, and we suspect future updates will arrive soon to further fine-tune the new GPU's performance. We have added Middle Earth: Shadow of Mordor as our new gaming benchmark to stress the new GPU's graphics muscle; for the game's ultra HD textures to be rendered properly, the recommended graphics card needs at least 6GB of video memory. The game's resolution settings also allow us to "scale up" despite the lack of a 4K ultra HD monitor. Our Dell UltraSharp U3011 monitor has a maximum resolution of 2560 x 1600 pixels; however, we were able to set render resolutions of 3840 x 2400 pixels (150%) and 5120 x 3200 pixels (200%) during our benchmark tests.
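To make the scaling percentages concrete, here is how the game's 150% and 200% settings map onto our 2560 x 1600 panel (the game renders internally at the scaled resolution, then downsamples to the monitor's native resolution):

```python
# Render resolutions produced by the game's resolution scaling
# on a 2560 x 1600 native panel.
native_w, native_h = 2560, 1600

for scale in (1.0, 1.5, 2.0):
    w, h = int(native_w * scale), int(native_h * scale)
    print(f"{int(scale * 100)}%: {w} x {h} ({w * h / 1e6:.1f} MP)")
# 100%: 2560 x 1600 (4.1 MP)
# 150%: 3840 x 2400 (9.2 MP)
# 200%: 5120 x 3200 (16.4 MP)
```

Note that the 200% setting pushes the GPU to render four times as many pixels as the native resolution.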

We rounded up the most powerful graphics cards on the market; the lone AMD representative is the Radeon R9 295X2, which sports a pair of Vesuvius GPUs and 8GB of GDDR5 video memory. The other dual-GPU entry is our "simulated" NVIDIA GeForce GTX Titan Z: a pair of GeForce GTX 780 Ti cards in SLI configuration. However, we were only able to lower each card's base clock to 915MHz, a figure about 30% higher than the official base clock of the actual GeForce GTX Titan Z, so our "simulated" Titan Z is best seen as an overclocked version. Another discrepancy is the amount of VRAM, as our pair of SLI-ed 780 Ti cards has 6GB of video memory collectively.

The full lineup of graphics cards is listed below:

  • NVIDIA GeForce GTX Titan X 12GB GDDR5 (ForceWare 347.84)
  • Gigabyte GeForce GTX 980 G1 Gaming 4GB GDDR5 (ForceWare 347.84)
  • NVIDIA GeForce GTX Titan Z (ForceWare 347.84)
  • ASUS Radeon R9 295X2 8GB GDDR5 (AMD Catalyst 14.4)

Benchmarks

Again, here's the list of benchmarks we used for our assessment:

  • Futuremark 3DMark 2013
  • Middle Earth: Shadow of Mordor

3DMark 2013 was also used for our temperature and power consumption tests.

3DMark 2013 Results

In this synthetic benchmark, the GeForce Titan X's performance was somewhat lackluster: it beat the Gigabyte GTX 980 card by only about 20%, and trailed the dual-GPU cards by margins ranging from 12% to as high as 22%. Its 12GB of video memory didn't narrow its losses, even in the Fire Strike Ultra test. This could be a case of non-optimized video drivers affecting performance, much like our initial experience with the GeForce GTX 980.

Middle Earth: Shadow of Mordor Results

In this gaming benchmark, the NVIDIA cards ruled the roost at 2560 x 1600 pixels. However, at the upscaled resolutions, it became clear that the game hasn't been optimized for SLI configurations, as the average frame rates of our simulated GeForce GTX Titan Z plummeted to below 20fps! As a single-GPU card, though, the GeForce Titan X can take credit for handling the game at Ultra settings at 3840 x 2400 pixels, which is over 920,000 pixels, or about 11%, more than the consumer 4K Ultra HD resolution of 3840 x 2160 pixels. Hence, expect the GeForce Titan X to run Shadow of Mordor smoothly at 4K. Also, since this is an NVIDIA-backed PC game title, the AMD Radeon R9 295X2's performance was rather disappointing.
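The pixel-count comparison above works out as follows:

```python
# Our 3840 x 2400 test resolution versus consumer 4K Ultra HD (3840 x 2160).
test_pixels = 3840 * 2400   # 9,216,000 pixels
uhd_pixels = 3840 * 2160    # 8,294,400 pixels

extra = test_pixels - uhd_pixels
print(f"Extra pixels: {extra:,}")                     # 921,600
print(f"Relative increase: {extra / uhd_pixels:.1%}")  # 11.1%
```

In other words, our test resolution renders roughly one 1280 x 720 frame's worth of extra pixels on top of 4K UHD every frame.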

Temperature and Power Consumption Results

The enclosed cooling system of the GeForce Titan X is very similar to that of the reference GeForce GTX 980. With its blower-style fan and large aluminum heatsink, the card ran at a high of 83 degrees Celsius, which was within our expectations. It's bad news for gamers expecting GeForce Titan X cards with custom cooling solutions, as NVIDIA will not allow its add-in card partners to offer them. In terms of power consumption, the GeForce Titan X leveraged the advantages of its new Maxwell GPU architecture to keep power draw at a relatively low 422W.


Conclusion

The NVIDIA GeForce GTX Titan X is the most powerful single-GPU graphics card on the market, with plenty of performance potential still waiting to be unlocked. However, its full capabilities could be crimped by NVIDIA's strict policy on the modifications its add-in card partners are allowed to make.

For now, the GeForce GTX Titan X's gaming performance isn't groundbreaking; still, as a single-GPU card, it handled the heavy compute requirements of Shadow of Mordor at a resolution slightly higher than mainstream 4K ultra HD, and it dutifully shoulders its role as the ultra-enthusiast SKU of NVIDIA's Maxwell GPU architecture. Instead of just counting pixels and rendered detail on external displays, perhaps we should turn our attention to virtual reality (VR) for truly immersive gaming. Given the attention VR has garnered in recent months, it is important for industry leaders like NVIDIA and AMD to drive innovation in VR gaming. According to NVIDIA, the Maxwell GPU is ready for this highly specialized gaming market thanks to its ability to reduce latency in virtual reality environments. The support of GPU manufacturers is indispensable to the advancement of VR technology: the industry's leading VR developer, Facebook's Oculus, requires its Rift head-mounted display (HMD) to be hooked up to a PC with quite a capable graphics card - 75fps at 1080p for current-generation 3D games, to be exact.

NVIDIA's singling out of VR as one of the Titan X's strengths will hopefully be just the beginning of a push of VR-capable hardware into the home that can drive smooth and immersive VR experiences for consumers. And as the VR experience continues to demand better hardware to keep up with higher resolutions and frame rates, GPU manufacturers will want to be there to provide realistic and responsive simulations to users.

The GeForce GTX Titan X will definitely not come cheap, but it could help spur on development in the area of hardware to power VR, which is good for consumers as increased competition could help drive prices down. In this respect, the GeForce GTX Titan X may not be just another single-GPU performance champion, but instead the card to offer us a peek into the beginnings of a future where VR enters the mainstream. Since the TDP of the GeForce GTX Titan X is a manageable 250W, fitting a pair of GM200 GPUs onto a single customized PCB shouldn't be such a technical challenge. Hence, we do foresee a dual GM200 GPU-based graphics card in the pipeline. At the moment, the price of the GeForce GTX Titan X is still kept under wraps but we will update this piece of information as soon as it has been revealed.

*Updated at 9.30am*

At the end of NVIDIA's first-day keynote at the GPU Technology Conference 2015, the company announced that the new GeForce GTX Titan X will carry a suggested retail price of US$999. This is in line with our expectations, as previous Titan series cards were also pegged at this price point. While it may not be economical from a performance-per-dollar perspective, this is the fastest single GPU on the market and, as NVIDIA's premium offering, it is squarely aimed at enthusiasts who aren't daunted by the price and want to build a rig with the newest and very best in graphics hardware.
