Preview: NVIDIA GeForce GTX Titan X performance

By Wong Chung Wee & Koh Wanzi - 18 Mar 2015

NVIDIA GeForce GTX Titan X (Image Source: NVIDIA)

The GTX Titan Z was a monster card. Unveiled at NVIDIA's GPU Technology Conference (GTC) last year, it paired two Kepler GPUs with a mind-boggling 12GB of video memory. Now, one year on, a new single-GPU Titan has stomped its way onto the scene. The GeForce GTX Titan X ships with a full implementation of NVIDIA's GM200 GPU based on the Maxwell architecture, and like the Titan Z, it comes with a whopping 12GB of GDDR5 RAM that should let it run the latest DirectX 12 games at 4K resolution without running out of video memory. In terms of video connectivity, the card has three DisplayPort outputs, an HDMI 2.0 port, and a dual-link DVI-I port.
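To put that 12GB figure in perspective, here's a rough back-of-envelope sketch (our own arithmetic, not an NVIDIA figure) of what the 4K render targets themselves cost. The bit depths and buffer counts below are illustrative assumptions; in practice, textures and geometry, not the framebuffer, are what eat up VRAM at 4K.

```python
# Rough framebuffer arithmetic for a 4K (3840 x 2160) render target.
# Assumed bit depths and buffer counts are illustrative, not NVIDIA's figures.

WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 4 + 4  # 32-bit color + 32-bit depth/stencil per pixel

single_target_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1024**2
print(f"One 4K color+depth target: ~{single_target_mb:.0f} MB")  # ~63 MB

# Even triple-buffered with 4x MSAA, raw render targets stay well under
# 1GB; the bulk of a game's VRAM budget goes to textures and geometry,
# which is where a 12GB card buys real headroom at 4K.
triple_buffered_msaa_mb = single_target_mb * 3 * 4
print(f"Triple-buffered, 4x MSAA: ~{triple_buffered_msaa_mb / 1024:.1f} GB")  # ~0.7 GB
```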

The GM200 memory subsystem pairs a 384-bit memory interface with a 7GHz effective memory clock, which puts its peak memory bandwidth at 336.5GB/s, 50% higher than the GeForce GTX 980's. Pixel, vertex, and geometry shading duties are handled by its 3072 CUDA cores, while texture filtering is performed by 192 texture units. At its base clock of 1000MHz, that works out to a texture filtering rate of 192 Gigatexels/s, besting the GeForce GTX 980 by a good 33%. The GeForce GTX Titan X also packs an impressive 8 billion transistors, yet retains the same 250 watt thermal design power (TDP) as the GeForce GTX Titan Black and the original GeForce GTX Titan from 2013. Holding the TDP steady while adding that much more hardware is an impressive feat, and one that PC builders concerned about power consumption and heat output will welcome.
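For readers who want to see where those numbers come from, here's a quick sanity check in Python. The formulas (bytes per transfer times effective transfer rate, and one texel per TMU per clock) are standard rules of thumb for peak theoretical rates, not anything NVIDIA-specific.

```python
# Back-of-envelope check of the GTX Titan X's quoted throughput figures.

def memory_bandwidth_gbps(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8  # a 384-bit bus moves 48 bytes per transfer
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

def texel_rate_gtps(tmus: int, core_clock_mhz: float) -> float:
    """Peak texture filtering rate in Gigatexels/s: one texel per TMU per clock."""
    return tmus * core_clock_mhz * 1e6 / 1e9

print(memory_bandwidth_gbps(384, 7010))  # ~336.5 GB/s for the GTX Titan X
print(memory_bandwidth_gbps(256, 7010))  # ~224.3 GB/s for the GTX 980
print(texel_rate_gtps(192, 1000))        # 192 Gigatexels/s at the 1000MHz base clock
print(texel_rate_gtps(128, 1126))        # ~144 Gigatexels/s for the GTX 980, ~33% lower
```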

The GTX Titan X ships with a full implementation of GM200, while its display and video engines are unchanged from the GTX 980's GM204 GPU. (Image Source: NVIDIA)

NVIDIA also designed the GeForce GTX Titan X with overclocking in mind, giving it a 6-phase power supply with overvoltage headroom for chasing high overclocks, plus an additional 2-phase supply for the board's GDDR5 memory. This 6+2 phase design should meet the card's power demands even when overclocked. NVIDIA has also improved the board's cooling by moving the voltage-regulation modules (VRMs) closer to the card's cooling elements and increasing airflow over the board's components, which should lower temperatures and help with overclocking. In addition, NVIDIA has used polarized capacitors (POSCAPs) and molded inductors to minimize coil whine and other unwanted board noise.

Such a heavy-hitting card must be kept cool, and the GeForce GTX Titan X depends on a copper vapor chamber to cool its GM200 GPU. The vapor chamber is paired with a large, dual-slot aluminium heatsink that dissipates heat from the chip, and a blower-style fan then exhausts hot air through the back of the card and out of the system. NVIDIA's recent cooling solutions have been very quiet, and the cooler on the new GeForce GTX Titan X is no different: even under heavy load, the card was whisper quiet.

The GeForce GTX Titan X also supports the new graphics technologies that NVIDIA debuted with the GeForce GTX 980, and it's worth going over two of the key ones here. Voxel global illumination (VXGI) lets game developers render dynamic global illumination in real time on the GPU, enabling more realistic lighting and scenes with high performance. VXGI has already been integrated into Epic Games' Unreal Engine 4 and is now in developers' hands, making it easier to use the technology in upcoming games, which the GeForce GTX Titan X is in a good position to take advantage of.

The 12GB of video memory also appears justified when you consider VR Direct, another key technology NVIDIA is keen to show off with the GeForce GTX Titan X. VR content can quickly gobble up huge amounts of video memory, and the card's sizable 12GB frame buffer should ensure it has plenty in reserve. The company is betting on VR as the next leap forward in gaming, and VR Direct's asynchronous time warp feature aims to cut the latency between head movement and display update, a crucial part of a convincing and comfortable VR experience. Another feature, VR SLI, lets users combine two GeForce cards and assigns each eye's display to a specific GPU, a technique NVIDIA says will also reduce latency and increase performance.

Here is a look at how the GeForce Titan X stacks up against other top-end GPUs on the market:

NVIDIA GeForce GTX Titan X and competitive SKUs compared

Core Code
  • GTX Titan X: GM200
  • GTX 980: GM204
  • GTX Titan: GK110
  • R9 295X2: Vesuvius
GPU Transistor Count
  • GTX Titan X: 8 billion
  • GTX 980: 5.2 billion
  • GTX Titan: 7.1 billion
  • R9 295X2: 12.4 billion (2 x 6.2 billion)
Manufacturing Process
  • GTX Titan X: 28nm
  • GTX 980: 28nm
  • GTX Titan: 28nm
  • R9 295X2: 28nm
Core Clock
  • GTX Titan X: 1000MHz (Boost: 1075MHz)
  • GTX 980: 1126MHz (Boost: 1216MHz)
  • GTX Titan: 836MHz
  • R9 295X2: up to 1020MHz
Stream Processors
  • GTX Titan X: 3072
  • GTX 980: 2048
  • GTX Titan: 2688
  • R9 295X2: 5632 (2 x 2816)
Stream Processor Clock
  • GTX Titan X: 1000MHz
  • GTX 980: 1126MHz
  • GTX Titan: 836MHz
  • R9 295X2: up to 1020MHz
Texture Mapping Units (TMUs)
  • GTX Titan X: 192
  • GTX 980: 128
  • GTX Titan: 224
  • R9 295X2: 352
Raster Operator Units (ROPs)
  • GTX Titan X: 96
  • GTX 980: 64
  • GTX Titan: 48
  • R9 295X2: 128
Memory Clock (DDR)
  • GTX Titan X: 7010MHz
  • GTX 980: 7010MHz
  • GTX Titan: 6008MHz
  • R9 295X2: 5000MHz
Memory Bus Width
  • GTX Titan X: 384-bit
  • GTX 980: 256-bit
  • GTX Titan: 384-bit
  • R9 295X2: 2 x 512-bit
Memory Bandwidth
  • GTX Titan X: 336.5 GB/s
  • GTX 980: 224 GB/s
  • GTX Titan: 288.4 GB/s
  • R9 295X2: 640 GB/s
PCI Express Interface
  • GTX Titan X: PCI Express 3.0
  • GTX 980: PCI Express 3.0
  • GTX Titan: PCI Express 3.0
  • R9 295X2: PCI Express 3.0
Power Connectors
  • GTX Titan X: 1 x 6-pin, 1 x 8-pin
  • GTX 980: 2 x 6-pin
  • GTX Titan: 1 x 6-pin, 1 x 8-pin
  • R9 295X2: 2 x 8-pin
Multi-GPU Technology
  • GTX Titan X: SLI
  • GTX 980: SLI
  • GTX Titan: SLI
  • R9 295X2: AMD CrossFireX
DVI Outputs
  • GTX Titan X: 1
  • GTX 980: 1
  • GTX Titan: 2
  • R9 295X2: 1
HDMI Outputs
  • GTX Titan X: 1
  • GTX 980: 1
  • GTX Titan: 1
  • R9 295X2: 0
DisplayPort Outputs
  • GTX Titan X: 3
  • GTX 980: 3
  • GTX Titan: 1
  • R9 295X2: 4
HDCP Output Support
  • GTX Titan X: Yes
  • GTX 980: Yes
  • GTX Titan: Yes
  • R9 295X2: Yes

It looks like NVIDIA has once again produced a card to take the single-GPU performance crown. Billing it as the “most advanced GPU the world has ever seen”, NVIDIA clearly has every confidence in its new reference design, and we put the card through its paces in our lab to find out exactly how it stacks up against existing high-performance cards.
