NVIDIA GeForce GTX Titan X performance preview
The GTX Titan Z was a monster card. Unveiled at NVIDIA's GPU Technology Conference (GTC) last year, the card featured a pair of Kepler GPUs with a mind-boggling 12GB of video memory. Now, one year on, a new single-GPU Titan has stomped its way onto the scene. The GeForce GTX Titan X ships with a full implementation of NVIDIA's GM200 GPU based on the Maxwell architecture and, like the Titan Z, comes with a whopping 12GB of GDDR5 RAM, which should give it ample headroom to run the latest DirectX 12 games at 4K resolutions without running out of video memory. In terms of video connectivity, the card has three DisplayPort connectors, an HDMI 2.0 port and a dual-link DVI-I port.
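To put that headroom in perspective, here is a rough back-of-envelope estimate of how much memory full-resolution 4K render targets actually consume. The buffer count and format below are illustrative assumptions, not figures from NVIDIA; in practice, textures and geometry, not render targets, account for the bulk of a game's VRAM usage.

```python
# Back-of-envelope estimate of render-target memory at 4K (3840 x 2160).
# Buffer count and format are illustrative assumptions, not measured figures.

WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 4          # e.g. an RGBA8 color buffer or 32-bit depth buffer

def buffer_mb(count: int) -> float:
    """Memory in MB for `count` full-resolution 4K buffers."""
    return count * WIDTH * HEIGHT * BYTES_PER_PIXEL / (1024 ** 2)

# A deferred renderer might keep several G-buffer targets plus depth
# and a couple of swap-chain buffers resident at once.
render_targets_mb = buffer_mb(8)
print(f"8 render targets at 4K: {render_targets_mb:.0f} MB")         # ~253 MB

# Even a generous render-target budget is a small slice of 12GB; the rest
# is free for the textures and geometry that actually fill VRAM.
print(f"Share of 12GB: {render_targets_mb / (12 * 1024):.1%}")        # ~2.1%
```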
The GM200 memory subsystem pairs a 384-bit memory interface with a 7GHz effective memory clock, which puts its peak memory bandwidth at 336.5GB/s, 50% higher than the GeForce GTX 980's. Pixel, vertex and geometry shading duties are handled by its 3072 CUDA cores, while texture filtering is performed by 192 texture units; at the card's base clock of 1000MHz, that works out to a texture filtering rate of 192 Gigatexels/s, besting the GeForce GTX 980 by a good 33%. The GeForce Titan X packs an impressive 8 billion transistors, yet has the same 250 watt thermal design power (TDP) as the GeForce GTX Titan Black and the 2013 GeForce GTX Titan. Holding the line at the first GeForce GTX Titan's TDP while delivering far more hardware is an impressive feat that PC system builders concerned about power consumption and heat output will definitely welcome.
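Those headline figures fall straight out of the quoted specifications. Here is a quick sanity check of the arithmetic; the GTX 980 reference numbers (224GB/s bandwidth and roughly 144 Gigatexels/s) are NVIDIA's published specifications rather than values from this preview:

```python
# Reproduce the headline bandwidth and texture fill-rate figures
# from the GM200 specs quoted above.

bus_width_bits = 384
mem_clock_effective_ghz = 7.01       # GDDR5 effective data rate (~7 Gbps/pin)
bandwidth_gbs = bus_width_bits / 8 * mem_clock_effective_ghz
print(f"Peak memory bandwidth: {bandwidth_gbs:.1f} GB/s")              # 336.5 GB/s

gtx980_bandwidth_gbs = 224.0         # NVIDIA's published GTX 980 figure
print(f"vs GTX 980: +{bandwidth_gbs / gtx980_bandwidth_gbs - 1:.0%}")  # +50%

tmus = 192
base_clock_ghz = 1.0
texel_rate = tmus * base_clock_ghz
print(f"Texture fill rate: {texel_rate:.0f} Gigatexels/s")             # 192 GT/s

gtx980_texel_rate = 144.1            # 128 TMUs x ~1126 MHz base clock
print(f"vs GTX 980: +{texel_rate / gtx980_texel_rate - 1:.0%}")        # +33%
```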
NVIDIA also designed the GeForce Titan X with overclocking in mind, giving it a 6-phase power supply with overvolting headroom to hit high overclocks, plus an additional 2-phase power supply for the board's GDDR5 memory. This 6+2 phase design should meet the card's power demands even when overclocked. NVIDIA has also improved the board's cooling design by moving the voltage-regulation modules (VRMs) closer to the card's cooling elements and increasing airflow to the board's various components, which should lower temperatures and help with overclocking. In addition, NVIDIA has used polarized capacitors (POSCAPs) and molded inductors to minimize coil whine and unwanted board noise.
Such a heavy-hitting card must be kept cool, and the GeForce Titan X relies on a copper vapor chamber to cool its GM200 GPU. The vapor chamber is paired with a large, dual-slot aluminium heatsink to dissipate heat from the chip, and a blower-style fan then exhausts hot air through the back of the graphics card and out of the system. NVIDIA's recent cooling solutions have been very quiet, and the cooler on the new GeForce Titan X is no different: even under heavy load, the card was whisper quiet.
The GeForce Titan X also supports the new graphics technologies that NVIDIA debuted with the GeForce GTX 980, and it's worth going over two of the key ones here. Voxel global illumination (VXGI) lets game developers render dynamic global illumination in real time on the GPU at high performance, enabling more realistic lighting scenarios and scenes. VXGI has even been integrated into Epic Games' Unreal Engine 4 and is now in developers' hands, making it easier for them to use the technology in upcoming games, which the GeForce Titan X is well positioned to take advantage of.
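NVIDIA hasn't published VXGI's internals here, but the general idea behind voxel-based global illumination can be sketched: direct lighting is baked into a mipmapped 3D voxel grid, and each shaded point gathers indirect light by marching a few wide "cones" through progressively coarser mip levels instead of tracing thousands of rays. The snippet below is a simplified, hypothetical illustration of that cone-marching loop under assumed per-step opacity, not NVIDIA's VXGI implementation:

```python
import numpy as np

# Conceptual sketch of voxel cone tracing: gather indirect light at a point
# by marching a cone through a mipmapped 3D grid of radiance values.

def sample_mip(mips, pos, level):
    """Nearest-neighbour lookup in mip `level`; mips[0] is the full grid."""
    level = min(int(level), len(mips) - 1)
    grid = mips[level]
    idx = np.clip((pos * grid.shape[0]).astype(int), 0, grid.shape[0] - 1)
    return grid[tuple(idx)]

def trace_cone(mips, origin, direction, half_angle, max_dist=1.0):
    """Accumulate radiance along one cone with front-to-back compositing."""
    radiance, occlusion, dist = 0.0, 0.0, 0.02   # small offset avoids self-hit
    while dist < max_dist and occlusion < 1.0:
        diameter = max(2 * dist * np.tan(half_angle), 1.0 / mips[0].shape[0])
        level = np.log2(diameter * mips[0].shape[0])  # wider cone -> coarser mip
        sample = sample_mip(mips, origin + dist * direction, level)
        weight = (1.0 - occlusion) * 0.3              # assumed per-step opacity
        radiance += weight * sample
        occlusion += weight
        dist += diameter * 0.5                        # step scales with cone width
    return radiance

# Build a toy radiance grid and its mip chain, then gather with one cone.
mips = [np.random.default_rng(0).random((32, 32, 32))]
while mips[-1].shape[0] > 1:
    g = mips[-1]
    mips.append(g.reshape(g.shape[0]//2, 2, g.shape[1]//2, 2, g.shape[2]//2, 2)
                 .mean(axis=(1, 3, 5)))

point, normal = np.array([0.5, 0.5, 0.5]), np.array([0.0, 0.0, 1.0])
print(f"Indirect term from one cone: {trace_cone(mips, point, normal, np.pi/6):.3f}")
```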
The 12GB of video memory also looks justified when you consider the GeForce Titan X's implementation of VR Direct, another key technology NVIDIA is keen to show off. VR content can quickly gobble up huge amounts of video memory, and the card's sizable 12GB frame buffer should ensure it has memory to spare. The company is betting on VR being the next leap forward in gaming, and VR Direct's asynchronous timewarp feature aims to reduce the latency between a head movement and the matching on-screen update, a crucial aspect of a realistic and comfortable VR experience. Another feature, VR SLI, lets users combine two GeForce cards and assigns one GPU to render each eye's view, a technique NVIDIA says will also reduce latency and increase performance.
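Asynchronous timewarp itself is conceptually simple: just before scan-out, the already-rendered frame is re-projected using the latest head orientation, so a late head turn is corrected without waiting for a full re-render. The snippet below is a minimal, hypothetical sketch of that corrective rotation, with bare yaw matrices standing in for a real engine's quaternion pose tracking; it is not NVIDIA's VR Direct code.

```python
import numpy as np

# Minimal sketch of asynchronous timewarp: reproject a rendered frame by
# the rotation between the head pose at render time and the pose at scan-out.

def yaw_matrix(degrees: float) -> np.ndarray:
    """Rotation about the vertical axis, as a 3x3 matrix."""
    r = np.radians(degrees)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def timewarp_rotation(pose_at_render, pose_at_scanout):
    """Corrective rotation: undo the render-time pose, apply the latest pose."""
    return pose_at_scanout @ pose_at_render.T

# The frame was rendered with the head at 10 degrees of yaw, but by scan-out
# the head has turned to 12 degrees; warp the image by the 2-degree delta.
render_pose = yaw_matrix(10.0)
scanout_pose = yaw_matrix(12.0)
correction = timewarp_rotation(render_pose, scanout_pose)

# A view-space direction through the old frame, re-aimed for the new pose.
view_dir = np.array([0.0, 0.0, -1.0])
print(np.round(correction @ view_dir, 4))
```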
Here is a look at how the GeForce Titan X stacks up against other top-end GPUs on the market:
| | NVIDIA GeForce GTX Titan X | NVIDIA GeForce GTX 980 | NVIDIA GeForce GTX Titan | AMD Radeon R9 295X2 |
|---|---|---|---|---|
| Core Code | GM200 | GM204 | GK110 | Vesuvius (2x Hawaii XT) |
| GPU Transistor Count | 8 billion | 5.2 billion | 7.1 billion | 2x 6.2 billion |
| Manufacturing Process | 28nm | 28nm | 28nm | 28nm |
| Core Clock | 1000MHz | 1126MHz | 837MHz | Up to 1018MHz |
| Stream Processors | 3072 | 2048 | 2688 | 2x 2816 |
| Stream Processor Clock | 1000MHz | 1126MHz | 837MHz | Up to 1018MHz |
| Texture Mapping Units (TMUs) | 192 | 128 | 224 | 2x 176 |
| Raster Operator units (ROP) | 96 | 64 | 48 | 2x 64 |
| Memory Clock (DDR) | 7010MHz GDDR5 | 7010MHz GDDR5 | 6008MHz GDDR5 | 5000MHz GDDR5 |
| Memory Bus width | 384-bit | 256-bit | 384-bit | 2x 512-bit |
| Memory Bandwidth | 336.5GB/s | 224GB/s | 288.4GB/s | 2x 320GB/s |
| PCI Express Interface | PCIe 3.0 | PCIe 3.0 | PCIe 3.0 | PCIe 3.0 |
| Power Connectors | 1x 6-pin, 1x 8-pin | 2x 6-pin | 1x 6-pin, 1x 8-pin | 2x 8-pin |
| Multi GPU Technology | SLI | SLI | SLI | CrossFire |
| DVI Outputs | 1 | 1 | 2 | 1 |
| HDMI Outputs | 1 | 1 | 1 | — |
| DisplayPort Outputs | 3 | 3 | 1 | 4 (mini-DisplayPort) |
| HDCP Output Support | Yes | Yes | Yes | Yes |
It looks like NVIDIA has once again produced a card to take the single-GPU performance crown. Billing the card as the “most advanced GPU the world has ever seen”, NVIDIA clearly has every confidence in its new GPU's reference design, and we put the card through its paces in our lab to find out exactly how it stacks up against existing high-performance cards.