Introducing the NVIDIA GeForce GTX Titan - The True King of Kepler

By James Lu - 19 Feb 2013

A Brief History of the GK110 GPU

NVIDIA's GK110 die

In March last year, NVIDIA unleashed its 28nm Kepler architecture on the world with its flagship GeForce GTX 680, powered by the Kepler GK104 GPU. And while it was, without a doubt, one of the most powerful GPUs we had seen to date, not everyone was satisfied. You see, while GK104 was good, it wasn't GK110.

Enthusiasts who had been following the progress of Kepler pointed out that GK104's numeral code was an indication of a GPU in the middle of the range, not a flagship product. Speculation was rife that an even more powerful GPU existed, a yet-to-be-revealed GK100/GK110 GPU that would hold the true power of Kepler. Rumors of its specifications and capabilities were whispered, and a synopsis for NVIDIA's (then) upcoming GPU Technology Conference 2012 seemed to suggest that a new GPU with a staggering "7.1 billion transistors" was on the way.

The first signs of a Kepler-based 7-billion transistor GPU were spotted in March 2012 - reported exclusively by HardwareZone.

In May that year, at GTC 2012, NVIDIA did in fact unveil the rumored GK110 GPU. But while details were sparse at the time, one thing was clear: GK110 was destined for the Tesla K20, an enterprise-class card costing in the range of US$3000. And with that, the dream of a GK110 GeForce card seemed over.

Until now.

 

Meet the NVIDIA GeForce GTX Titan

By far, the most powerful consumer graphics card ever made.

The GTX Titan takes its name from the Oak Ridge National Laboratory TITAN supercomputer located in Tennessee, USA, which in November 2012 was named the world's fastest supercomputer. Like the 18,688 Tesla K20X GPU accelerators found at the heart of the TITAN supercomputer, each GTX Titan is powered by NVIDIA's GK110 GPU, which comprises 2688 CUDA cores and 7.1 billion transistors, making it, by far, the most powerful GPU ever created.

To put that in perspective, that's 75% more CUDA cores than the GTX 680, and twice the number of transistors. In fact, the GTX Titan has slightly more transistors than even the dual-GPU GeForce GTX 690!

Core clock speeds on the Titan are set to 836MHz with boost core clock speeds reaching 876MHz using NVIDIA's new GPU Boost 2.0 technology (more on that below).

For those who complained about the GTX 680's lack of memory, the Titan also boasts 50% more memory bandwidth than the GTX 680 thanks to a 384-bit memory interface, along with a whopping 6GB of VRAM clocked at 6008MHz DDR (GDDR5). The massive graphics memory will come in handy in the most intense situations, such as surround gaming with 3D enabled at the highest possible quality settings, and when operating in SLI.
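For reference, peak memory bandwidth follows directly from the effective memory clock and the bus width. The short calculation below is a simple illustration (not any official NVIDIA tool) that reproduces the bandwidth figures quoted in the specifications table further down:

```python
# Peak memory bandwidth = effective memory clock (transfers/s) x bus width (bytes/transfer).
def peak_bandwidth_gbps(effective_clock_mhz: float, bus_width_bits: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9  # in GB/s

print(peak_bandwidth_gbps(6008, 384))  # GTX Titan: ~288.4 GB/s
print(peak_bandwidth_gbps(6008, 256))  # GTX 680:  ~192.3 GB/s
```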

The reference card itself measures 266mm in length, which puts it right between the shorter 255mm GTX 680 and the longer 280mm GTX 690. Like the GTX 680, it will feature two DVI ports, one HDMI port and one DisplayPort output.

The GTX Titan will utilize the same ports as the GTX 680.

The GTX Titan will retail in the US for US$999, which is the same launch price as the dual-GPU GTX 690. NVIDIA says that both will share the top spot in its lineup. Local pricing has yet to be released; however, the GTX 690 currently retails locally for about S$1480 to S$1580, so you can expect the Titan to cost about the same. With that kind of pricing, expected performance that should rival the GTX 690, and the new features we'll be highlighting below, we doubt the GTX 690 will still appeal to enthusiasts. We will, of course, ascertain the Titan's true caliber in our upcoming performance review.

Here's a look at how the Titan stacks up against its competition:

NVIDIA GeForce GTX Titan and competitive SKUs compared

| | NVIDIA GeForce GTX Titan | NVIDIA GeForce GTX 680 (Reference Card) | AMD Radeon HD 7970 GHz Edition 3GB GDDR5 | NVIDIA GeForce GTX 690 (Reference Card) |
|---|---|---|---|---|
| Core Code | GK110 | GK104 | Tahiti XT | GK104 |
| GPU Transistor Count | 7.1 billion | 3.54 billion | 4.3 billion | 7.08 billion |
| Manufacturing Process | 28nm | 28nm | 28nm | 28nm |
| Core Clock | 836MHz | 1006MHz | 1050MHz | 915MHz |
| Stream Processors | 2688 CUDA cores | 1536 CUDA cores | 2048 stream processing units | 3072 CUDA cores |
| Stream Processor Clock | 836MHz | 1006MHz | 1050MHz | 915MHz |
| Texture Mapping Units (TMUs) | 224 | 128 | 128 | 256 |
| Raster Operator Units (ROPs) | 48 | 32 | 32 | 64 |
| Memory Clock (DDR) | 6008MHz GDDR5 | 6008MHz GDDR5 | 6000MHz GDDR5 | 6008MHz GDDR5 |
| Memory Bus Width | 384-bit | 256-bit | 384-bit | 256-bit per GPU |
| Memory Bandwidth | 288.4GB/s | 192.3GB/s | 288GB/s | 384.4GB/s (192.2GB/s per GPU) |
| PCI Express Interface | PCIe 3.0 x16 | PCIe 3.0 x16 | PCIe 3.0 x16 | PCIe 3.0 x16 |
| Power Connectors | 1 x 6-pin, 1 x 8-pin | 2 x 6-pin | 1 x 6-pin, 1 x 8-pin | 2 x 8-pin |
| Multi-GPU Technology | SLI | SLI | CrossFireX | SLI |
| DVI Outputs | 2 | 2 | 1 x Dual-Link | 3 x Dual-Link |
| HDMI Outputs | 1 | 1 | 1 | None |
| DisplayPort Outputs | 1 | 1 | 2 (version 1.2 HBR2) | 1 x Mini-DisplayPort |
| HDCP Output Support | Yes | Yes | Yes | Yes |

 

Under the Hood

For the first time, lower operating temperatures may actually translate to better in-game performance!

Reference cards aren't generally known for high-performance coolers or low acoustic levels. However, due to the complexity of the GK110 GPU, NVIDIA will not be allowing its add-in partners to customize or alter the reference card in any way, much like it did with the dual-GPU NVIDIA GeForce GTX 690.

Fortunately, NVIDIA has spared no expense on the Titan's thermal design, which primarily relies on a high-performance copper vapor chamber for cooling. The vapor chamber draws heat away from the GPU using an evaporation process similar to that of a heatpipe but, according to NVIDIA, more effective.

The heat from the vapor chamber is then dissipated by a large, dual-slot aluminum heatsink with a fin stack that extends beyond the base of the vapor chamber to increase the total cooling area. NVIDIA is also using a new thermal interface material from Shin-Etsu that supposedly offers over twice the performance of the thermal grease used on the GTX 680. Finally, the Titan has an aluminum back plate to provide additional cooling for the PCB, and a single fan to exhaust hot air.

Like the GTX 680, acoustic dampening material is used in the fan to minimize noise.

All of this is important because the GTX Titan will be the first card to use NVIDIA's new and improved GPU Boost 2.0 technology which, instead of using existing power overhead to dynamically adjust clock speeds like the original GPU Boost, now uses thermal thresholds.

 

GPU Boost 2.0

Originally launched last year with the GeForce GTX 680, NVIDIA's GPU Boost technology was designed to dynamically adjust core clock speeds by using excess available power budget to boost performance. The Titan's GPU Boost 2.0 works in a similar fashion, but uses a GPU temperature threshold instead. It will still consider the power drawn by the graphics card, but it becomes a secondary checkpoint and is no longer the main factor. 

Similar to the original GPU Boost, GPU Boost 2.0 will automatically boost the Titan's core clock speed as long as the GPU remains below a set temperature threshold - 80 degrees Celsius by default (though this can be raised). The Titan's GPU constantly monitors its temperature, adjusting core clock speed and voltage on the fly to maintain this target temperature.

The rationale behind this change is that the GPU can, in fact, operate safely at a higher power draw as long as it is kept within its temperature threshold. There are, of course, upper limits to both, but as long as they stay within the safe range, consumers stand to gain extra performance. In broad strokes, the behavior can be pictured as a simple control loop, as sketched below.
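NVIDIA has not published the internals of GPU Boost 2.0, so the following Python sketch is purely illustrative of the idea described above: raise the clock while the GPU stays under its temperature target (with power draw as a secondary check), and back off when it doesn't. Every name, step size, and limit here is an assumption on our part, not NVIDIA's actual algorithm.

```python
# Illustrative sketch of a temperature-target boost loop (NOT NVIDIA's actual algorithm).
# All names, step sizes, and limits below are hypothetical.

TEMP_TARGET_C = 80     # default GPU Boost 2.0 temperature target
POWER_LIMIT_W = 250    # secondary check: maximum board power at the 100% setting
BASE_CLOCK_MHZ = 836
MAX_BOOST_MHZ = 876    # typical boost clock quoted for the Titan

def next_clock(current_mhz, gpu_temp_c, board_power_w, step_mhz=13):
    """Return the next core clock given the current temperature and power draw."""
    if gpu_temp_c < TEMP_TARGET_C and board_power_w < POWER_LIMIT_W:
        # Thermal and power headroom available: step the clock up.
        return min(current_mhz + step_mhz, MAX_BOOST_MHZ)
    # Too hot or power-limited: step the clock back down toward base.
    return max(current_mhz - step_mhz, BASE_CLOCK_MHZ)
```

Raising the temperature target (as GPU Boost 2.0 allows) effectively widens the headroom in the first branch, which is why cooler-running cards can sustain higher clocks.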

Unlike the original GPU Boost, NVIDIA will now allow consumers to tweak GPU Boost behavior. As such, gamers who are comfortable with higher temperatures will be able to raise the GPU temperature threshold for higher clock speeds. Of course, higher temperatures also mean increased fan acoustics. Since fan noise is a big deal for many enthusiasts, NVIDIA's upcoming drivers for the Titan will also let you set a preferred fan noise level, which governs the operating temperature, which in turn limits your performance boost profile.

Furthermore, due to the change in philosophy behind how GPU Boost works and who the Titan is designed for, the power target setting in the control panel no longer defaults to the typical board power (which was 170W for the GTX 680); it now sets the graphics card's maximum power draw (250W at the 100% setting).

Hardcore liquid cooling enthusiasts should be excited by this news, as GPU Boost 2.0 looks like it will have great synergy with the low operating temperatures of a liquid cooled graphics card. That of course means getting your hands dirty to customize your own water block for this GPU, but the benefit for some enthusiasts can be immeasurable.

One more nugget of information: GPU Boost 2.0 now comes with over-voltage control, allowing you to push for higher boost clocks by raising the GPU voltage. Higher voltage directly affects the GPU's operating temperature, which is now under your control. To prevent permanent damage to the GPU, voltage tweaking is limited to a safe range. Over-voltage support is, however, optional for card vendors, who can disable it in the video BIOS, so it remains to be seen which vendors will support this function.

For now, GPU Boost 2.0 will be exclusive to the GTX Titan.

 

Display Overclocking

VSync. Gamers have a long-held love-hate relationship with it. On the one hand, it prevents the graphics tearing sometimes seen when pushing high frame rates to a 60Hz monitor; on the other hand, it limits your game to a maximum of 60 FPS. GPU Boost 2.0 might solve that problem with a new feature called Display Overclocking. Essentially, this works by adjusting (overclocking) the pixel clock of your display, allowing you to hit higher refresh rates and thus higher FPS with VSync still enabled. Not all monitors currently support this feature and we haven't had the chance to test it ourselves yet, but the idea is certainly intriguing. NVIDIA mentioned that it's basically a trial-and-error process; if your monitor doesn't support it, the display will simply fall back to its original refresh rate. The quick calculation below shows why raising the pixel clock raises the refresh rate.
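The relationship is simple: refresh rate is the pixel clock divided by the display's total pixels per frame, where the totals include blanking intervals as well as the visible resolution. The numbers below are illustrative reduced-blanking totals for a 1920x1080 mode that we've assumed for the example, not measurements from any specific monitor.

```python
# Refresh rate (Hz) = pixel clock (Hz) / (horizontal total x vertical total).
# The totals include blanking intervals, not just the visible 1920x1080 pixels.
def refresh_rate_hz(pixel_clock_mhz: float, h_total: int, v_total: int) -> float:
    return pixel_clock_mhz * 1e6 / (h_total * v_total)

# Assumed reduced-blanking totals for a 1920x1080 mode (illustrative only):
H_TOTAL, V_TOTAL = 2080, 1111
print(refresh_rate_hz(138.5, H_TOTAL, V_TOTAL))  # ~60 Hz at the stock pixel clock
print(refresh_rate_hz(185.0, H_TOTAL, V_TOTAL))  # ~80 Hz if the display tolerates the higher clock
```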

Unlike GPU Boost 2.0, NVIDIA tells us that Display Overclocking will be available for older NVIDIA cards as well, however no time frame for the updated drivers was given.


 

Full Performance Review Coming Soon!

Will the GTX Titan live up to expectations? Find out in our full review, coming at 10PM on 21 February 2013! Until then, an NDA keeps all performance matters under wraps.

Edit: You can now read the full review here!
