Graphics Cards Guide
NVIDIA GeForce GTX Titan - The True King of Kepler
A Brief History of the GK110 GPU
Last year, in March 2012, NVIDIA unleashed its 28nm Kepler architecture on the world with the release of the NVIDIA GeForce GTX 680, powered by its flagship GK104 GPU. And while it was, without a doubt, one of the most powerful GPUs we had seen to date, not everyone was satisfied. You see, while GK104 was good, it wasn't GK110.
Enthusiasts who had been following the progress of Kepler pointed out that GK104's numeral code indicated a mid-range GPU, not a flagship product. Speculation was rife that an even more powerful GPU existed, a yet-to-be-revealed GK100/GK110 that would hold the true power of Kepler. Rumors of its specifications and capabilities were whispered, and a synopsis for NVIDIA's (then) upcoming GPU Technology Conference 2012 seemed to suggest that a new GPU with a staggering "7.1 billion transistors" was on the way.
In May that year, at GTC 2012, NVIDIA did in fact unveil the rumored GK110 GPU. But while details were sparse at the time, one thing was clear: GK110 was destined for the Tesla K20, an enterprise-class card costing in the range of US$3000. And with that, the dream of a GK110 GeForce card seemed over.
Meet the NVIDIA GeForce GTX Titan
The GTX Titan takes its name from the Oak Ridge National Laboratory TITAN supercomputer located in Tennessee, USA, which in November 2012 was named the world's fastest supercomputer. Like the 18,688 Tesla K20X GPU accelerators found at the heart of the TITAN supercomputer, each GTX Titan is powered by NVIDIA's GK110 GPU, which comprises 2688 CUDA Cores and 7.1 billion transistors, making it, by far, the most powerful GPU ever created.
To put that in perspective, that's 75% more CUDA Cores than the GTX 680, and twice the number of transistors. In fact, the GTX Titan has slightly more transistors than even the dual-GPU GeForce GTX 690!
The Titan's core clock is set at 836MHz, with boost clock speeds reaching 876MHz via NVIDIA's new GPU Boost 2.0 technology (more on that below).
For those who complained about the GTX 680's limited memory, the Titan also boasts 50% more memory bandwidth than the GTX 680 thanks to its 384-bit memory interface, along with a whopping 6GB of GDDR5 VRAM clocked at 6008MHz DDR. The massive graphics memory will come in handy in the most intense situations, such as surround gaming with 3D enabled at the highest game quality settings, and when operating in SLI.
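As a quick sanity check on that 50% figure: peak memory bandwidth is simply the effective GDDR5 data rate multiplied by the bus width. Since both cards run their memory at an effective 6008MHz, the advantage comes entirely from the wider 384-bit bus:

```python
# Quick check of the quoted memory-bandwidth figures.
# Peak bandwidth (GB/s) = effective data rate (GT/s) * bus width (bits) / 8

def bandwidth_gbs(data_rate_mtps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for a graphics memory interface."""
    return data_rate_mtps / 1000 * bus_width_bits / 8

titan = bandwidth_gbs(6008, 384)    # GTX Titan: 384-bit @ 6008 MT/s effective
gtx680 = bandwidth_gbs(6008, 256)   # GTX 680: 256-bit @ 6008 MT/s effective

print(f"Titan:   {titan:.1f} GB/s")      # ~288.4 GB/s
print(f"GTX 680: {gtx680:.1f} GB/s")     # ~192.3 GB/s
print(f"Ratio:   {titan / gtx680:.2f}x") # 1.50x -- the quoted 50% advantage
```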
The reference card itself measures 266mm in length, which puts it right between the shorter 255mm GTX 680 and the longer 280mm GTX 690. Like the GTX 680, it will feature two DVI ports, one HDMI port and one DisplayPort output.
The GTX Titan will retail in the US for US$999, the same launch price as the dual-GPU GTX 690. NVIDIA says the two cards will share the top spot in its lineup. Local pricing has yet to be released; however, the GTX 690 locally ranges from about S$1480 to S$1580, so you can expect the Titan to cost about the same. With that kind of pricing, expected performance that should rival the GTX 690, and the new features we highlight below, we doubt the GTX 690 will still appeal to enthusiasts. We will of course ascertain the Titan's true caliber in our upcoming performance review.
Here's a look at how the Titan stacks up against its competition:
Under the Hood
Reference cards aren't generally known for high-performance coolers or low noise levels. However, due to the complexity of the GK110 GPU, NVIDIA will not allow its add-in partners to customize or alter the reference card in any way, much as it did with the dual-GPU NVIDIA GeForce GTX 690.
Fortunately, NVIDIA has spared no expense in the thermal design of the Titan which primarily uses a high performance copper vapor chamber to provide its cooling. The vapor chamber draws heat away from the GPU using an evaporation process similar to how a heatpipe performs, but according to NVIDIA, in a more effective manner.
The heat from the vapor chamber is then dissipated by a large, dual-slot aluminum heatsink and an extended fin stack which extends beyond the base of the vapor chamber to increase the total cooling area. NVIDIA is also using a new thermal interface material from a company called Shin-Etsu that supposedly boasts over two times the performance of the thermal grease used on the GTX 680. Finally, the Titan also has an aluminum back plate to provide additional cooling for the PCB and a single fan to exhaust hot air.
Like the GTX 680, acoustic damping material is used in the fan to minimize noise.
All of this is important because the GTX Titan will be the first card to use NVIDIA's new and improved GPU Boost 2.0 technology which, instead of using existing power overhead to dynamically adjust clock speeds like the original GPU Boost, now uses thermal thresholds.
GPU Boost 2.0
Originally launched last year with the GeForce GTX 680, NVIDIA's GPU Boost technology was designed to dynamically adjust core clock speeds by using excess available power budget to boost performance. The Titan's GPU Boost 2.0 works in a similar fashion, but uses a GPU temperature threshold instead. It will still consider the power drawn by the graphics card, but it becomes a secondary checkpoint and is no longer the main factor.
Similar to the original GPU Boost, GPU Boost 2.0 will automatically boost the Titan's core clock speed as long as the GPU remains below a certain temperature - by default, 80 degrees Celsius (though it can be raised). The Titan's GPU constantly monitors its temperature, adjusting core clock speed and voltage on-the-fly to maintain this target temperature.
The rationale for this change is that the GPU can in fact operate safely at a higher power draw as long as it's kept within its temperature threshold. Of course, there are upper limits to both, and as long as they aren't exceeded, consumers stand to gain more performance.
Unlike the original GPU Boost, NVIDIA will now allow consumers to tweak GPU Boost behavior. As such, gamers who are comfortable with higher temperatures will be able to raise the GPU temperature threshold for higher clock speeds. Of course, higher temperatures also mean increased fan noise. Since fan noise is a big deal for many enthusiasts, NVIDIA's upcoming drivers for the Titan will also allow you to set a preferred fan noise level, which governs the operating temperature, which in turn limits your performance boost profile.
Furthermore, due to the change in how GPU Boost works and who the Titan is designed for, the power target setting in the control panel no longer defaults to the typical board power (which was 170W for the GTX 680); it now sets the graphics card's maximum power draw (which is 250W at the 100% setting).
Hardcore liquid cooling enthusiasts should be excited by this news, as GPU Boost 2.0 looks like it will have great synergy with the low operating temperatures of a liquid-cooled graphics card. That of course means getting your hands dirty fitting a custom water block to this GPU, but the benefit for some enthusiasts could be substantial.
One more nugget of information is that GPU Boost 2.0 now comes with over-voltage control, allowing you to push for higher boost clocks by increasing GPU voltage. The latter directly affects the GPU's operating temperature, which is now in your direct control. Of course, to prevent permanent damage to the GPU, voltage tweaking is limited to a safe range. However, over-voltage support by card vendors is optional and they can disable it in the video BIOS, so it remains to be seen which vendors will support this function.
For now, GPU Boost 2.0 will be exclusive to the GTX Titan.
VSync. Gamers have a long-held love-hate relationship with it. On the one hand, it prevents the graphics tearing sometimes seen when pushing high frame rates to a 60Hz monitor; on the other hand, it limits your game to a maximum of 60 FPS. GPU Boost 2.0 might solve that problem with a new feature called Display Overclocking. Essentially, this works by adjusting (overclocking) the pixel clock of your display, allowing you to hit higher refresh rates and thus higher FPS with VSync still enabled. Not all monitors support this feature and we haven't had the chance to test it ourselves yet; however, the idea is certainly intriguing. NVIDIA mentioned that it's basically a trial-and-error process, and if your monitor doesn't support it, the display will simply fall back to its original refresh rate.
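The reason raising the pixel clock raises the refresh rate is straightforward: refresh rate equals pixel clock divided by the total number of pixels per frame (active resolution plus blanking). A small sketch using the standard CEA-861 timing for 1080p illustrates the relationship; the 185.6MHz overclock figure below is our own hypothetical example, not an NVIDIA number:

```python
# refresh (Hz) = pixel clock (Hz) / (horizontal total * vertical total)
# Totals below are the standard CEA-861 timing for 1920x1080 at 60Hz.

H_TOTAL, V_TOTAL = 2200, 1125  # active 1920x1080 plus blanking intervals

def refresh_hz(pixel_clock_mhz: float) -> float:
    """Refresh rate for a fixed 1080p timing at a given pixel clock."""
    return pixel_clock_mhz * 1_000_000 / (H_TOTAL * V_TOTAL)

print(f"{refresh_hz(148.5):.1f} Hz")  # 60.0 Hz at the stock 1080p pixel clock
print(f"{refresh_hz(185.6):.1f} Hz")  # ~75 Hz if the panel tolerates the overclock
```

Whether a given panel actually tolerates the faster pixel clock is the trial-and-error part NVIDIA refers to.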
Unlike GPU Boost 2.0, NVIDIA tells us that Display Overclocking will be available for older NVIDIA cards as well, however no time frame for the updated drivers was given.