Event Coverage

NVIDIA's GPU Computing Ecosystem - The Tesla Aspect

By Wong Chung Wee - 18 May 2012


2-Tier Market Strategy for Tesla

We had the chance to speak to Dr. Sumit Gupta, Senior Director for Tesla GPU Computing, at one of the breakout sessions at GTC 2012. As we reported earlier in our news coverage, there is a clear distinction between the Tesla K10 and the yet-to-be-launched K20, primarily due to the company's strategy of targeting different markets and industries within high-performance computing (HPC).

The Tesla roadmap progression.

Dr. Gupta reiterated that product differentiation within the updated Tesla family is important, as the HPC audience matures over time and its needs become more refined.

Specifications of Tesla K10 and K20

Model | Tesla K10 | Tesla K20
Core Code | 2 x GK104 | GK110
Transistor Count | 2 x 3540 million | 7100 million
Manufacturing Process | 28nm | 28nm
Core Clock | 745MHz | N.A.
Stream Processors | 2 x 1536 stream processing units | N.A.
Stream Processor Clock | 745MHz | N.A.
Texture Mapping Units (TMU) or Texture Filtering (TF) Units | 2 x 128 | N.A.
Raster Operator Units (ROP) | 2 x 32 | N.A.
Memory Clock | 2500MHz GDDR5 | N.A.
Memory Bus | 2 x 256-bit | 384-bit
Memory Bandwidth | 2 x 160GB/s | N.A.
PCI Express Interface | PCIe ver 3.0 x16 | PCIe ver 3.0 x16
Power Connectors | 1 x 6-pin, 1 x 8-pin (TDP: 225W) | N.A.
Multi GPU Technology | SLI | SLI

The Tesla K10 block diagram: the dual GK104 GPUs are connected by a PCI Express switch, just like on the consumer GeForce GTX 690.

The Tesla K10 sports two GK104 GPUs, the same chip featured on the GeForce GTX 680 and GTX 690 graphics cards; essentially, the Tesla K10 is quite similar to the consumer GeForce GTX 690. The accelerator has already found use in the oil and gas as well as defense industries for signal and image processing, tasks that are less demanding in terms of computing requirements.

Dr. Gupta said that NVIDIA is able to extend HPC to any interested party that requires such computing capabilities by making its graphics cards and accelerators readily available. Coupled with its CUDA framework, which makes it easier to develop software solutions that integrate this hardware into the GPU computing ecosystem, he said the major impediment to the acceptance of GPU-accelerated HPC is inertia.

He acknowledged the technical difficulties in coding software for GPU-accelerated applications and services; hence, existing systems whose applications have not been ported to leverage GPU acceleration may never adopt its benefits, due to resistance from people with vested interests in those systems. He hoped such parties will overcome their inertia, and noted that NVIDIA has its Dev Tech team in place. This outfit assists in porting these systems so that they are CUDA-enabled and ready to tap the parallel processing capabilities of the GPUs.
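To give a sense of the kind of porting work involved, here is a minimal sketch (not NVIDIA's or any customer's actual code) of how a plain, data-parallel C loop, of the sort found in signal and image processing workloads, might be re-expressed as a CUDA kernel so that each element is handled by its own GPU thread. The kernel name, gain parameter and buffer sizes are all hypothetical, chosen purely for illustration.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical example: apply a gain to a signal buffer.
// On the CPU this would be: for (i = 0; i < n; ++i) out[i] = in[i] * gain;
__global__ void scaleSignal(const float *in, float *out, float gain, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per sample
    if (i < n)
        out[i] = in[i] * gain;
}

int main(void)
{
    const int n = 1 << 20;                  // 1M samples (illustrative size)
    size_t bytes = n * sizeof(float);

    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc((void **)&d_in,  bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every sample.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scaleSignal<<<blocks, threads>>>(d_in, d_out, 2.0f, n);
    cudaDeviceSynchronize();

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[42] = %f\n", h_out[42]);    // expect 84.0

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
```

In practice, of course, the Dev Tech team's work goes well beyond such one-line loops, but the same pattern of mapping independent iterations onto thousands of GPU threads is what lets ported applications exploit the stream processors listed in the table above.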

Out of curiosity, we asked if GPU Boost was enabled on these new Tesla GPUs, to which he replied that it had to be disabled as it introduced GPU frequency jitter. Dr. Gupta added that in the near future, the adoption of GPU-accelerated software services by the manufacturing industries will grow. One advantage is the ability to simulate drop-test results and shorten the entire manufacturing workflow significantly, allowing manufacturers to bring their products to the retail market in a shorter time. He disclosed that the Motorola Atrix phone was one beneficiary of this approach. It appears NVIDIA's GPU computing ecosystem will extend far and wide as inertia is gradually overcome and its advantages become more evident.
