NVIDIA's top-of-the-range 21-billion-transistor Tesla V100 GPU gets even better

By Vijay Anand - on 28 Mar 2018, 1:05am

Unveiled about a year ago at GTC 2017, NVIDIA's 5,120 CUDA-core Volta architecture (GV100) GPU with 21 billion transistors, the Tesla V100, finally became available in production systems late last year. Less than a quarter after its debut, the 16GB HBM2-equipped Tesla V100 has been updated with denser memory chips, doubling its memory capacity.

The new Tesla V100 32GB GPU uses the same fast, high-efficiency HBM2 memory as its 16GB predecessor, delivering up to 900GB/sec of peak memory bandwidth. The 32GB edition will be offered in both the NVLink and PCIe connectivity variants.
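As a back-of-the-envelope check, the quoted 900GB/sec peak figure lines up with the V100's 4096-bit HBM2 interface (four 1024-bit stacks; the bus width is an assumption on our part, not stated in this article):

```python
# Back-of-the-envelope HBM2 bandwidth check.
# Assumption (not from the article): the Tesla V100 pairs four HBM2
# stacks, each with a 1024-bit interface, for a 4096-bit memory bus.
bus_width_bits = 4 * 1024                  # total HBM2 bus width in bits
bytes_per_transfer = bus_width_bits // 8   # 512 bytes moved per transfer

peak_bandwidth = 900e9                     # quoted 900GB/sec peak

# Per-pin data rate needed to reach the quoted peak:
transfer_rate = peak_bandwidth / bytes_per_transfer
print(f"{transfer_rate / 1e9:.2f} GT/s per pin")  # ~1.76 GT/s
```

That works out to roughly 1.76 gigatransfers per second per pin, consistent with HBM2's double-data-rate signaling at a clock in the high-800MHz range.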

Why does the increased onboard memory matter?

The extra memory helps with training larger and more complex data sets, tackling large FFTs, driving deep neural network (DNN) training to even lower error rates, processing larger and higher-resolution content, and much more. Initial tests show up to a 1.5x uplift in performance when given the ideal data set to crunch.
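To see why capacity matters for DNN training in particular, activation memory grows roughly linearly with batch size, so doubling card memory roughly doubles the largest batch (or model) that fits. A minimal sizing sketch, using entirely made-up model numbers rather than V100 benchmarks:

```python
# Rough, hypothetical sizing sketch: how large a batch fits in GPU memory?
# All figures below are illustrative assumptions, not measured V100 data.
def max_batch_size(gpu_mem_gb, model_gb, per_sample_mb):
    """Largest batch whose weights + activations fit in gpu_mem_gb."""
    free_mb = (gpu_mem_gb - model_gb) * 1024  # memory left after weights
    return int(free_mb // per_sample_mb)

model_gb = 2.0        # assumed weights + optimizer state
per_sample_mb = 50.0  # assumed activation memory per training sample

print(max_batch_size(16, model_gb, per_sample_mb))  # 16GB card -> 286
print(max_batch_size(32, model_gb, per_sample_mb))  # 32GB card -> 614
```

Under these assumptions, the 32GB card fits more than double the batch of the 16GB card, since the fixed model footprint is amortized over a larger pool of free memory.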

NVIDIA hasn't pegged a price for these expensive Tesla V100 GPUs, as they are largely sold through big system vendors in the data center business. A quick Google search revealed that the existing PCIe version of the Tesla V100 costs a staggering US$8,500, and we believe the newer 32GB editions will cost more.

A close-up of an NVIDIA Tesla V100 NVLink GPU module.

Read Next: Highlights of the Tesla V100 GPU and Volta GPU architecture