NVIDIA adds A100 PCIe GPUs to boost AI, data science and HPC server offerings

By Vijay Anand - on 22 Jun 2020, 3:01pm

Following the launch of NVIDIA’s powerful Ampere architecture through the A100 Tensor Core GPU, which is expressly designed for demanding data center workloads, NVIDIA is now bringing the GPU to market in a plug-in PCIe form factor dubbed the A100 PCIe.

Previously, the only ways vendors could deploy servers with the A100 GPU were via NVIDIA’s own DGX A100 supercomputer (which uses eight A100 GPUs in the SXM form factor) or their HGX A100 reference server platform deployed by their server/HPC channel partners (via four- or eight-GPU SXM baseboards).

With another form factor option in the NVIDIA A100 PCIe, many more server configurations become possible for accelerating AI, data science and scientific computing workloads at a reduced price point. The A100 GPU’s architectural breakthroughs already make it many times more powerful than the Volta-based GPUs before it, and these new servers can be equipped with anything from a single A100 PCIe up to four of them on existing motherboard designs, without requiring an HGX platform.
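
To give a rough idea of what that looks like from the software side, here is a minimal, generic CUDA runtime sketch (not NVIDIA’s own tooling) that enumerates whatever GPUs a server exposes and prints their name and memory. In a box fitted with one to four A100 PCIe cards, each card would simply appear as another CUDA device.

```cpp
// Minimal sketch: enumerate the CUDA devices a server exposes.
// A100 PCIe cards show up as ordinary CUDA devices; no HGX-specific
// software stack is needed just to see and use them.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    std::printf("CUDA devices found: %d\n", deviceCount);
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, %.1f GB, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.major, prop.minor);
    }
    return 0;
}
```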

For those curious, the NVIDIA A100 PCIe and the A100 SXM are identical in capabilities, except that the PCIe version can only scale to work with one other A100 PCIe via an NVLink bridge. So while the SXM variant, whether on the HGX server platform or in the DGX A100 supercomputer, can scale to four or eight A100 GPUs working together through NVSwitch, the PCIe version can only pair two A100 PCIe GPUs. Also, due to the physical form factor difference, its rated power consumption (TDP) is 250 watts, whereas the SXM version has a TDP of 400 watts per GPU.
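
As a rough illustration of what that two-GPU pairing enables at the software level, the generic CUDA sketch below checks whether two devices can access each other’s memory directly, which is the capability an NVLink bridge (or plain PCIe peer-to-peer) provides. Note that this runtime query does not reveal which transport is actually in use.

```cpp
// Minimal sketch: check and enable direct peer access between GPU 0 and GPU 1,
// the kind of pairing an NVLink bridge between two A100 PCIe cards allows.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        std::printf("Fewer than two GPUs present; nothing to pair.\n");
        return 0;
    }

    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);  // can device 0 reach device 1?
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);  // and the reverse direction?

    if (canAccess01 && canAccess10) {
        // Enable direct loads/stores and memory copies between the two GPUs.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        std::printf("Peer access enabled between GPU 0 and GPU 1.\n");
    } else {
        std::printf("GPUs 0 and 1 cannot access each other's memory directly.\n");
    }
    return 0;
}
```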

Starting today, several server manufacturers such as ASUS, Cisco, Dell, Fujitsu, Gigabyte, Lenovo, One Stop Systems, Supermicro and many more are gearing up to offer everything from single A100 PCIe GPU systems all the way up to servers with eight or more GPUs through the previously announced HGX A100 reference platforms, giving customers the right solution and scaling for their needs. As the A100 PCIe belongs to NVIDIA's data center portfolio, it will only be offered through NVIDIA's server and HPC channel partners as part of new, qualified systems, and not as an upgrade option for existing servers.

(Off-topic note: So does this mean the yet-to-be-announced GeForce RTX 3090 has a TDP of 250W? We’ll know soon.)
