NVIDIA wants to make it easier for companies to put its GPUs in ARM servers

By Koh Wanzi - on 20 Nov 2019, 6:05pm

Image Source: NVIDIA

NVIDIA has announced a reference design platform to help companies quickly build ARM-based servers accelerated by NVIDIA GPUs. CEO Jensen Huang unveiled the platform, which comprises hardware and software building blocks, at the SC19 supercomputing conference in Denver. It is intended to speed up the deployment of NVIDIA GPUs in ARM servers for workloads such as AI and exascale supercomputing.

According to Huang, this is intended to address the growing demand in the high performance computing community to take advantage of a broader range of CPU architectures. It will allow supercomputing centres, hyperscale cloud operators, and enterprises to leverage both NVIDIA's accelerated computing platform and ARM-based servers.

NVIDIA worked with ARM and its ecosystem partners to build the reference platform, thus ensuring that NVIDIA GPUs can function effectively alongside ARM chips. On top of that, several HPC software companies have also used NVIDIA CUDA-X libraries to build GPU-enabled management and monitoring tools that can run on ARM servers.

"Breakthroughs in machine learning and AI are redefining scientific methods and enabling exciting opportunities for new architectures. Bringing NVIDIA GPUs to ARM opens the floodgates for innovators to create systems for growing new applications from hyperscale cloud to exascale supercomputing," said Huang.

Separately, NVIDIA also said that it is working closely with developers to bring GPU acceleration to ARM for HPC applications like GROMACS, LAMMPS, MILC, NAMD, Quantum Espresso, and Relion. This is on top of making its own software compatible with ARM, and the company says it has already compiled extensive code with its partners to support GPU acceleration for these applications on the ARM platform.
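In practice, bringing an existing CUDA application to ARM is largely a matter of recompiling the host-side code for an ARM (aarch64) CPU, since the GPU kernels themselves are independent of the host architecture. As a minimal sketch of that portability claim (a hypothetical vector-add example, not code from NVIDIA or any of the applications named above), the same source would build unchanged with `nvcc` on either an x86 or ARM server:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// The kernel is identical regardless of the host CPU architecture;
// only the host-side code gets recompiled for ARM (aarch64).
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the host/device plumbing minimal.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On an ARM server this would be compiled with the same command used on x86, e.g. `nvcc vec_add.cu -o vec_add` — that unchanged workflow is essentially what the reference platform and the ARM-enabled CUDA-X libraries promise.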

In another nod to the AI and HPC space, NVIDIA also announced Magnum IO, a suite of software that helps data scientists and HPC researchers process massive amounts of data. Moving and processing data at that scale can normally take hours, but Magnum IO aims to cut it down to mere minutes.

Image Source: NVIDIA

According to NVIDIA, Magnum IO is optimised to eliminate storage and I/O bottlenecks. It can reportedly deliver up to 20x faster data processing for multi-server and multi-GPU compute nodes, all while working with huge data sets to carry out complex workloads like financial analysis and climate modelling.

Finally, NVIDIA revealed a scalable GPU-accelerated supercomputer that is now available in the cloud on Microsoft Azure. For the first time, customers can rent an entire AI supercomputer on demand from their desk, effectively matching the capabilities of large-scale on-premises supercomputers that can take months to deploy.

"Until now, access to supercomputers for AI and high performance computing has been reserved for the world's largest businesses and organisations," said Ian Buck, Vice President and General Manager of Accelerated Computing at NVIDIA. "Microsoft Azure's new offering democratises AI, giving wide access to an essential tool needed to solve some of the world's biggest challenges."