NVIDIA launches its new GPU Cloud platform for easy A.I. development

By Vijay Anand & John Law - on 11 May 2017, 7:54pm

How do you move all these frameworks to the cloud?

With NVIDIA's GPU Cloud, now you can!

Earlier today during NVIDIA’s GPU Technology Conference (GTC) 2017, Jen-Hsun Huang, CEO of NVIDIA, launched the NVIDIA GPU Cloud (NGC) platform.

According to Jen-Hsun, the new cloud-based platform gives developers convenient access to a comprehensive software suite and to the company’s GPU resources, letting them further their endeavors in A.I. while also aiding institutions pursuing their own Deep Learning initiatives.

“We’re designing a cloud platform that will unleash AI developers, so they can build a smarter world,” said Jim McHugh, vice president and general manager at NVIDIA. “You can do your best work no matter where you are, using our latest technology in the cloud. It’s accelerated computing when and where you need it.”

NGC is accessible via a developer’s PC (powered by either a TITAN X or GeForce GTX 1080 Ti), an NVIDIA DGX system, or the cloud. Developers who use the NGC will gain the following benefits, as per NVIDIA’s description:

  • Purpose Built: Designed for deep learning on the world’s fastest GPUs.
  • Optimized and Integrated: The NGC Software Stack will provide a wide range of software, including the Caffe, Caffe2, CNTK, MXNet, TensorFlow, Theano and Torch frameworks, as well as the NVIDIA DIGITS GPU training system, the NVIDIA Deep Learning SDK (for example, cuDNN and NCCL), nvidia-docker, GPU drivers and NVIDIA CUDA for rapidly designing deep neural networks.
  • Convenient: With just one NVIDIA account, NGC users will have a simple application that guides them through deep learning workflow projects across all system types, whether PC, DGX system or NGC.
  • Versatile: It’s built to run anywhere. Users can start with a single GPU on a PC and add more compute resources on demand with a DGX system or through the cloud. They can import data, set up the job configuration, select a framework and hit run. The output could then be loaded into TensorRT for inferencing.
  • Ease of use: Just select a computing node (local, networked or cloud), a data set, and a container to run.

This is especially helpful when moving from one system to another, as it’s easier to pull a workload from the cloud than to lug the data around everywhere.

NVIDIA's new endeavor is expected to debut in July 2017 as a beta service.
