
How CUDA-X AI and supporting hardware will power data science, the new pillar of discovery!

By Vijay Anand - 21 Mar 2019


What’s CUDA-X about?

The CUDA programming model has been around since 2006 as a way to tap NVIDIA’s GPUs for general-purpose parallel computing. Over the years, numerous libraries and subroutines have been built on top of it to accelerate domains such as computational fluid dynamics, financial analysis, bioinformatics, computational chemistry, medical imaging, weather and climate prediction, computer vision and much more. As a result, CUDA often refers to both the means of programming the GPU and its accelerated computing libraries.
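For those unfamiliar with what programming the GPU via CUDA actually looks like, here is a minimal sketch in Python using the Numba library (one of several CUDA bindings; Numba and the array sizes here are illustrative assumptions, not something NVIDIA’s announcement covers). The idea is that a small kernel function is written once and then executed by thousands of GPU threads in parallel:

import numpy as np
from numba import cuda   # assumption: Numba is installed and a CUDA-capable GPU is present

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # this thread's global index
    if i < out.size:          # guard against threads beyond the array bounds
        out[i] = a[i] + b[i]  # each thread computes one element in parallel

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

d_a, d_b = cuda.to_device(a), cuda.to_device(b)   # copy inputs to GPU memory
d_out = cuda.device_array_like(a)                 # allocate the output on the GPU

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](d_a, d_b, d_out)   # launch the kernel

print(d_out.copy_to_host()[:5])                   # copy the result back and peek at it

In practice, most users never write kernels like this themselves; they rely on the CUDA-accelerated libraries that NVIDIA now groups under the CUDA-X name.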

As NVIDIA’s portfolio grows and CUDA finds applicability in more domains, the company has re-positioned and re-branded how it addresses the different aspects of CUDA. Most importantly, NVIDIA CUDA-X will now represent all of its GPU-accelerated computing libraries – be it for RTX, HPC, AI, Drive, Isaac, Clara or Metropolis. Over 40 libraries are available under the CUDA-X umbrella, and 15 of them are focused on AI.

To simplify the understanding of this stack: the bottom-most layer is the hardware (RTX, DGX, HGX and AGX); the next layer is the common programming model that addresses all of that hardware – CUDA. Above that sit the various GPU-accelerated libraries that use CUDA to address different domains of use. Lastly, the top layer is NVIDIA’s GPU Cloud (NGC), which can be tapped for cloud-based deployment, processing power and much more.

Of particular focus at GTC 2019 is NVIDIA’s CUDA-X AI, the company’s new and growing effort to accelerate AI and data science. We’re at a tipping point where many everyday businesses are turning to AI to make sense of all the data generated in the course of doing business. That calls for deep learning, machine learning and data analytics, among other techniques. A typical workflow for implementing AI smarts consists of data processing, feature determination, training, verification and deployment – a sketch of which follows below.
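To make those stages concrete, here is a rough sketch of such a workflow in Python, using pandas and scikit-learn as CPU-based stand-ins (the file name, column names and model choice are hypothetical, purely for illustration). The pitch of CUDA-X AI is that each of these steps can be swapped for a GPU-accelerated equivalent:

import pandas as pd
import joblib
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# 1. Data processing: load and clean the raw data (hypothetical file and columns)
df = pd.read_csv("transactions.csv").dropna()

# 2. Feature determination: pick the columns the model will learn from
X = df[["amount", "account_age_days", "num_prior_purchases"]]
y = df["is_fraud"]

# 3. Training: fit a model on a training split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)

# 4. Verification: check the model against held-out data
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 5. Deployment: persist the trained model so it can be served elsewhere
joblib.dump(model, "fraud_model.joblib")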

With the launch of the CUDA-X AI SDK for GPU-accelerated data science, CUDA-X AI can unlock the potential of NVIDIA’s Tensor Core GPUs to address the end-to-end AI pipeline outlined above. In terms of availability and readiness for use, here’s a word from NVIDIA:

CUDA-X AI is widely available. Its software-acceleration libraries are integrated into all deep learning frameworks, including TensorFlow, PyTorch, and MXNet, and popular data science software such as RAPIDS. They’re part of leading cloud platforms, including AWS, Microsoft Azure, and Google Cloud. And they’re free as individual downloads or containerized software stacks from NGC. CUDA-X AI libraries can be deployed everywhere on NVIDIA GPUs, including desktops, workstations, servers, cloud computing, and internet of things (IoT) devices.

More recently, Microsoft’s Azure Machine Learning (AML) service became the first major cloud platform to integrate RAPIDS, a key component of CUDA-X AI. This allows data scientists to speed up their machine learning projects by up to 20x by running NVIDIA’s CUDA-X AI on the Microsoft Azure cloud. According to NVIDIA, this is the first time RAPIDS has been natively integrated into a cloud data science platform.
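To a data scientist, the appeal of RAPIDS is that its cuDF and cuML libraries mirror the familiar pandas and scikit-learn APIs, so the earlier workflow sketch translates almost line for line onto the GPU. The snippet below is again only an illustrative sketch, assuming a working RAPIDS installation and the same hypothetical dataset:

import cudf                                        # GPU dataframes (pandas-like API)
from cuml.ensemble import RandomForestClassifier   # GPU machine learning (scikit-learn-like API)
from cuml.model_selection import train_test_split
from cuml.metrics import accuracy_score

# Data processing and feature determination now happen entirely in GPU memory
gdf = cudf.read_csv("transactions.csv").dropna()
X = gdf[["amount", "account_age_days", "num_prior_purchases"]].astype("float32")
y = gdf["is_fraud"].astype("int32")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Training and verification run on the GPU as well
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))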


Addressing data science needs - the 4th pillar in cloud computing?

Traditionally, data centers have housed two types of computers. The first is the supercomputer, designed for a singular task or function, whose sole purpose is to produce the required results as fast as possible. The second is the hyperscale computer, designed to serve a large number of users with various small tasks in the most cost-effective way.

Data science is a new HPC workload made possible only by the sheer volume of data now being collected, by modern algorithms and machine learning techniques, and by the availability of strong computational horsepower. For problems that are too large, complicated and unstructured to model directly, the only practical solution is a data-driven approach that analyzes the data and extracts probable outcomes – which is what data science is all about.

Therefore, NVIDIA believes that data science will become the fourth pillar of cloud computing, after compute, networking and storage – all of which are currently served by a combination of NVIDIA and partner hardware.


New data science servers!

To that end, NVIDIA has announced that NVIDIA-powered enterprise servers optimized for data science are now available from seven major enterprise hardware partners: Cisco, Dell EMC, Fujitsu, HPE, Inspur, Lenovo and Sugon. Featuring NVIDIA Tesla T4 GPUs tuned to run the NVIDIA CUDA-X AI acceleration libraries, the servers target businesses that need a highly efficient platform to crunch data science problems alongside a host of other enterprise workloads such as AI training and inference, machine learning, data analytics and virtual desktops – all while enjoying the support network and high service levels that these big-name vendors offer their customers.

"Now, with a wave of mainstream NVIDIA-powered servers optimized for data science, companies worldwide can deploy accelerated AI at a faster pace across their entire business." – Ian Buck, VP and GM of Accelerated Computing at NVIDIA

Additionally, these mainstream Tesla T4 servers announced at GTC 2019 have been validated as NVIDIA NGC-Ready, a designation reserved for servers with a demonstrated ability to excel across a full range of accelerated workloads.

All software tested as part of the NGC-Ready validation process is available from NVIDIA NGC, a comprehensive repository of GPU-accelerated software, pre-trained AI models, and model training scripts for data analytics, machine learning, deep learning and high-performance computing, all accelerated by CUDA-X AI. In fact, NVIDIA has expanded the NGC software hub with updated tools and pre-trained models to help data scientists churn out optimized solutions faster.

On a related note, NVIDIA Tesla T4 GPUs will also be coming to Amazon Web Services via its new EC2 G4 instances. Paired with NVIDIA CUDA-X AI acceleration software from the AWS Marketplace, they make an ideal platform for deploying cost-efficient machine learning, deep learning and graphics workloads.


New workstations targeted for data scientists

NVIDIA is also introducing a new breed of personal high-performance workstations for data scientists around the world by teaming up with leading OEMs and system builders. Purpose-built for data analytics, machine learning and deep learning, the systems provide the extreme computational power and tools required to prepare, process and analyze the massive amounts of data used in fields such as finance, insurance, retail and professional services.

Powered by dual high-end Quadro RTX 8000 or Quadro RTX 6000 GPUs that deliver up to 260 teraflops of compute performance and up to 96GB of memory using NVIDIA NVLink interconnect technology, these Quadro RTX-powered data science workstations provide the capacity and bandwidth to handle the largest datasets and compute-intensive workloads, as well as the graphics power required for 3D visualization of massive datasets, including in VR.

These data science workstations also come with NVIDIA’s CUDA-X AI-accelerated data science software such as RAPIDS, TensorFlow, PyTorch and Caffe, are enterprise-ready, and offer optional support for NVIDIA-developed software, containers, and deep learning and machine learning frameworks.

NVIDIA-powered systems for data scientists are available immediately from global workstation providers such as Dell, HP and Lenovo and regional system builders, including AMAX, APY, Azken Muga, BOXX, CADNetwork, Carri, Colfax, Delta, EXXACT, Microway, Scan, Sysgen and Thinkmate.
