At SIGGRAPH 2017, NVIDIA announces key developments that bring AI to GPU computing

By Wong Chung Wee - on 1 Aug 2017, 6:23am

(Image source: NVIDIA)

At this year’s SIGGRAPH, NVIDIA aims to bring artificial intelligence to graphics computing while cementing its position at the forefront of VR development. The company has identified four major tech trends: content creation, VR adoption, artificial intelligence incorporated into applications and services, and the demand for thinner computing devices. NVIDIA’s GPU offerings, spanning both hardware and its software framework ecosystem, are poised to ride these trends.

AI for Graphics

(Image source: NVIDIA)

At SIGGRAPH 2017, NVIDIA will showcase its AI for Graphics initiative by leveraging its Volta GPU as well as cuDNN, its CUDA-based library of mathematical routines for deep neural networks. The current version of NVIDIA CUDA Deep Neural Networks (cuDNN), version 7, is optimized for the Volta GPU. One of its applications is AI Facial Animation, which uses AI to streamline the facial animation process. One of the main challenges in facial animation is matching speech and emotions to lip and facial movements. With AI Facial Animation, a trained DNN can render the speaking style of a synthetic 3D character using different parameters, such as gender, accent, and even language!

(Image source: NVIDIA)

Another compute-intensive task that stands to gain from AI is ray tracing. Because it involves tracing the possible light paths behind each pixel of an image, and their possible interactions with objects in the scene, a lot of computing firepower is required. NVIDIA’s solution to this quandary, AI for Denoising, is touted to generate photo-realistic images with trained neural networks. According to the company, its neural network has been trained on “10,000s of image pairs”. Each pair consists of a noisy image rendered with just one path per pixel, and a clean “reference image” rendered with 40,000 paths per pixel. NVIDIA’s trained network can then map different types of image noise to the corresponding denoised pixels.
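NVIDIA hasn’t published the denoiser’s internals, but the sampling arithmetic behind those image pairs is easy to sketch: in a Monte Carlo renderer, per-pixel noise falls off roughly as one over the square root of the paths traced per pixel, which is why a one-path-per-pixel image is grainy while a many-thousand-path render serves as a clean reference. The toy “shading” function and noise model below are purely illustrative, not NVIDIA’s renderer.

```python
import random
import statistics

def shade_sample(true_radiance=0.5, noise=0.4):
    """One Monte Carlo 'light path' sample for a pixel: an unbiased but
    noisy estimate of the pixel's true radiance (a toy stand-in for a
    real path tracer's per-path contribution)."""
    return true_radiance + random.uniform(-noise, noise)

def render_pixel(paths_per_pixel, true_radiance=0.5):
    """Average many path samples for one pixel; the standard error
    shrinks as 1/sqrt(paths_per_pixel)."""
    samples = [shade_sample(true_radiance) for _ in range(paths_per_pixel)]
    return statistics.fmean(samples)

random.seed(42)
# A "noisy image": 1,000 pixels at one path per pixel (grainy).
noisy = [render_pixel(1) for _ in range(1000)]
# A "reference image": the same pixels with 400 paths each (much cleaner).
clean = [render_pixel(400) for _ in range(1000)]

# Pixel-to-pixel scatter is far larger in the one-path image.
print(statistics.stdev(noisy), statistics.stdev(clean))
```

A denoiser trained on such pairs learns to predict the clean value from the noisy one, letting a renderer stop at a handful of paths per pixel instead of tens of thousands.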

Speaking of ray tracing, NVIDIA will also announce OptiX SDK 5, which is optimized for AI-enabled systems. The new ray tracing framework will be showcased on the NVIDIA DGX Station. Compared to a Pascal P100-based system running OptiX SDK 4.1, the NVIDIA DGX Station running the new OptiX SDK 5 is 39 times faster, according to NVIDIA’s claims.

VR adoption

(Image source: NVIDIA)

On the VR front, NVIDIA will showcase the amalgamation of Isaac and Project Holodeck, two initiatives that were first announced at NVIDIA GTC 2017. Isaac is the company’s first robot simulator, while Project Holodeck is “a photorealistic, collaborative virtual reality environment that incorporates the feeling of real-world presence through sight, sound and haptics.” This VR environment “allows creators to import high-fidelity, full-resolution models into VR to collaborate and share with colleagues or friends — and make design decisions easier and faster.” By training the robot simulator in Project Holodeck’s mediated environment, Isaac can learn complicated and potentially dangerous tasks more quickly, and its trained networks can be transplanted into an actual robot to carry out those tasks. Watch Isaac demonstrate his skills in a game of dominoes at SIGGRAPH 2017!

Digital content creation

(Image source: NVIDIA)

In the digital creation market segment, NVIDIA estimates there are over 25 million users, whom it deems the “world’s most innovative users” of graphics computing. To address their need for thin computing devices alongside intensive graphics compute capabilities, NVIDIA is announcing the Quadro External Graphics program. Under this program, the company will work with partners to produce and sell certified Quadro eGPU hardware solutions. A certified Quadro eGPU system will consist of an external GPU enclosure that connects, via the Thunderbolt 3 interface, to a Windows device with a supported Thunderbolt 3 port. NVIDIA gives partners the flexibility to offer either the Quadro eGPU enclosure alone, or a complete system comprising a supported NVIDIA Quadro card and the Quadro eGPU enclosure. The NVIDIA Quadro eGPU solution is slated to launch between mid-August and early September 2017. For now, the supported NVIDIA Quadro cards are the GP100, P6000, P5000, and P4000. Does this solution remind you of the ASUS ROG XG Station 2?

(Image source: NVIDIA)

According to an NVIDIA spokesperson, an NVIDIA Quadro eGPU enclosure will cost about US$350 to US$500, without any installed Quadro graphics card.

(Image source: NVIDIA)

Finally, NVIDIA announced new drivers for its Pascal-based Titan Xp and Titan X GPUs. For the Titan Xp, the drivers will bring performance gains in professional creative software suites like Autodesk Maya for 3D modeling and Sony Vegas Pro for professional video editing. SIGGRAPH 2017 will be held in Los Angeles, United States, from 30 July to 3 August 2017.