NVIDIA’s Omniverse virtual world creator and collaboration tool gets upgrades and Omniverse Avatars
If Facebook’s Metaverse sounds like a fairytale that won’t materialise until the next decade, we completely understand: there’s a great deal of work left before virtual worlds become an everyday reality. Beyond the glasses and headgear you might have to don, and the dedicated VR rooms needed for some level of movement and tracking, there’s the whole digital world-building headache to solve. And none of it will take off unless the virtual you looks lifelike to other virtual participants, and vice versa.
While Facebook figures out those pieces, NVIDIA has already launched an open 3D VR/AR collaboration platform that serves both platform-development needs and acts as a stage for building applications that live in this virtual sphere.
Born out of a need for real-time 3D design collaboration, where multiple collaborators anywhere in the world can work on the same project simultaneously using industry-standard creator applications, it is now paving its way towards becoming an all-purpose simulation engine for creating and connecting virtual worlds. This is NVIDIA’s Omniverse.
Changing gears and bringing Omniverse out of beta
Launched as a closed beta for selected partners at NVIDIA GTC 2019, Omniverse moved to an open beta in October 2020. At today’s November 2021 GTC event, NVIDIA has taken this up a notch by announcing Omniverse Enterprise, which marks the general availability of this multi-GPU, scalable 3D design collaboration platform. The new NVIDIA Omniverse Enterprise differs from the previously available Omniverse Open Beta in that it includes the following key components:
- An Omniverse Nucleus server, which manages Universal Scene Description (USD)-based collaboration. USD is a standard for exchanging information about modelling, shading, animation, lighting, visual effects and rendering across multiple applications (see the short authoring sketch after this list).
- Omniverse Connectors, which are plug-ins to industry-leading design applications
- End-user applications called Omniverse Create and Omniverse View.
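To give a feel for what USD-based collaboration means at the file level, here’s a minimal authoring sketch using Pixar’s open-source USD Python bindings. The local file path is a stand-in; in an Omniverse deployment, Nucleus serves stages at omniverse:// URLs resolved by the Omniverse Client Library, which this sketch doesn’t cover:

```python
# Minimal USD authoring sketch using Pixar's open-source Python bindings
# (pip install usd-core). The local path stands in for a Nucleus-hosted
# omniverse:// URL, which the Omniverse Client Library would resolve.
from pxr import Usd, UsdGeom

# Create a new stage: the shared "document" collaborators work on.
stage = Usd.Stage.CreateNew("factory_scene.usda")

# Define a transform and a cube, the kind of geometry that multiple
# Omniverse Connectors could read and edit concurrently via Nucleus.
UsdGeom.Xform.Define(stage, "/World")
cube = UsdGeom.Cube.Define(stage, "/World/Cube")
cube.GetSizeAttr().Set(2.0)

# Save; once synced through Nucleus, collaborators see the change live.
stage.GetRootLayer().Save()
```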
Omniverse Enterprise provides flexible deployment, from small workgroups to globally distributed teams, through a subscription model that includes full NVIDIA Enterprise Support services: upgrades, maintenance and direct communication with technical experts to minimise system downtime and maximise uptime.
The NVIDIA Omniverse Enterprise subscription consists of three components: Creator, Reviewer and Nucleus. The minimum initial purchase pack starts at US$9,000 and comes with 2x Omniverse Enterprise Creator licenses, 10x Omniverse Enterprise Reviewer licenses and 4x Omniverse Enterprise Nucleus subscriptions.
Omniverse Enterprise is immediately available through NVIDIA’s partner network, including select global system partners like Boxx, Dell, Lenovo, PNY, Supermicro and Z by HP. Note that the Omniverse Enterprise stack doesn’t include the RTX-powered laptops, workstations or NVIDIA-Certified enterprise systems needed to run it, so you’ll still need the relevant hardware, and that’s where these partner vendors come in.
For customers interested in a trial, Omniverse Enterprise is available in two forms to new users at no charge for up to 30 days. The NVIDIA Omniverse Open Beta platform for individuals remains a free download at nvidia.com/omniverse and allows collaboration between apps and one other person.
New Omniverse Features
To build these shared virtual worlds effectively, Omniverse needs connectors and extensions to the software industry players already use, and this catalogue has kept growing since NVIDIA Omniverse launched. Six new connectors have been added since the March 2021 GTC, including Replica for AI voice, Radical for AI pose estimation, and Lightmap for setting up an HDR light studio.
As NVIDIA’s Omniverse evolves into a tool for both creating virtual worlds and connecting them, NVIDIA is positioning subsets of the platform at different users for extended-reality delivery. New today, for example, is Showroom, an Omniverse app of demos and samples that showcase core Omniverse technologies like graphics, materials, physics and AI, accessible to millions of GeForce RTX users who want to enjoy and learn about Omniverse’s capabilities. Here’s a snippet:
Omniverse Farm is a system layer that pools multiple systems for multi-GPU rendering and simulation workloads, while Omniverse AR streams augmented reality to phones or AR glasses. Lastly, and coming soon, Omniverse VR will deliver the world’s first full-frame interactive ray-traced VR.
Building Digital Twins
And precisely because it’s a great simulation engine for creating and connecting virtual worlds, NVIDIA’s Omniverse is an ideal tool not only for 3D design collaboration but also for creating digital twins across various industries. Digital twins are great problem solvers: using a copy of the existing real world, you can manipulate variables digitally and observe the pros and cons before committing to anything physical.
For example, Ericsson is testing 5G signal propagation for ideal tower placement through the world’s first city-scale digital-twin simulation of the real world, accurate right down to material properties, to determine real-world beamforming and 5G signal strength and propagation. Here’s a quick look at how Ericsson is using accurate modelling to interpret how 5G radio waves will propagate:
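Ericsson’s simulation ray-traces signals against accurate materials, which is far beyond any closed-form formula, but the basic question it answers, how much signal survives the trip, can be illustrated with the textbook free-space path loss equation. A toy sketch, not Ericsson’s method:

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Textbook free-space path loss: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light in m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# A mid-band 5G carrier at 3.5 GHz, with the tower 200 m vs 800 m away.
for d in (200, 800):
    print(f"{d} m: {free_space_path_loss_db(d, 3.5e9):.1f} dB loss")
# Each doubling of distance costs about 6 dB, and that's before buildings,
# foliage and reflections, which is exactly what the digital twin captures.
```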
Elsewhere, BMW is improving workflows and manufacturing efficiency at three of its new factories by employing NVIDIA Omniverse to model digital twins of them, which lets BMW collaborate and simulate everything from the overall factory plan down to the precise engineering details of a single operation or job.
Further examples include creating a digital twin of Earth for weather forecasting and fighting wildfires with AI that helps predict where they will occur and how best to respond, among others. But what if you need more than a digital twin to manipulate, and instead want to generate massive amounts of synthetic data from that digital twin to train AI networks?
NVIDIA anticipated such a need and debuted the new Omniverse Replicator, which does exactly that. The engine’s first implementations are in NVIDIA DRIVE Sim and NVIDIA Isaac Sim: virtual worlds for hosting the digital twins of autonomous vehicles and robots respectively. These replicators help fill real-world data gaps and label ground truth in ways humans may not have tried, generating data in virtual worlds across a range of diverse scenarios, including those that cannot be experienced safely in the real world. Autonomous vehicles and robots built with this newfound data can master their skills more effectively before applying them in the physical world.
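NVIDIA exposes Replicator as a Python API that runs inside Omniverse; the sketch below follows the randomise-and-write pattern of its published omni.replicator.core examples, though the exact module and call names here should be treated as assumptions rather than a verified script:

```python
# Illustrative sketch of Omniverse Replicator's randomise-and-write pattern.
# Runs only inside Omniverse; the API shape is an assumption based on
# NVIDIA's published omni.replicator.core examples.
import omni.replicator.core as rep

with rep.new_layer():
    # A camera plus a render product define what gets captured each frame.
    camera = rep.create.camera(position=(0, 0, 500))
    render_product = rep.create.render_product(camera, (1024, 1024))

    # A stand-in object whose pose is randomised to cover diverse scenarios.
    cube = rep.create.cube(semantics=[("class", "obstacle")])

    with rep.trigger.on_frame(num_frames=100):
        with cube:
            rep.modify.pose(
                position=rep.distribution.uniform((-200, -200, 0), (200, 200, 0)),
                rotation=rep.distribution.uniform((0, 0, -180), (0, 0, 180)),
            )

    # BasicWriter emits RGB frames plus labelled ground truth for training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_synthetic_data", rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([render_product])
```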
NVIDIA also announced Modulus, its framework for developing physics-based machine learning models, letting you build digital twins that are governed primarily by the laws of physics and simulate them effectively. You can read more about that here.
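Modulus ships its own training framework, but the core idea of physics-informed learning can be sketched in a few lines of plain PyTorch, where the loss penalises the network for violating a governing equation rather than for missing labelled data. This is a generic illustration, not the Modulus API:

```python
# Generic physics-informed training sketch in plain PyTorch (not Modulus):
# the network u(x) is trained so its derivative satisfies u'(x) = cos(x),
# i.e. the "physics" enters through an equation residual in the loss.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True) * 6.28  # collocation points
    u = net(x)
    du_dx, = torch.autograd.grad(u.sum(), x, create_graph=True)
    residual = du_dx - torch.cos(x)        # governing equation: u' = cos(x)
    boundary = net(torch.zeros(1, 1))      # boundary condition: u(0) = 0
    loss = (residual ** 2).mean() + (boundary ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The trained net approximates u(x) = sin(x) without ever seeing labels.
```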
Last, but not least, NVIDIA revealed plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change. Christened Earth-2, or E-2, the system will pursue that purpose by creating a digital twin of Earth in Omniverse. E-2 will be the climate-change counterpart to Cambridge-1, NVIDIA’s AI supercomputer dedicated to healthcare research and the most powerful of its kind.
The need for Omniverse Avatars
If NVIDIA’s Omniverse is all about creating virtual worlds, then making it approachable and bringing true-to-life everyday work and play into the virtual world requires accurate representations of yourself and the other participants. More than looking the part, your avatar needs to capture your every nuance, mimicking your expressions and sounding like you too.
Omniverse Avatar is based on NVIDIA Maxine, a GPU-accelerated SDK with AI features that lets developers build virtual-collaboration and content-creation applications like video conferencing and live streaming, which we first featured here last year. Maxine is modular in nature, so its features can be chained together to deliver the latest capabilities and performance. It was billed to offer conversational AI avatars and animated avatars with realistic animation, and that’s exactly what Omniverse Avatar now serves up: interactive AI avatars, fully autonomous virtual robots and assistants, as well as teleoperated robots with humans in the loop.
This will also be infinitely useful for video conferencing, as well as for creating your digital self in the virtual world. Maxine can do this thanks to the newly launched NVIDIA Riva SDK (not to be confused with the outdated Riva TNT graphics chips), which is used to build conversational AI applications that deliver a world-class speech-recognition pipeline and real-time text-to-speech. That’s invaluable for breaking down barriers when collaborating with audiences across different language backgrounds, delivering accurate messaging that is understood immediately. Watch this demo of NVIDIA Omniverse Avatar in action using Project Maxine and NVIDIA Riva:
Impressive demo, isn’t it? NVIDIA Maxine also applies several AI effects to the interactive AI avatar, such as face alignment corresponding to where you’re seated, gaze correction to simulate eye contact, upscaling for higher resolution, noise removal to cut unnecessary background chatter and ambience, face relighting, and many more.
Moving another step ahead, towards autonomous virtual robots and assistants, NVIDIA showcased Project Tokkio, which brings AI-enabled customer-service avatars that interact in real time. In the first example, Jensen Huang, founder and CEO of NVIDIA, showed colleagues engaging in a real-time conversation with an avatar crafted as a toy replica of himself, conversing on topics such as biology and climate science. In a second example, a customer-service avatar was able to see and converse with two customers ordering food, all while holding an intelligent exchange.
Omniverse Avatar uses elements from speech AI, computer vision, natural language understanding, recommendation engines, facial animation, and graphics delivered through the following technologies:
- Speech recognition across multiple languages and human-like responses using text-to-speech capabilities are courtesy of the NVIDIA Riva SDK (see the client sketch after this list).
- Natural language understanding is based on the Megatron 530B large language model that can understand and generate human language. A pre-trained model can complete sentences and answer questions across a large domain of subjects, summarise complex stories, translate to other languages and more.
- The recommendation engine is provided by NVIDIA Merlin, which can handle large amounts of data to make smarter suggestions.
- Perception capabilities are enabled by NVIDIA Metropolis, a computer vision framework for video analytics.
- Avatar animation is powered by NVIDIA Video2Face and Audio2Face, 2D and 3D AI-driven facial animation and rendering technologies.
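To make the Riva piece concrete, here’s a minimal sketch of how a client might call a running Riva server for speech-to-text and text-to-speech over gRPC. The package, service calls and voice name follow NVIDIA’s published Riva client examples, but treat them, along with the localhost server address and WAV file, as assumptions of this sketch:

```python
# Minimal sketch of calling a Riva server for speech-to-text and
# text-to-speech. Assumes NVIDIA's nvidia-riva-client Python package and
# a Riva server on localhost:50051 (both assumptions of this sketch).
import riva.client

auth = riva.client.Auth(uri="localhost:50051")

# Speech recognition: send raw audio, get back a transcript.
asr = riva.client.ASRService(auth)
config = riva.client.RecognitionConfig(language_code="en-US",
                                       max_alternatives=1,
                                       enable_automatic_punctuation=True)
with open("question.wav", "rb") as f:  # hypothetical input recording
    response = asr.offline_recognize(f.read(), config)
transcript = response.results[0].alternatives[0].transcript

# Text-to-speech: turn the avatar's reply into audio in real time.
tts = riva.client.SpeechSynthesisService(auth)
reply = tts.synthesize(f"You said: {transcript}",
                       voice_name="English-US.Female-1")
# reply.audio now holds raw PCM ready for playback or lip-sync animation.
```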
There’s one other place where NVIDIA’s Omniverse Avatar has been integrated, and that’s NVIDIA DRIVE Concierge, which is essentially a digital assistant with real-time conversational AI in your car. Head over here to check it out in action.
“The dawn of intelligent virtual assistants has arrived. Omniverse Avatar combines NVIDIA’s foundational graphics, simulation and AI technologies to make some of the most complex real-time applications ever created. The use cases of collaborative robots and virtual assistants are incredible and far-reaching,” said Jensen Huang, founder and CEO of NVIDIA.
The Omniverse Avatar platform is currently under development and is part of NVIDIA’s Omniverse virtual world simulation and collaboration platform for 3D workflows.
Source: NVIDIA Blog and NVIDIA Newsroom