Movie Magic with GPU Power & A Quick Chat with NVIDIA's CEO
In this final update from NVIDIA's GPU Tech Conference, we've packed in more videos and movie magic from Transformers and Harry Potter, thanks to Lucasfilm and its use of GPUs to bring about increased realism. We also had a group chat session with Jen-Hsun to find out his thoughts on various verticals. So don't miss this interesting finale!
By Vijay Anand
Lucasfilm's Movie Magic with the GPU
In our coverage of the third and final day of NVIDIA's GPU Technology Conference 2009, we were immersed in yet another area of movie production where GPUs make a huge impact on visualization: the simulation department. And what better way to bring this point across than to invite Richard Kerris, CTO of Lucasfilm, for the day's keynote presentation. With that, we share with you this clip that Lucasfilm put together to showcase their very best movie animations in a video montage, which leads directly into Richard's introduction of the company:-
We were privileged to have Richard Kerris, CTO of Lucasfilm, come on board at this conference and share how the company has evolved from older production techniques and antique hardware (such as in this snapshot from 1998)...
... to today's Lucasfilm entertainment group, with up-to-date compute power and a global campus that are all directly interlinked for maximum efficiency.
Lucasfilm's goal in using GPUs in its production process is to benefit in areas of simulation and visualization (be it animation or rendering) where GPUs can provide a distinct advantage over CPU-based processes. Like everyone else, they have a massive investment in existing work processes and render farms, so it's only right that they identify the areas where GPUs give them a real boost in productivity before investing in them, since the GPU is no cure-all. GPUs aren't always faster at everything offloaded to them, as that depends on how parallelizable the task is. Plus, don't forget that a GPU's local memory is far more limited than the system memory available to the CPU, so processing certain types of large data sets isn't possible unless they are first broken down into smaller chunks.
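To make that last point concrete, here's a minimal CUDA sketch of the chunking idea - entirely our own illustration, not anything from Lucasfilm's pipeline - where the `scale` kernel is a hypothetical stand-in for real simulation work and an oversized data set is streamed through a fixed-size device buffer:

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for real per-element simulation work.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t total = 1 << 26;   // full data set: 64M floats (256MB)
    const size_t chunk = 1 << 22;   // GPU working set: 4M floats (16MB)
    std::vector<float> host(total, 1.0f);

    float *dev = nullptr;
    cudaMalloc(&dev, chunk * sizeof(float));

    // Stream the oversized data set through the GPU one chunk at a time.
    for (size_t off = 0; off < total; off += chunk) {
        size_t n = (total - off < chunk) ? (total - off) : chunk;
        cudaMemcpy(dev, host.data() + off, n * sizeof(float),
                   cudaMemcpyHostToDevice);
        scale<<<(unsigned)((n + 255) / 256), 256>>>(dev, (int)n, 2.0f);
        cudaMemcpy(host.data() + off, dev, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
    }
    cudaFree(dev);
    printf("first element after processing: %.1f\n", host[0]);
    return 0;
}
```

A production renderer would overlap these transfers with computation using CUDA streams, but the partitioning principle is the same.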
Obviously, Lucasfilm has found synergies in tapping the power of the GPU, else they wouldn't be at this conference. An example they pulled up to show the power of the GPU is the physics simulation of the fire scene below, from Harry Potter and the Half-Blood Prince. From their trials, Richard mentioned that it took 13 hours to process just one frame of this scene on an 8-core CPU. When they moved the same simulation to just one GPU, that same frame took a mere 10 seconds to complete - since 13 hours works out to 46,800 seconds, that's roughly a 4,680x speedup! This is the kind of phenomenal gain one can obtain when making use of the GPU in the right context and finding the right jobs to suit its processing nature. And in cases like this, it brings tasks that were once relegated to offline processing close to real-time processing, thus greatly boosting efficiency.
Here's how multiple GPUs were configured to tackle separate slices of the simulation, with all the slices later composited into the final render.
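Structurally, that slice-and-composite setup can be pictured with a simple sketch. The hypothetical CUDA snippet below assumes one slice per GPU and a host-side composite; `simulateSlice` is merely a placeholder for the actual fire/fluid solver:

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Placeholder kernel: each GPU fills in its own slice of the volume.
__global__ void simulateSlice(float *slice, int n, float sliceId) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) slice[i] = sliceId + i * 0.0001f;  // stand-in for the real math
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    const int n = 1 << 20;                              // elements per slice
    std::vector<float> frame((size_t)deviceCount * n);  // composited frame

    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);             // each GPU owns one slice of the domain
        float *dev = nullptr;
        cudaMalloc(&dev, n * sizeof(float));
        simulateSlice<<<(n + 255) / 256, 256>>>(dev, n, (float)d);
        // "Composite" step: gather this GPU's slice back into the full frame.
        cudaMemcpy(frame.data() + (size_t)d * n, dev, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(dev);
    }
    printf("composited %d slice(s) into one frame\n", deviceCount);
    return 0;
}
```

For brevity this loop drives the GPUs one after another; a real pipeline would run them concurrently (one host thread per device) and blend overlapping slice boundaries during compositing.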
And in another example, Richard explained that they could traditionally simulate only up to 14,000 rigid bodies in a scene using multiple machines. Moving over to just a single machine with a single Quadro FX 4800 card, they managed to get that number up to 100,000 rigid bodies in only a little more time than the former setup. That's quite a massive boost in animation/simulation capability, which they used to their advantage during the production of Transformers: Revenge of the Fallen. Here's the video that showcases this:-
Going beyond selective acceleration, Lucasfilm has merged the tools needed to visualize movies and games into a single toolset, part of which is seen in this snapshot. This tighter collaboration greatly speeds up initial storyboarding, scripting and idea generation, as concepts can be quickly put to the test and presented to the team leaders and the director. Time savings are paramount here, and the ability to convey ideas fast and accurately is a definite win. Of course, just by looking at the tool, you can tell that the GPU is heavily involved here as well.
A converged visualization tool serving both the gaming and movie production teams.
Lastly, before we roll over to our discussion highlights with NVIDIA's CEO, Jen-Hsun Huang, we just had to share this slide of theirs:-
A Quick Chat with NVIDIA's CEO
Jen-Hsun Huang, CEO and co-founder of NVIDIA Corporation, is quite unlike most other CEOs one would ever meet. As enthusiastic as ever, with a pragmatic and positive mindset, he openly shared that he relies on the most basic instincts and skills formed in his youth to guide him as he manages his huge company, with difficult decisions to make on a daily basis. We guess he has the great survivor instincts and willpower that have seen him through thus far and should serve him well moving forward.
Here we caught Jen-Hsun enthusiastically showing off his liking for Ferrari cars with this hologram from Zebra Imaging. NVIDIA GPUs are used in the 3D modeling of the car before it's fit for use in the hologram. Who would have thought, 10 years ago, that such realistic holograms would be possible?
After a quick group chat with the Asia-Pacific editors, we've compiled the key findings and his thoughts in an easy-to-digest manner:-
Evolution of Games in the near-term
- All new games will end up supporting some form of physics simulation by sometime in 2010.
- A lot of what's been showcased at this conference has revolved around physics processing or its related aspects.
- You've seen what PhysX can do for your game immersion, and more games are steadily supporting it as new titles are released. Fast forward a year and think about a game without some form of physics processing element - that sure sounds like a bland-looking game.
What about Ray-Tracing in Games?
- Ray tracing may come in the more distant future, but probably not anytime soon. While it's easier to manage, it is far more computationally intensive. Plus, there isn't anyone pursuing it for games at the moment.
- Ray tracing improves visual quality, but it doesn't help with immersive gaming, where interactions with objects and environments provide the added level of realism and user involvement.
- For these reasons, physics simulation will come to games first, as it has already made its mark and enjoys a growing pool of titles.
Consumer applications where GPU computing will aid most in the coming years
- Games
- Imaging tools (Adobe and NVIDIA are collaborating tightly in this space and more will be revealed later).
- Video processing
- Image and character recognition to enable new forms of data management
Tegra will be a part of the GP-GPU Computing initiative
- The development of the Tegra chip took some US$500 million.
- Its purpose is to fill a void in very low-power multimedia computing for form factors such as mobile phones, PMPs, netbooks, notebooks and many other kinds of compact devices, without the need to run x86 software.
- It supports CUDA as well, and thus GP-GPU computing, but at a lower PTX (Parallel Thread Execution) standard that can be targeted as a profile. This is where the NVIDIA Nexus development environment comes in.
Why the Fermi architecture coming out late won't matter to NVIDIA
- Its design priorities were that it must be a great GPU first and foremost; there's no reason to release a new GPU that doesn't offer a phenomenal advantage over the best of the current generation.
- Secondly, it must be pushed out as soon as possible, but this priority always takes a back seat to the first: it must be a great GPU.
- As such, it doesn't matter if NVIDIA misses the December buying season.
The PC market segmentation going forward as envisioned by Jen-Hsun
- It will be split into three distinct categories.
- The first would target professionals and high-end content creators who probably tax the GPU far more than their CPUs.
- The second group would be high-end end-users who have very specific requirements and needs out of their PC. They are likely to use a balance of the CPU and GPU for their tasks.
- The third group would be the mainstream users, who would eventually turn to cloud computing services and devices or systems that simply enable internet access, as they wouldn't have any specific needs; if they did, they would already have bought a reasonably spec'ed system.
- The traditional PC as we know it will become a highly commoditized item, just like DVD players are. They are indispensable, but so cheap that they become part of a service bundle - just like phones and netbooks being given away with a subscription.
- This is why NVIDIA is investing in Tegra for mobility and compact devices, as well as in the Quadro and Tesla series for workstation and HPC needs, since the cloud will eventually serve the needs of mainstream users, who won't require powerful compute devices of their own.
- Discrete graphics will still be around, but it would probably not play as big a role for consumers as it would for corporations building services. Think about the recent beta launch of the OnLive service trials, where demanding PC games like Crysis can be run off the computing cloud and played fluidly on a low-end PC system or even on your TV.
- The face of personal computing is indeed set for a change for mainstream users.
No amount of compute power is ever enough
- When programmable shaders were introduced in 2001, they were unheard of, and even seen as overkill back then. But within a couple of years, the entire industry was onto them as more developers came to appreciate their capabilities.
- So by next year, you can expect all GPUs to handle physics processing.
- In five years' time, expect the stuff you see in movies, like those explosions and cool rendering effects, to be made available in games.
- Already, folks like LucasArts are using the same development tools as their Lucasfilm counterparts, and this means accelerated game and movie development. This further reinforces that movie-level realism is certainly within reach in a few more years, once processing power has leaped far enough to enable it.