
The Brave Parallel Worlds of GPU Computing

By Vincent Chang - 16 Jan 2009

Upcoming Developments

DirectX 11 Does Compute

If you need a sign that GPU computing is going to be the big thing for the next few years, the fact that DirectX 11 includes a Compute shader should be endorsement enough. Microsoft will include this functionality in its next major DirectX API refresh, but since DirectX is an API for games and other multimedia applications, the Compute shader is focused on uses like physics effects, AI calculations and image post-processing, and not so much on the high performance computing segment that NVIDIA is targeting with its CUDA-based Tesla supercomputer. Developers are, of course, free to use the API for other purposes.

From what we have seen, the Compute shader is similar to what ATI Stream and CUDA do for their respective GPUs, but given that this is DirectX, we're sure that eventually all future GPUs will have to be certified to support it. Hence, it should end up being a 'standard' adopted across GPUs, much like what OpenCL aspires to, though in a more limited (and proprietary) sense.
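The Compute shader itself will be written in HLSL rather than C, but the underlying data-parallel model is essentially the one CUDA and ATI Stream already expose: a small function is run across thousands of lightweight threads, each working on its own piece of data. As a rough sketch (shown here in CUDA C, since that toolkit is publicly available today; the names are ours, not from any SDK sample), scaling every element of a large array looks like this:

```cuda
// Minimal illustration of the data-parallel model shared by CUDA,
// ATI Stream and the DirectX 11 Compute shader: each thread handles
// exactly one element of a large array.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scaleArray(float *data, float factor, int n)
{
    // Each thread computes a unique global index into the array.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;   // one element per thread
}

int main()
{
    const int n = 1 << 20;                 // one million floats
    float *d_data = 0;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // Launch enough 256-thread blocks to cover all n elements.
    scaleArray<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    printf("kernel finished\n");
    return 0;
}
```

The HLSL version would declare its thread group size with a numthreads attribute and be kicked off with a Dispatch call rather than the <<<blocks, threads>>> launch syntax above, but the mental model is the same.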

The new Compute shader in DirectX 11.

A hierarchy showing where everything stands.

One way where the Compute shader could help out is in the post-processing phase.
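To make that concrete, here is a hedged sketch (again in CUDA C, with names of our own invention, not from any shipping engine) of what a simple post-processing pass looks like when expressed as a compute kernel: each thread reads, adjusts and writes back exactly one pixel of a frame buffer copy.

```cuda
// Hypothetical post-processing pass as a compute kernel: each thread
// brightens one pixel of an RGBA8 image. A DirectX 11 Compute shader
// doing the same job would be written in HLSL; this just shows the idea.
#include <cuda_runtime.h>

__global__ void brighten(uchar4 *pixels, int width, int height, float gain)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    uchar4 p = pixels[y * width + x];
    // Scale each colour channel and clamp to the 0..255 range.
    p.x = (unsigned char)fminf(p.x * gain, 255.0f);
    p.y = (unsigned char)fminf(p.y * gain, 255.0f);
    p.z = (unsigned char)fminf(p.z * gain, 255.0f);
    pixels[y * width + x] = p;
}
```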

Not only would DirectX 11 force hardware vendors to play ball, it would also, hopefully, unify the divided game physics scene between NVIDIA and the rest of the world. NVIDIA, with its CUDA-accelerated PhysX, has once again been the more forceful vendor, chalking up a few significant wins in games like Mirror's Edge and Unreal Tournament 3, while Havok (now owned by Intel) has been used by other developers for their physics needs and relies primarily on the CPU to get the job done. With the Compute shader in DirectX 11, physics too may have a common API which developers can confidently code against, knowing it will work on all supported GPUs.
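As an illustration of why physics work maps so naturally onto these APIs, consider a particle system: each particle's update is independent of every other particle's, so one GPU thread can handle one particle. The sketch below (once more in CUDA C, and emphatically not PhysX or Havok code) performs a simple Euler integration step under gravity; it would be launched the same way as the earlier example.

```cuda
// Hedged sketch of GPU-accelerated physics: one thread integrates one
// particle. Real engines do collision, constraints and much more, but
// the per-particle independence is what the GPU exploits.
#include <cuda_runtime.h>

struct Particle {
    float3 pos;   // position in world space
    float3 vel;   // velocity
};

__global__ void integrateParticles(Particle *p, float dt, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count)
        return;

    // Apply gravity, then advance the position by one time step.
    p[i].vel.y += -9.81f * dt;
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}
```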

DirectX 11 is expected to come as part of the upcoming Windows 7 operating system, but it may well arrive earlier, as Microsoft has mentioned that it will also be offered as an update to Windows Vista. All that DirectX 11 basically needs is a GPU with a unified shader architecture, and that's already present in the current generation of hardware. DirectX 11 will surely bring more features and better execution efficiency, but as far as supporting the Compute shader for GP-GPU computing is concerned, current DirectX 10 class hardware is all it needs as a baseline to get things kick-started. So unlike the state of DirectX 10 compliant GPUs when Windows Vista launched, there will be a very large pool of ready hardware that supports DirectX 11 when it is made available (and when Windows 7 arrives).

There's More to Come

By now, you have probably heard of Intel's attempt to compete in the discrete graphics segment with its Larrabee chip. While not all the details of its architecture have been disclosed, it's understood that it will also be capable of GPU computing, thanks to an architecture that has been described as a hybrid of CPU and GPU. Instead of the dedicated hardware in modern GPUs from ATI and NVIDIA that is designed for graphics rendering, Larrabee is a many-core architecture based on simpler, general-purpose, in-order x86 cores and uses software to do most of the graphics rendering. We've covered why Intel is getting into this space and what Larrabee's basic architecture and capabilities look like in this Larrabee update article.

Without the actual hardware, we can only speculate about Larrabee, but what is clear is that GPU computing is no longer just the domain of high performance computing applications. Every major graphics hardware vendor is doing something in this area to tap the potential of the many 'cores' found in the GPU, while software developers are still grappling with how to unlock the power of these cores through parallel programming models. The 'killer' app that will spark mass adoption may not have arrived yet, but we're betting that it won't be too long now.

Meanwhile, we'll be doing a couple more in-depth stories about the exciting developments in GPU computing, starting with an upcoming feature on NVIDIA's CUDA. So stay tuned for more!
