
Voodoo Beginnings - 10 Years of GPU Development

By Kenny Yeo - 15 Jan 2009

DirectX, GP-GPU and the Future

However, the development of graphics cards is not solely about sheer speed and power. All this while, changes were also taking place beneath the surface - specifically in the Application Programming Interface (API). Without delving into details, know that most games initially made use of the popular OpenGL API, until Microsoft came along with DirectX. Microsoft's intention was to establish DirectX as the 3D gaming API of choice, with a fixed set of standards at any given iteration, so that game developers would be unified under a single API - which made designing games easier.

It took a while, but eventually DirectX established itself as the de facto 3D gaming API, and Microsoft continually worked on implementing new features that would benefit developers and gamers. DirectX 7.0, for instance, was a leap for 3D gaming because it introduced hardware support for Transform & Lighting (T&L) functions, which were previously handled by the CPU. DirectX 7.0, coupled with NVIDIA's now legendary GeForce 256 - the first card to support hardware T&L - helped push the immersion level of 3D gaming up a notch. With T&L functions handled by the graphics processing unit, developers could create more realistic games with more complex scenes without worrying about overburdening the CPU.

The real milestone moment in 3D gaming was the introduction of DirectX 8.0. This revision implemented programmable shading, which allowed for custom transform and lighting routines and more effects at the pixel level, increasing both the flexibility afforded to developers and the graphics quality churned out. This capability was codified as Microsoft's Shader Model 1.0 standard. DX8 was first embraced by NVIDIA in its GeForce 3 series of cards, and ATI followed suit with the Radeon 8500 series.

However, it wasn't until the arrival of DirectX 9 and the Shader Model 2.0 standard that game developers adopted programmable shading routines more liberally, as this new DirectX standard extended the capabilities of DX8 by leaps and bounds, offering even more flexibility and more complex programming tools to yield the required effects. The legendary Radeon 9700 series was the first to support DX9, and it remained the only one to do so for a long while.

We're sure the gamers out there will remember this baby, the all-conquering Radeon 9700. It was so powerful that it could even handle games that came three years after its release.

These standards evolved yet again with the DX9.0c revision, which embraced Shader Model 3.0 and is now the minimum standard for graphics cards and game design. Features such as High Dynamic Range (HDR) lighting, realistic shadows, instancing and more came to be supported in this revision, bringing about more realistic gameplay. NVIDIA's GeForce 6800 series was the first to support the SM3.0 model, and the tables turned as ATI wasn't able to offer an equivalent solution until the Radeon X1K series much later.

Yet another key moment was the introduction of DirectX 10, which brought about a unified shader programming model. This was once again first implemented by NVIDIA in its GeForce 8 series of cards, which not only supported the unified shader programming model, but also physically had a unified shader architecture. The model was revolutionary because it broke down the limitations of having specific types of shaders, introducing general-purpose shaders in the GPU core.

Traditionally, GPUs had dedicated units for different types of operations in the rendering pipeline, such as vertex processing and pixel shading, but in a graphics card with a unified shader architecture, such processes can be handled by any of the standard shader processing units. What this means is that in scenes where the pixel workload is heavier than the vertex workload, more resources can be dynamically allocated to run pixel shader instructions - the toy sketch just below illustrates the idea. The end result is greater flexibility, performance and efficiency, and more significantly, it opens the door for GPU computing. ATI managed to catch up more than half a year later with similar support on its Radeon HD 2000 series.

NVIDIA's 8-series of cards were the first to embrace DirectX 10.0. They also employed a unified shader architecture, allowing superior performance over their rivals.
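
To make the concept concrete, here's a toy sketch (plain C++, and emphatically not any vendor's actual scheduler) of how a single pool of generic shader units might be divided between vertex and pixel work based on a frame's workload. The unit count merely echoes the GeForce 8800 GTX's 128 stream processors:

```cpp
#include <cstdio>

// Toy model only - not any real driver's scheduler. With a unified
// shader architecture, one pool of generic units is divided between
// vertex and pixel work in proportion to the frame's actual workload.
void allocate_units(int total_units, int vertex_ops, int pixel_ops) {
    int total_ops    = vertex_ops + pixel_ops;
    int vertex_units = total_units * vertex_ops / total_ops;
    int pixel_units  = total_units - vertex_units;
    printf("vertex: %3d units | pixel: %3d units\n", vertex_units, pixel_units);
}

int main() {
    const int units = 128;  // echoes the GeForce 8800 GTX's 128 stream processors

    allocate_units(units, 100, 900);  // pixel-heavy scene: most units shade pixels
    allocate_units(units, 800, 200);  // geometry-heavy scene: the pool rebalances
    return 0;
}
```

With fixed-function hardware, the idle vertex units in the first scene would simply go to waste; here they are put to work shading pixels instead.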


Today, graphics cards continue to evolve and improve, and even more interesting developments are taking place. One of the more exciting ideas being discussed is general-purpose computing on graphics processing units (GP-GPU), which involves the GPU taking on general computing tasks, thus increasing the overall performance and efficiency of the system.

This has been a challenge for engineers and programmers thus far because GPUs, as powerful as they are, excel only at certain floating point operations and lack the flexibility and precision to take on tasks that CPUs traditionally handle. Modern GPUs have worked around this by packing in many simpler, general-purpose 'stream' processors, and with the development of an open compute language to bridge the architectural and hardware differences between ATI and NVIDIA, this is an exciting area of growth.
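
For a concrete taste of what GP-GPU programming looks like, below is a minimal sketch in NVIDIA's CUDA (one vendor's take on the idea; the open compute language mentioned above aims to provide a similar, vendor-neutral model). It performs the classic SAXPY operation (y = a*x + y), launching one lightweight thread per array element so the work spreads across the GPU's stream processors:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY (y = a*x + y) - the "hello world" of GP-GPU. Each GPU thread
// handles one array element, so a million elements are processed by
// the stream processors in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                     // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 256 threads per block
    cudaDeviceSynchronize();                         // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);               // expect 4.0 (2*1 + 2)
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```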

To put the raw power of a GPU into perspective: ATI's latest Radeon HD 4800 series of cards is capable of achieving in excess of 1 teraFLOPS, while the fastest of Intel's processors - the quad-core QX9775 - manages only around 51 gigaFLOPS, roughly a twenty-fold difference on paper. Already, GPUs have proven far more capable than CPUs at accelerating video decoding, and likewise in video transcoding, where tasks that would take the CPU many hours can be finished by the GPU in the span of a lunch break.

The latest cards from ATI are reportedly capable of achieving over 1 teraFLOPS, much more than what the fastest of quad-core processors can achieve.
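
As a back-of-the-envelope aside, such theoretical peak figures come from simple arithmetic: shader units multiplied by clock speed multiplied by floating point operations per clock. The short C++ sketch below plugs in the widely reported Radeon HD 4870 specifications (800 stream processors at 750MHz, each able to issue a multiply-add, i.e. two FLOPs per clock):

```cpp
#include <cstdio>

int main() {
    // Back-of-the-envelope peak-FLOPS estimate, using the widely
    // reported Radeon HD 4870 specifications as an example.
    const double stream_processors = 800;    // shader ALUs
    const double clock_hz          = 750e6;  // 750MHz core clock
    const double flops_per_clock   = 2;      // one multiply-add = 2 FLOPs

    double peak = stream_processors * clock_hz * flops_per_clock;
    printf("Theoretical peak: %.2f teraFLOPS\n", peak / 1e12);  // ~1.20
    return 0;
}
```

Real-world throughput is of course lower, since not every instruction is a multiply-add and memory bandwidth gets in the way, but the headroom over a 51-gigaFLOPS CPU is plain to see.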

There is also much buzz from ATI and NVIDIA about creating the ultimate visual experience. What exactly does this mean? Simply put, think of combining the Wii's interactivity with movie-quality digital renders. It's all about putting the player in the forefront of the action. To get a better idea, we suggest you read what David Kirk, NVIDIA's Chief Scientist, and John Taylor, AMD's Director for Product and Strategic Communications, had to say in our interviews with them.

Clearly, these are exciting times for graphics cards. Faster and more powerful cards mean more realistic-looking games (think near movie-quality), and GP-GPU, if tackled correctly, could potentially unleash tremendous amounts of computing power. With so much in store, we can't wait to see where the next 10 years will take us. For now, we leave you with a timeline of the last 10 years of GPU progression - that's up next following the jump.
