HardwareZone's 10th Anniversary Special

Chronicling 10 Years of GPU Development - Part 2

The API Progression and Feature Advancements

However, the development of graphics cards is not just about sheer speed and power. All this while, changes were also taking place beneath the surface - specifically in the Application Programming Interface (API). Without delving into the details, know that most games initially made use of the popular OpenGL API, until Microsoft came along with DirectX. Microsoft's intention was to establish DirectX as the 3D gaming API of choice, with a fixed set of standards at any given iteration, so that game developers would be unified under a single API and could more easily design their games around it.

It took a while, but eventually DirectX established itself as the de facto 3D gaming API, and Microsoft continually worked on implementing new features that would benefit developers and gamers. DirectX 7.0, for instance, was a leap in 3D gaming because it introduced hardware support for Transform & Lighting (T&L), which was previously handled by the CPU. DirectX 7.0, coupled with NVIDIA's now legendary GeForce 256 - the first card to support hardware T&L - helped push the immersiveness of 3D gaming up another notch. With T&L functions handled by the graphics processing unit, developers could create more realistic games with more complex scenes without worrying about overworking the CPU.
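
For the curious, here's a minimal sketch in C++ of the sort of math the T&L stage takes off the CPU's hands: transforming a vertex towards screen space and computing a basic diffuse lighting term. The matrix convention and all the names are our own illustrative assumptions, not any particular driver's code.

    // Sketch of the per-vertex work a fixed-function T&L stage performs:
    // transform the position by a 4x4 matrix, then compute Lambertian
    // diffuse lighting. Names and conventions are illustrative only.
    #include <cstdio>

    struct Vec4 { float x, y, z, w; };
    struct Vec3 { float x, y, z; };

    // Column-vector convention: out = M * v, with M stored row-major.
    Vec4 transform(const float M[16], Vec4 v) {
        return { M[0]*v.x  + M[1]*v.y  + M[2]*v.z  + M[3]*v.w,
                 M[4]*v.x  + M[5]*v.y  + M[6]*v.z  + M[7]*v.w,
                 M[8]*v.x  + M[9]*v.y  + M[10]*v.z + M[11]*v.w,
                 M[12]*v.x + M[13]*v.y + M[14]*v.z + M[15]*v.w };
    }

    // Diffuse intensity scales with the angle between the surface
    // normal and the direction towards the light (clamped at zero).
    float diffuse(Vec3 n, Vec3 toLight) {
        float d = n.x*toLight.x + n.y*toLight.y + n.z*toLight.z;
        return d > 0.0f ? d : 0.0f;
    }

    int main() {
        const float identity[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
        Vec4 pos = transform(identity, {1.0f, 2.0f, 3.0f, 1.0f});
        float lit = diffuse({0,1,0}, {0,1,0});
        std::printf("pos (%g, %g, %g, %g), diffuse %g\n",
                    pos.x, pos.y, pos.z, pos.w, lit);
    }

Multiply that by every vertex in every frame and it is easy to see why moving this work onto dedicated hardware freed the CPU up considerably.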

The real milestone moment in 3D gaming was the introduction of DirectX 8.0. This revision implemented programmable shading, which allowed for custom transform and lighting and more effects at the pixel level, greatly increasing the flexibility and the quality of the graphics churned out. This came to be known as Microsoft's Shader Model 1.0 standard. DX8 was first embraced by NVIDIA in its GeForce 3 series of cards, with ATI's Radeon 8500 series following suit.
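
To see what "programmable" buys you, here is a toy per-pixel routine written in plain C++ rather than a real shader language (the names and the formula are made up for illustration): the developer now writes the math run for each pixel, instead of picking from a fixed menu of hardware effects.

    // A toy stand-in for a pixel shader: an arbitrary per-pixel formula
    // combining a texture sample, a diffuse term and a rim highlight.
    // Fixed-function pipelines could not express custom math like this.
    #include <algorithm>
    #include <cstdio>

    struct Color { float r, g, b; };

    Color pixelShader(Color texSample, float diffuse, float rim) {
        float lit = std::min(1.0f, diffuse + 0.5f * rim * rim);
        return { texSample.r * lit, texSample.g * lit, texSample.b * lit };
    }

    int main() {
        Color out = pixelShader({0.8f, 0.6f, 0.4f}, 0.7f, 0.9f);
        std::printf("shaded pixel: %.2f %.2f %.2f\n", out.r, out.g, out.b);
    }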

However, it wasn't until DirectX 9 and the Shader Model 2.0 standard that game developers happily adopted programmable shading more liberally, as this new DirectX standard extended the capabilities of DX8 by leaps and bounds, allowing far more flexible and complex programs to yield the required effects. The legendary Radeon 9700 series was the first to support DX9, and remained the only one to do so for a good while.

We're sure the gamers out there will remember this baby, the all-conquering Radeon 9700. It was so powerful that it could even handle games that came three years after its release.

These standards evolved yet again with the DX9.0c revision, which embraced Shader Model 3.0 and is now the base standard for most graphics cards and game designs. Features such as high dynamic range (HDR) lighting, realistic shadows, instancing (see the sketch below) and more came to be supported in this revision, making for very realistic gameplay. NVIDIA's GeForce 6800 series was the first to support the SM3.0 model, and this time the tables turned as ATI was caught out, unable to offer an equivalent solution until its Radeon X1K series.
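
Instancing, to pick one of those features, lets a game draw a large number of copies of the same object in a single submission instead of one draw call per copy. The sketch below uses made-up function names purely to illustrate the idea; it is not a real graphics API.

    // Toy sketch of what instancing saves: the mesh is submitted once
    // together with N per-instance transforms, rather than N times.
    // All names here are invented for illustration.
    #include <cstdio>
    #include <vector>

    struct Transform { float x, y, z; };  // position offset per instance

    void drawMeshInstanced(int meshId, const std::vector<Transform>& instances) {
        // One submission; the hardware stamps out one copy per transform.
        std::printf("mesh %d drawn %zu times in a single call\n",
                    meshId, instances.size());
    }

    int main() {
        std::vector<Transform> trees;
        for (int i = 0; i < 1000; ++i)
            trees.push_back({float(i % 32), 0.0f, float(i / 32)});  // a forest
        drawMeshInstanced(7, trees);  // vs. 1000 separate draw calls
    }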

Yet another key moment was the introduction of DirectX 10, which brought about the Unified Shader Model. This was once again first embraced by NVIDIA, whose GeForce 8 series of cards sported a unified shading architecture. The Unified Shader Model was revolutionary because it uses a consistent set of instructions across all shading operations. Traditionally, GPUs had dedicated units for the different types of operations in the rendering pipeline, such as vertex processing and pixel shading, but in a graphics card with a unified shading architecture, any of these processes can be handled by any of its standard shader processing units. What this means is that in scenes with a heavier pixel workload than vertex workload, more resources can be dynamically allocated to run the pixel shader instructions. The end result is greater flexibility, performance and efficiency. ATI managed to catch up more than half a year later with similar support on its Radeon HD 2000 series.
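
A back-of-envelope model, with made-up unit counts and workloads, shows why one unified pool adapts better than a fixed partition of vertex and pixel units:

    // Toy comparison of a fixed vertex/pixel split against a unified
    // pool of shader units. All numbers are invented for illustration.
    #include <algorithm>
    #include <cstdio>

    int main() {
        const int totalUnits = 128;                  // one shared pool
        const int vertexJobs = 20, pixelJobs = 300;  // a pixel-heavy frame

        // Fixed 64/64 split: the mostly idle vertex units cannot help
        // with the pixel backlog, so the pixel side sets the pace.
        int fixedPasses = std::max((vertexJobs + 63) / 64,
                                   (pixelJobs + 63) / 64);

        // Unified pool: every unit takes whichever job type is waiting,
        // so the combined workload is spread across all 128 units.
        int unifiedPasses = (vertexJobs + pixelJobs + totalUnits - 1)
                            / totalUnits;

        std::printf("fixed split: %d passes, unified pool: %d passes\n",
                    fixedPasses, unifiedPasses);  // prints 5 vs. 3
    }

Flip the workload around to a vertex-heavy scene and the unified pool wins again, which is exactly the flexibility described above.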

NVIDIA's 8-series of cards were the first to embrace DirectX 10.0. They also employ a unified shader architecture, giving them superior performance over their rivals.

Going Beyond Gaming

Today, graphics cards continue to evolve and improve, and even more interesting developments await them. One of the most exciting prospects being discussed is general-purpose computing on graphics processing units (GPGPU), which involves getting the GPU to take on general computing tasks, thus increasing the overall performance and efficiency of the system. This has been a challenge for engineers and programmers thus far because GPUs, as powerful as they are, excel only at certain floating point operations and lack the flexibility and precision for the tasks that CPUs traditionally handle. Their APIs differ just as much, each having been designed around its own hardware. With differing hardware and software, it is difficult to channel workloads traditionally handled by the CPU over to the GPU. If their power can be harnessed, however, we should see a tremendous increase in performance.
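
As a concrete taste of GPGPU, here is a minimal sketch using NVIDIA's CUDA toolkit, which targets the GeForce 8 series and later (ATI promotes its own Stream computing initiative). A simple data-parallel task - scaling one array and adding it to another - is shipped off to the GPU, which runs one lightweight thread per element instead of a CPU loop:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread handles exactly one element: y[i] = a*x[i] + y[i].
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;                 // about a million elements
        const size_t bytes = n * sizeof(float);

        // Prepare the input on the CPU side.
        float* hx = new float[n];
        float* hy = new float[n];
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Copy it into the graphics card's own memory.
        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch roughly a million threads, 256 per block.
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

        // Copy the result back and spot-check one element.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        std::printf("y[0] = %g (expect 5)\n", hy[0]);

        cudaFree(dx); cudaFree(dy);
        delete[] hx; delete[] hy;
    }

Every element is processed independently, which is exactly the kind of uniform floating point work that GPUs, as noted above, excel at.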

To put the raw power of a GPU into perspective: ATI's latest Radeon HD 4800 series of cards is capable of achieving in excess of 1 teraFLOPS, while the fastest of Intel's processors - the quad-core QX9775 - manages only 51 gigaFLOPS, roughly a twentieth as much. Already, GPUs have proven far more capable than CPUs at accelerating video decoding, and likewise in video transcoding tasks, where the GPU can finish in the span of a tea break what would take the CPU many hours.

The latest cards from ATI are reportedly capable of achieving over 1 teraFLOPS, much more than what the fastest of quad-core processors can achieve.

There is also talk from ATI and NVIDIA about creating the ultimate visual experience. What exactly does this mean? Simply put, think of combining the Wii's interactivity with movie-quality digital renders. It's all about putting the player at the forefront of the action. To get a better idea, we suggest you read what David Kirk, NVIDIA's Chief Scientist, and John Taylor, AMD's Director for Product and Strategic Communications, had to say in our interviews with them.

Clearly, these are exciting times for graphics cards. Faster and more powerful cards mean more realistic-looking games (think movie quality), and GPGPU, if tackled correctly, could unleash tremendous amounts of computing power. With so much in store, we can't wait to see where the next 10 years will take us.
