
HardwareZone's 10th Anniversary Special

Chronicling 10 Years of GPU Development - Part 1


Unless you are an ardent gamer, you might not think much of the graphics card sitting on your computer's motherboard. You might even dismiss it as an unnecessary piece of equipment that only gamers need. However, graphics cards today are used for far more than just gaming. They now handle tasks such as accelerating HD video playback, video transcoding, speeding up the viewing of PDF documents and much more. Truly, they have come a long way from where they were 10 years ago.

Just slightly more than a decade ago, 3dfx released the first commercially successful 3D graphics card, the Voodoo Graphics. It was so all-conquering that, together with its successor, the Voodoo2, it allowed 3dfx to dominate the graphics card upgrade market for the next few years. In fact, the best graphics subsystem back then was an STB Lightspeed 128 paired with a Voodoo card for the best in 2D and 3D capabilities. The more professional folks would have chosen the Matrox equivalent for better 2D quality.

This was probably one of the earliest cards we've ever reviewed - the Canopus Pure3D II 12MB. It was also one of the fastest Voodoo2 cards around.

However, with 3D fever gripping the market, long-time graphics suppliers like S3, Trident, Matrox and many others found it hard to keep up with the sudden change in demands and gradually dropped out of the heated competition. It soon became a three-way contest between 3dfx, ATI and NVIDIA. Sadly, 3dfx too was showing signs of waning, and in 2000, it was eventually acquired by its arch-rival NVIDIA.

There were many reasons for the demise of 3dfx despite its highly popular following. For instance, instead of choosing short development cycles like ATI and NVIDIA, 3dfx pursued lengthy, ambitious ones. This strategy eventually backfired as the company was unable to keep up with the rapid advances made by its rivals. Those drawn-out development cycles also meant that 3dfx neglected certain segments of the market, namely the mid-range and low-end segments. NVIDIA, especially, was able to capitalize on this with its affordable yet powerful GeForce 2 MX, as was ATI with its Radeon VE.

The eventual collapse of 3dfx meant that ATI and NVIDIA were the only dominant players left in the market, and the honor of holding the fastest card title has swung back and forth between the two ever since.


Interface Upgrades

While all this was happening, there were other major changes as well. One such change was the way graphics cards communicated with motherboards. Over the last 10 years, in order to satisfy our need for greater bandwidth, we have moved through three major interfaces - from PCI to AGP and finally, PCI Express.

As graphics cards got speedier, a quicker, more efficient link was needed to take full advantage of them and prevent the interface from becoming the bottleneck. It came to a point, some time in 1997, where the humble PCI slot could no longer provide the bandwidth that was needed. To address this problem, Intel came up with the AGP (Accelerated Graphics Port) slot, which at its peak could offer a bandwidth of up to 2GB/s. This was a dramatic improvement over PCI, which could only manage a maximum of 133MB/s. However, considering the rate at which graphics cards were evolving, it soon became evident that even AGP wasn't future proof.

As a result, PCI Express was introduced by Intel in 2004. PCI Express was markedly different from its predecessors in that it was structured around pairs of serial (1-bit), unidirectional point-to-point connections known as "lanes". In the first iteration of PCI Express (PCIe 1.1), each lane could send data at a rate of 250MB/s in each direction. The total bandwidth of a PCI Express slot was thus determined by the number of lanes it had: a PCIe x1 slot would have a total bandwidth of 250MB/s, whereas a PCIe x16 slot (the largest of its kind) would have a total bandwidth of 4GB/s. The PCIe x16 variant became the de facto standard for modern graphics cards.

However, with ever faster graphics subsystems, revisions would be needed for PCI Express to remain viable in future. Thankfully, the PCI-SIG consortium ensured that PCIe was designed with the future in mind and was extensible. In January last year, the second-generation PCI Express 2.0 (PCIe 2.0) was born. The key advantage that PCIe 2.0 had over PCIe 1.1 was that data could now be sent at double the rate, meaning 500MB/s in each direction. This meant that an x16 slot could now transmit data at an amazing 8GB/s, four times greater than that of the fastest AGP iteration.
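All of these bandwidth figures are simple multiples of the per-lane rate. Purely for illustration (this is our own back-of-the-envelope sketch, not part of any specification text), here is a minimal Python snippet of that arithmetic, assuming the 250MB/s (PCIe 1.1) and 500MB/s (PCIe 2.0) per-lane, per-direction rates mentioned above:

    # Per-lane, per-direction rates in MB/s, as quoted in the article.
    PER_LANE_MB_S = {"PCIe 1.1": 250, "PCIe 2.0": 500}

    def slot_bandwidth_mb_s(generation: str, lanes: int) -> int:
        """Total per-direction bandwidth of a slot = per-lane rate x lane count."""
        return PER_LANE_MB_S[generation] * lanes

    for gen in PER_LANE_MB_S:
        for lanes in (1, 4, 8, 16):
            bw = slot_bandwidth_mb_s(gen, lanes)
            print(f"{gen} x{lanes}: {bw} MB/s ({bw / 1000:.1f} GB/s)")

Running it reproduces the numbers quoted above: a PCIe 1.1 x16 slot works out to roughly 4GB/s per direction, and a PCIe 2.0 x16 slot to roughly 8GB/s.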

More recently, technologies such as CrossFire and SLI have been taking advantage of PCI Express to give gamers that extra boost in performance. PCI Express makes multiple graphics cards on the same motherboard practical because, being a point-to-point connection, it doesn't have to wait for a shared bus to free up, nor does it require complex handshaking protocols. As such, multiple graphics cards can now communicate simultaneously among themselves and with the processor, which in turn results in much higher frame rates and a more immersive gaming experience.

SLI Cometh! With SLI, gamers could now stack two graphics cards together for added graphics processing power.
