
Voodoo Beginnings - 10 Years of GPU Development

By Kenny Yeo - 15 Jan 2009

Timeline: 2008

  • NVIDIA kicked off 2008 with a bang as it announced its Hybrid SLI solution. This allowed a discrete graphics card to work in tandem with an integrated graphics solution in much the same way two discrete graphics cards would in the more traditional SLI formation (though with a few pairing restrictions).
  • By this time, it had become clear that ATI couldn't hope to match NVIDIA on pure performance alone. In light of this, card manufacturers sought other ways to improve and differentiate their products from the reference design, and one popular way was to overclock the cards.

    We received the GeCube Radeon HD 3870 512MB GDDR4 (O.C. Edition) for testing and found it to be good value for money. It had good overclocking potential and, if tweaked properly, could offer almost 8800 GT levels of performance.

    One of the most impressive cards based on the Radeon HD 3870 GPU, however, was the HIS Radeon HD 3870 X2 1GB. As the 'X2' in the name suggests, it was a dual-GPU card, but unlike other dual-GPU cards, it featured two GPUs on a single PCB - something not seen in a long time. As expected, it was quick, and we found it hard not to recommend it to anyone with deep enough pockets.

Ah, yet another dual-GPU card - something we haven't seen in a while. Although this card was as big as the 8800 GTX, it was much heavier, tipping the scales at 1.1kg.

  • With HD content becoming more prevalent on the web, we decided to investigate ATI's and NVIDIA's hardware solutions for HD decoding in The Great Avivo/PureVideo HD Showdown.

    All in all, we found both technologies to be effective at reducing CPU utilization when playing HD content, to the point where even an old P4 system could handle it comfortably. There were, however, some interesting discrepancies. Chief among them was that NVIDIA's PureVideo HD seemed to work less effectively on its lower-end cards.

    Later, we decided to put ATI's new mainstream GPUs - the HD 3650 and HD 3450 - under further scrutiny, to see how they compared to the cards in our earlier HD decoding tests. Considering that these new cards were basically shrunken versions of their older incarnations, it was unsurprising to find that their performance was much the same.

  • In response to the dual-GPU HD 3870 X2, NVIDIA released its much-anticipated GeForce 9800 GX2 1GB, which we happily tested in our labs. Needless to say, it was quick as lightning and completely pummeled the HD 3870 X2. However, its high price meant that it was almost exclusively a card for the hardest of the hardcore enthusiasts.

NVIDIA, never one to sit back, went on the offensive with the 9800 GX2. However, rather than putting two GPUs on a single PCB, it went the way of the 7950 GX2 instead, sandwiching two cards together.

  • Not long after, NVIDIA launched its new flagship, the NVIDIA GeForce GTX 280 1GB GDDR3. As usual, we had it in our labs and put it to the test to see just how good it was. Unsurprisingly, it was the fastest single card we had ever tested, and it could handle all but the most demanding games with ease. However, it did lose out to the older dual-GPU 9800 GX2 in some tests, and considering its stratospheric price (US$649 at launch), plus the fact that a less powerful variant, the GTX 260, was available for much less, the GTX 280 was something most would definitely think twice about before buying.

This is NVIDIA's latest flagship, the GTX 280. It is frighteningly fast and carries an equally frightening price tag.

  • In response, ATI introduced its new HD 4800 series of mainstream performance cards. The launch of these cards clearly signified a change in strategy for ATI, which decided to focus on the mainstream segment instead of the high-end, flagship GPU.

    We tested both the ATI Radeon HD 4850 512MB GDDR3 and the ATI Radeon HD 4870 512MB GDDR5, and were delighted by the performance they offered. By themselves they were competent performers, and should you need extra juice, you could simply run them in CrossFireX mode. In fact, the HD 4870 was so good that in CrossFireX it could trump the GTX 280. To add insult to injury, each HD 4870 retailed at only US$299, making it cheaper to get two HD 4870s than a single GTX 280. In response, NVIDIA had no choice but to slash prices on its GTX 200 series of cards. After being out in the wilderness for so long, ATI was finally back in contention.

  • Not satisfied with the advantage gained with its new Radeon HD 4800 series of cards, ATI went on the offensive and, for the first time in recent memory, reclaimed the speed crown thanks to its dual-GPU monster, the Radeon HD 4870 X2. The GPU itself didn't differ too much from the 3000 series, but the 4000 series had a much more capable integrated audio controller to process HD audio streams for HDMI output, and its core graphics crunching horsepower was greatly bolstered with more processing units and a more efficient memory controller.

The current speed king is the dual-GPU Radeon HD 4870 X2, proving that sometimes, two heads are indeed better than one.

  • Recently, there has been increased attention and discussion about general-purpose computing on graphics processing units (GP-GPU). However, this is not without its problems. While GPUs are inherently powerful, they were designed specifically to tackle graphics, and although they are capable of general computing, writing data-parallel programs to suit their many processing units is no easy task.

    NVIDIA has been very vocal about the prospects of GP-GPU, going so far as to say that GPUs will one day render CPUs redundant. To back this claim, it has touted its set of development tools called Compute Unified Device Architecture (CUDA), first released two years back, which allows developers to code and optimize programs for execution on GPUs (a minimal example of this kind of data-parallel code is sketched after this timeline). This is still very much in its infancy, but the move towards GPU computing took a giant step with the recent introduction of OpenCL, an open API for GPU compute that is supported by many companies and will hopefully bring about more developments in this area. The next-generation DirectX 11 will also bring more support for GP-GPU initiatives, so it's just a matter of time before the GPU is fully unleashed beyond its current, normal functions.

  • Intel, too, is waiting to enter the game and is developing a GPU of its own under the codename "Larrabee". Larrabee, according to Intel, is designed mainly for GP-GPU applications and high-performance users. Intel expects a working sample to be completed by the end of 2008, after which it will be released to the public in late 2009 or 2010.
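
To give a sense of what the data-parallel programming mentioned above actually looks like, here is a minimal, illustrative CUDA sketch - not taken from NVIDIA's or ATI's materials - that adds two arrays element by element, with one GPU thread handling each element. The kernel and buffer names (vecAdd, d_a and so on) are placeholders chosen purely for this example.

// Minimal CUDA sketch: element-wise vector addition, the "hello world"
// of data-parallel GPU programming. Each GPU thread handles one element.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Compute this thread's global index: one thread per array element.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) buffers.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements;
    // the GPU schedules these across its many processing units.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

Even in this trivial case, the work of splitting the problem into thousands of independent threads and shuttling data between CPU and GPU memory falls on the programmer, which illustrates why GP-GPU development is not yet as straightforward as writing ordinary CPU code.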