We received the GeCube Radeon HD 3870 512MB GDDR4 (O.C. Edition) for testing, and found it to be good value for money. It had plenty of overclocking headroom and, if tweaked properly, could offer almost GeForce 8800 GT levels of performance.
One of the most impressive cards based on the Radeon HD 3870 GPU, however, was the HIS Radeon HD 3870 X2 1GB. As the 'X2' in the name suggests, it is a dual-GPU card, but unlike other dual-GPU cards, it places both GPUs on a single PCB - something not seen in a long time. As expected, it was quick, and we had no trouble recommending it to anyone with deep enough pockets.
All in all, we found both technologies to be effective at reducing CPU utilization when playing HD content, to the point where even an old Pentium 4 system could handle HD content comfortably. There were, however, some interesting discrepancies. Chief among them was that NVIDIA's PureVideo HD seemed to work less effectively on their lower-end cards.
Later, we decided to put ATI's new mainstream GPUs - the HD 3650 and HD 3450 - under further scrutiny, to see how they compared to the cards in our earlier HD decoding tests. Considering that these new cards were essentially die-shrunk versions of their predecessors, it was unsurprising to find that their performance was almost identical.
We tested both the ATI Radeon HD 4850 512MB GDDR3 and the ATI Radeon HD 4870 512MB GDDR5, and were delighted by the performance they offered. By themselves they were competent performers, and should you need extra juice, just pair them in CrossFireX mode. In fact, the HD 4870 was so good that in CrossFireX, it could trump the GeForce GTX 280. And to add insult to injury, each HD 4870 retailed at only US$299, making it cheaper to get two HD 4870s than a single GTX 280. In response, NVIDIA had no choice but to slash prices of their GTX 200 series of cards. After being out in the wilderness for so long, ATI was finally back in contention.
NVIDIA has been very vocal about the prospects of GP-GPU, going so far as to say that GPUs will one day render CPUs redundant. To back this claim, they have touted their set of development tools called Compute Unified Device Architecture (CUDA), first released two years ago, which allows developers to write and optimize programs for execution on GPUs. GPU computing is still very much in its infancy, but it took a giant step forward with the recent introduction of OpenCL, an open API for GPU compute that is backed by many companies and will hopefully spur further development in this area. The next-generation DirectX 11 will also bring more support for GP-GPU initiatives, so it's just a matter of time before the GPU is fully unleashed beyond its traditional graphics duties.
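To give a flavor of what programming a GPU with CUDA actually involves, here is a minimal sketch of our own (not taken from NVIDIA's materials) that offloads a simple vector addition to the graphics card; the kernel and variable names are purely illustrative:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel: each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                  // one million floats
        size_t bytes = n * sizeof(float);

        // Allocate and fill host (CPU) arrays.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Allocate device (GPU) memory and copy the inputs over.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements;
        // the GPU runs thousands of these threads in parallel.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back to the CPU and spot-check it.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);           // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Even in this toy example the basic division of labor is visible: the CPU manages memory and orchestrates the work, while the massively parallel GPU chews through the data itself - which is precisely the model OpenCL and DirectX 11 aim to standardize across vendors.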