The Power of 3 - Investigating the Trinity
Intel and NVIDIA have allied themselves in the battle against AMD with the introduction of the "Power of 3" platform. Simply put, it is any combination of Intel's P55 chipset, a Lynnfield processor and an NVIDIA graphics card. Today, we are going to pit it against an all AMD system to see if it is really the superior setup.
By Kenny Yeo and Vijay Anand
A New Trinity
While it is undeniable that ATI's new Radeon HD 5000 series graphics cards are the current talk of the town, we are taking the time today to look at the other side of the fence. The "green" side, if you will.
Despite the animosity between NVIDIA and Intel surrounding their chipset business (think ION), the two are still very much partners in other aspects. Recently, NVIDIA demonstrated a willingness to reconcile differences by licensing its SLI technology for use on Intel's latest X58 and P55 chipsets. This eventually led to NVIDIA's ongoing "Power of 3" campaign.
You might be scratching your heads and wondering what this "Power of 3" campaign is all about. Quite simply, it refers to the combination of Intel's P55 chipset, Lynnfield processors and NVIDIA's GPUs. SLI used to be exclusive to NVIDIA's nForce chipsets, making SLI configurations on an Intel motherboard impossible. But now that SLI has been licensed for use on the new X58 and P55 chipsets, multi-GPU setups with NVIDIA graphics cards are possible on Intel's newest boards. With this, NVIDIA is positioning the combination as a better solution than a comparable all AMD platform.
This is what makes up the Power of 3.
Hence today, we are going to explore the potential the Power of 3 brings, and whether it is truly the better solution for users. If you are currently on the lookout for a powerful mid to high-end gaming system, you'll definitely want to read on.
Test Setup
To evaluate NVIDIA's Power of 3 platform, we've put together the best NVIDIA/Intel system and the best all AMD system available now. To further spice things up, we've also included a test bed which swaps the Lynnfield Core i7-870 processor for the less powerful Core i5-750, and another which swaps the GeForce GTX 295 for the Radeon HD 5870.
Obviously, the NVIDIA/Intel setup running on the i7-870 will be much costlier than our all AMD setup, so we wanted to include the more affordable Core i5-750 in our tests to see how a more competitively priced setup would perform. In fact, AMD's Phenom II X4 965 processor is evenly matched in price with the Core i5-750. Motherboard choices are plentiful, but we're certain similarly priced options exist on both sides, which takes care of the even playing field in terms of platform choices in this comparison. We are also interested to see whether the lower clocked Core i5-750 shows any discernible performance deficit when pitted against the much higher clocked AMD processor.
Lastly, we are swapping the GeForce GTX 295 on our Intel setup for the Radeon HD 5870 to see, on the same platform, which is the faster card. Also, please note that we are using newer graphics card drivers in this comparison than in our last graphics card review, so these results are newer than those published earlier and cannot be directly compared.
Here are the complete system specifications:
- Intel Lynnfield Core i7-870 processor (2.93GHz) / Core i5-750 processor (2.66GHz)
- ASUS Sabertooth 55i motherboard
- NVIDIA GeForce GTX 295 graphics card (ForceWare 191.07)
- 2 x 2GB DDR3-1333 OCZ memory in dual channel mode
- Seagate 7200.10 200GB SATA hard drive
- Windows 7 Ultimate
- AMD Phenom II X4 965 processor (3.4GHz)
- MSI AMD 790GX-G65 motherboard
- ATI Radeon HD 5870 graphics card (Beta 8.66)
- 2 x 2GB DDR3-1333 Kingston memory in dual channel mode
- Seagate 7200.10 200GB SATA hard drive
- Windows 7 Ultimate
Our all AMD setup uses a Phenom II X4 965 processor mated to AMD's 790GX chipset, and is further complemented by the latest Radeon HD 5870 graphics card.
We'll be running the systems on the latest Windows 7 operating system along with our usual benchmarks, but with one new addition: Unigine's Heaven benchmark. We are introducing it because it's an excellent showcase of how DirectX 11 tessellation works. We'll elaborate later, but first, here is the complete list of benchmarks used:
- Futuremark 3DMark06
- Futuremark 3DMark Vantage
- Crysis Warhead
- Far Cry 2
- Warhammer: Dawn of War 2
- "Heaven" from Unigine
3DMark06 Results
We begin our report with the time-tested 3DMark06 benchmark, though this is just for reference as the newer Vantage is more commonly used for comparisons these days. Here, the NVIDIA/Intel system powered by the i7-870 scored the highest, but the all AMD system was only a smidge behind. The i5-750 powered system rounded out the pack.
3DMark Vantage Results
On Vantage, the dominance of the Power of 3 could not be denied, with both the i5-750 and i7-870 systems outperforming the AMD system by over 20%. Swapping the GeForce GTX 295 for the Radeon HD 5870 also tells us that the GeForce GTX 295 is the better performing card in Vantage. Still, these are synthetic benchmarks, so let's move on to real-world gaming applications.
Crysis Warhead & Far Cry 2 Results
Crysis Warhead was one for the Power of 3 platforms, as they were about 12% faster than the all AMD system. Interestingly, we also noticed that it is your graphics card that largely determines the kind of performance you get in Crysis Warhead - note the similar performance within the NVIDIA-powered setups and within the ATI-powered ones, and how the NVIDIA-powered setups came out ahead.
On Far Cry 2, the two NVIDIA/Intel systems were triumphant at the higher resolutions. Again, the two Intel-powered setups performed similarly, indicating that your choice of graphics card plays an important role. One thing we noted was that the all AMD system suffered most at the highest resolution, where the difference between it and the two Intel systems could be as much as 16%. To end, our final set of results, in which we swapped the GeForce GTX 295 for the Radeon HD 5870, is a clear indication that the GeForce GTX 295 is the better performer in Far Cry 2.
Unigine Heaven Results
Unigine's Heaven benchmark is one of the first benchmarks in the world to take full advantage of DirectX 11's tessellation feature. Running with tessellation disabled, we found that the NVIDIA/Intel systems were noticeably quicker at 1280 x 1024 and 1600 x 1200. At 1920 x 1440 however, their performance dipped considerably.
Running the benchmark with the Radeon HD 5870, we can fully appreciate the extra detail that tessellation brings. Here are some screenshots and a video.
(Screenshot: Tessellation Disabled)
While it is undeniable that tessellation adds a ton of detail to the scene, the thing about such a feature is that unless you are aware of it, you won't know what's missing. For example, if we showed you just the screenshot that wasn't tessellated and told you that tessellation was in fact enabled, would you have known we were pulling a fast one? That's the problem with tessellation, as well as the rest of the DirectX 11 features.
In terms of perception and final render output quality, there's really nothing a DX10 or DX11 based game produces that a DX9.0c level game can't recreate. There's nothing in a DX11 scene to make one go "Wow" that could not already have been achieved with a DX9.0c level implementation. What DirectX 11 does is improve the efficiency of recreating these wonderful scenes, making them more commonplace. Before DX11, achieving the same results took considerably more work, which could push a game developer to drop certain features or reduce the level of realism just to keep the game playable on more hardware.
The thing to understand about tessellation is that it allows a model or mesh with a much lower polygon count to be subdivided on the GPU into a high resolution, high quality mesh, without the memory hogging that would result if the GPU had to process a high resolution mesh from the very beginning. And thanks to that much smaller memory footprint, such a model is potentially faster to render.
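To put some rough numbers to that idea, here's a back-of-the-envelope sketch in Python. The mesh size, amplification factor and bytes-per-vertex figure are purely illustrative assumptions, not measured data, but they show why storing only a coarse control mesh is so much lighter than pre-baking all the detail into the asset:

```python
# Back-of-the-envelope comparison of vertex memory: a pre-authored
# high-resolution mesh versus a coarse mesh expanded by the GPU's tessellator.
# All figures are assumptions chosen for illustration, not measured data.

BYTES_PER_VERTEX = 32          # e.g. packed position + normal + texture coordinates

def vertex_buffer_size(vertex_count):
    """Approximate memory needed just to store the vertex buffer."""
    return vertex_count * BYTES_PER_VERTEX

coarse_vertices = 10_000       # low-poly control mesh that actually sits in memory
amplification = 16             # extra geometry the tessellator generates per control vertex
dense_vertices = coarse_vertices * amplification  # equivalent detail, pre-baked instead

print(f"Pre-baked dense mesh      : {vertex_buffer_size(dense_vertices) / 1024:.0f} KB")
print(f"Coarse mesh + tessellation: {vertex_buffer_size(coarse_vertices) / 1024:.0f} KB")
# The tessellated detail is generated on-chip each frame, so only the small
# coarse control mesh ever has to live in the vertex buffer.
```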
Now back to the benchmark. If you have taken in what we've mentioned above, you'll see that Unigine's Heaven isn't a good benchmark at all for contrasting the different DirectX standards. Look at our screenshots: when run in DirectX 9/10, or with tessellation off, the staircase is rendered more like a ramp plastered with a rock texture. Honestly, what game would render something as simple as a staircase with such a mismatch with reality?
So is the benchmark rubbish? Somewhat, but not entirely. Its purpose is to show off just how well tessellation works, and that it does show. What it doesn't show is a true DX9.0c or DX10 code path that delivers the same quality levels seen with tessellation turned on in the DX11 code path, so that the performance differential could be measured. In other words, when the DX9.0c or DX10 code path is chosen, the benchmark should still render what we saw under DX11, only with more complex meshes at varying levels of detail in place of tessellation working in the background. The memory footprint required on those older standards would then have been far heavier, resulting in a bigger performance loss than under DX11, while the rendered quality remained similar across the board. That would have made it the perfect benchmark, but alas, it falls short in this respect. To illustrate the point that even DirectX 9.0c can immerse one with equally stunning quality, here's a discussion page on Overclock.net showing Crysis in DX9 mode with parallax occlusion mapping.
Instead, Unigine's Heaven benchmark maintains the same lower quality meshes regardless of which DirectX standard it runs in. With that being the case, it's only natural that the DirectX 11 mode with tessellation enabled showcased much better 'realism'. Here's another discussion thread from Overclock.net illustrating what we've just described. As a result, turning on tessellation 'seemed' to have an adverse effect on performance, when the realism and quality levels weren't even the same to begin with.
Dawn of War 2 Results
On Dawn of War 2, the two NVIDIA/Intel systems sprinted to an early lead against the AMD setup, but as we upped the resolution, the all AMD system clawed its way back with its consistent results. Also, when we ran the Radeon HD 5870 on the same i7-870 setup that the GeForce GTX 295 was running on earlier, it became apparent that the ATI card is the better performer in Dawn of War 2, just as it has been for some time now.
Power Consumption
With NVIDIA's dual-GPU GeForce GTX 295 monster on board, it is little wonder that the readings of the two Intel/NVIDIA systems are significantly higher than those of the setups with the Radeon HD 5870. As we've mentioned in our review of the Radeon HD 5870, the new card is extremely power efficient, so it is not surprising to see AMD leading in this segment.
Other Considerations
In terms of outright performance, the alliance of Intel and NVIDIA has the upper hand most of the time. The new Radeon HD 5870 might be fast, but NVIDIA's GeForce GTX 295 still has some tricks up its sleeve. Beyond performance, however, there are other important considerations to take note of as well.
On the graphics card front, ATI's and NVIDIA's latest offerings tout different features. For ATI, the new Radeon HD 5000 series boasts full support for the new DirectX 11 API as well as Eyefinity. And as discussed in our review of the Radeon HD 5870, DirectX 11 brings a host of features, but tessellation is arguably the most attractive to both game developers and consumers, as it boosts game realism without a massive memory overhead.
While DirectX 11 sounds good and all in the long run, another consideration is its take-up by game developers. As we speak, there are only a handful of announced DirectX 11 games. Dirt 2 and Sega's AVP are perhaps the two biggest names that support DirectX 11 and are set to be released in the near future. Realistically, it'll take at least a year before we get a decent library of DirectX 11 games, and by then, it is very likely that we'll be looking forward to a newer generation of graphics cards.
Of course, this is not to say that DirectX 11 compatibility is moot. Going with a new Radeon HD 5000 series card certainly future-proofs your system, and that cannot be a bad thing. What's more, Eyefinity is a nifty feature to have. Gaming on three screens is an exhilarating experience, and additional real estate is always welcome when multi-tasking. But such an experience requires an expensive investment in multiple monitors, so it's not a readily accessible feature for all.
While NVIDIA's latest Fermi-based cards have yet to roll out, the current generation GT200 cards still have a lot going for them. 3D Vision, for example, is something we really enjoyed (read our review). Paired with the right games, it ups the fun factor and really improves the overall gaming experience.
Elsewhere, there are PhysX and CUDA. PhysX is a middleware physics engine, and as discussed in an earlier article, it vastly improves the gaming experience by making games react and interact more realistically. In the same vein, CUDA helps by accelerating tasks, such as video transcoding, which would normally take ages for traditional CPUs to complete. Adobe has also announced that its latest CS4 software takes advantage of CUDA to speed up certain processes, allowing professionals who use the software to increase their productivity, and many more CUDA-enabled end-user applications are coming by the early part of next year. Of course, such acceleration is not exactly the exclusive domain of NVIDIA, as these applications will eventually use the DirectX Compute function of the new DirectX 11 API, which is backward compatible with DirectX 10 GPU pipelines as well.
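To see why work like video transcoding maps so well onto a GPU, consider the sketch below. It's a plain Python stand-in, not actual CUDA or DirectX Compute code, and the simple brightness filter is just a hypothetical example, but it highlights the embarrassingly parallel nature of per-pixel work: every element can be processed independently, which is exactly what a GPU's hundreds of cores exploit.

```python
# Conceptual sketch only: the per-element independence that lets CUDA (or
# DirectX Compute) spread this kind of work across hundreds of GPU cores.
# Plain Python standing in for the idea; pixel values are arbitrary.

def adjust_brightness(pixel, gain=1.2):
    """Scale a single luminance value, clamped to the usual 0-255 range."""
    return min(int(pixel * gain), 255)

# A tiny "frame" of luminance values; a real video frame has millions.
frame = [12, 64, 128, 200, 255, 33, 90, 180]

# No pixel depends on any other pixel, so a GPU can process them all at
# once, while a CPU largely has to walk through them a few at a time.
processed = [adjust_brightness(p) for p in frame]
print(processed)
```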
Final Thoughts
As it stands, the three setups that we have are quite evenly matched. If absolute performance is your concern, then the NVIDIA/Intel Power of 3 system powered by the i7-870 processor is the one to go for. Compared with the all AMD system, it is notably quicker in most games.
We also discovered, as we replaced the GeForce GTX 295 with the Radeon HD 5870 on the i7-870 setup, that the GeForce GTX 295 is generally still the faster card, outperforming the Radeon HD 5870 in most of our benchmarks. There are exceptions though, such as Dawn of War 2.
But as things usually are, you get what you pay for, and it is little wonder that our top performing system is also the most costly. Price is a concern here, as a single i7-870 processor costs a substantial US$550. Thankfully, the i5-750 is more affordable at US$199, yet still provides comparable performance for gaming needs. The GeForce GTX 295 is still arguably the world's fastest single graphics card, and as such commands a premium as well at around US$500 a pop. For motherboards, it really depends on what features you want. The ASUS Sabertooth 55i we are using is one of the top P55 motherboards and as such costs more. But if you are willing to sacrifice some features, cheaper boards can be found that are price-competitive with some AMD motherboards too.
The outright performance crown belongs to Intel and NVIDIA, but it comes with a hefty price tag and high power consumption. Plus, it's not exactly 'future-proof', though it does have other experiential features to boot.
However, if you want a future-proof, value-for-money, yet relatively powerful system, AMD is the one to turn to. The Phenom II X4 965 processor is slightly cheaper at US$195 and the Radeon HD 5870 goes for around US$379. Additionally, AMD 790GX motherboards are usually cheaper than Intel P55 ones. The power efficiency of this platform can't be overlooked either. Most interestingly, pairing the Radeon HD 5870 with the Intel platform actually yielded the best power efficiency of the lot.
At the end of the day, it really depends on what you want and how much you can stretch. The outright performance crown belongs to NVIDIA and Intel, whereas an all AMD setup offers great bang-for-buck and even some future-proofing with DirectX 11 support. You decide.