To keep our results consistent with past graphics card benchmarks, I've stuck with my usual test-bench specifications:
Since AMD has openly pitted the RX 6900 XT against NVIDIA's behemoth (literally) GeForce RTX 3090, I've pulled out our Founders Edition and put it through its paces again with the latest GeForce driver, version 457.51 at the time of writing. I've also included the Radeon RX 6800 XT for good measure.
Here's a list of the tools and games I've chosen to benchmark all the cards with. The game genres were purposely varied to give us a sense of how the cards perform across a wide range of titles: shooters, action, strategy games, and so on.
3DMark is a synthetic benchmark that tests graphics and computational performance at different resolutions, starting at 1080p and going all the way up to 4K. Two graphics tests, a physics test, and a combined test stress your hardware in turn to assess its performance.
We see the Radeon RX 6900 XT trailing the GeForce RTX 3090 FE in both DirectX 12-based Time Spy tests, which, to be honest, is not surprising. It's the results of the DirectX 11-based Fire Strike benchmarks that are more interesting, as there the RTX 3090 FE falls behind both Radeon cards. It's a repeat of the result between the RX 6800 XT and the RTX 3080 Founders Edition, and it looks like the RX 6000 series cards are simply better optimised for the older API, even though most modern games have ditched it in favour of DirectX 12.
A synthetic benchmark like 3DMark only tells one small part of the story, though. Let's look at some real-world game performance below.
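As an aside on methodology: when summarising frame rates across games with very different performance levels, reviewers typically use a geometric mean rather than a simple average, so that one unusually fast or slow title doesn't dominate the summary. Here's a minimal sketch of the idea; the FPS figures below are made-up illustrative numbers, not results from this review.

```python
from math import prod

def geomean(values):
    """Geometric mean: the n-th root of the product of n values.

    Less sensitive than the arithmetic mean to a single outlier title.
    """
    return prod(values) ** (1 / len(values))

# Hypothetical average-FPS results for one card across three titles
fps = [142.0, 88.5, 61.0]
print(f"Summary score: {geomean(fps):.1f} FPS")
```

The arithmetic mean of those same numbers would be pulled noticeably higher by the 142 FPS outlier, which is exactly what the geometric mean guards against.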
I've said it before, and I'll say it again: it's a huge waste of money for any gamer to buy this generation's flagship cards just to run games at 1080p; mid-range cards like the RTX 3070 or RTX 3060 Ti are better suited for that. Nevertheless, for the purpose of benchmarking, let's find out how the Radeon RX 6900 XT fares at this resolution.
The results speak for themselves here, and there are some interesting highlights. For one, it's not surprising to see the RTX 3090 FE outperforming both cards in most games, though the gap isn't large at 1080p. What's eye-catching is how the RX 6900 XT (and even the RX 6800 XT) outperforms the GeForce card in Watch Dogs: Legion and Call of Duty: Black Ops Cold War, two games that are heavily optimised for NVIDIA cards. It looks like driver optimisation is playing a big role here. It's also worth noting that the RX 6800 XT isn't far behind its more powerful sibling.
As with the 1080p benchmarks, my 1440p tests pushed each game's graphics settings as high as they would go.
These results largely mirror the 1080p benchmarks, although the margins the Radeon cards gained in the earlier tests have narrowed. In short, you really can't go wrong with the Radeon RX 6900 XT, or any of these cards for that matter, for 1440p gaming.
I have long held that with the arrival of the Radeon RX 6800 XT, AMD finally entered the realm of proper 4K gaming. The RX 6900 XT, then, is pretty much an extension of what the RX 6800 XT is already capable of.
Here we see the RX 6900 XT losing rather significant ground to the RTX 3090 FE, trailing in almost all the games except Call of Duty: Black Ops Cold War and Gears 5, and only just barely edging ahead in those. It's not far-fetched to claim that the GeForce RTX 3090 is still the dominant card for 4K and beyond.
My choice of games for ray tracing benchmarks is more limited, as not every ray-traced title runs on both AMD and NVIDIA hardware. Wolfenstein: Youngblood's ray tracing, for instance, is only available on NVIDIA RTX cards, while Shadow of the Tomb Raider's DXR implementation supports both vendors. In any case, given what we've seen so far in the standard benchmarks, it's no surprise that NVIDIA maintains its lead in ray tracing performance.
NVIDIA's dominance in ray tracing isn't that surprising; it has had a two-year head start on AMD, after all. The two companies also approach ray tracing differently: AMD builds a Ray Accelerator into each of its compute units, while NVIDIA uses dedicated RT cores that are now in their second generation. Perhaps, just perhaps, one method works better than the other?