AMD Radeon RX 5700 XT and 5700 review: Right back in the game
Navi has arrived at last.
Note: This article was first published on 7 July 2019.
Going after a slice of the mainstream pie
AMD's Navi architecture has seemed as distant as its namesake for the longest time, but the wait is over at last. The company's Radeon RX 5700 XT and 5700 turn a fresh page for AMD on the GPU front, ditching the longstanding Graphics Core Next (GCN) architecture in favor of a brand new design called RDNA.
According to AMD, RDNA (or Radeon DNA) marks the start of a serious effort to compete at the highest echelons of the graphics card market. That said, when AMD first announced its new Radeon cards ahead of E3 this year, it was clear the company was framing the Radeon RX 5700 XT and 5700 as just the beginning of its return to the high-end. In other words, AMD wants to get there, but it still has some way to go, which should give you an idea of where the new Navi cards stand.
They aren't anywhere close to challenging the NVIDIA GeForce RTX 2080 Ti or even the 2080, but they are quite competitive in the mainstream segment currently occupied by the GeForce RTX 2060 Super and 2060. In fact, this positioning was a late pivot on AMD's part, with the company announcing a US$50 price drop just a day ahead of launch. AMD originally expected the Radeon RX 5700 XT to square up against the GeForce RTX 2070, but with the release of the GeForce RTX 2070 Super, it clearly decided to position the card against the GeForce RTX 2060 Super instead, which costs the same at US$399.
No matter though. The mainstream segment is the largest part of the gaming GPU market and where the volume really is, so it's really good to see even more competition in that area. But before I dive into the results, here's a look at what's new with AMD's RDNA architecture and the new Radeon RX 5700 series.
A new Compute Unit
It's been seven long years, and AMD is finally bidding goodbye to GCN. RDNA is designed to be more efficient, do more work per clock cycle, and be better suited overall to gaming workloads.
At its core, it introduces new instructions better suited to visual effects like volumetric lighting, alongside a redesigned Compute Unit (CU), an improved multi-level cache hierarchy for lower latencies, and a more streamlined graphics pipeline. The SIMD (Single Instruction, Multiple Data) units have also been reworked, with the redesign prioritizing single-threaded performance and improving effective IPC.
However, the biggest differences between GCN and RDNA can probably be boiled down to the changes AMD made to the CU. While AMD did introduce the Next-Generation Compute Unit (NCU) on Vega, the RDNA CU has a more gaming-oriented design.
Image Source: AMD
The RDNA CU features twice the number of scalar units and schedulers as its GCN counterpart, doubling the possible instruction rate in order to create a more efficient CU for the standard graphics workloads encountered in games.
GCN used SIMD16 units, which means each one operated on 16 data elements at a time. The four SIMD16 units in a single GCN compute unit were good for complex calculations but not the best fit for gaming, because they operated on a 4-cycle issue, so a full 64-wide wavefront couldn't be issued in a single clock cycle. The new RDNA compute unit instead uses two SIMD32 units with single-cycle issue, which allows for higher throughput and more efficient utilization of the GPU. With RDNA, there's no need to wait for an instruction to pass through all four cycles.
This reduces latency while adding greater parallelism: 64 threads can be treated as two Wave32 wavefronts and executed in a single clock by the two SIMD32 units. RDNA added support for Wave32 execution in order to facilitate higher performance in games, but Wave64 is still supported.
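To make the issue-rate difference concrete, here's a minimal, illustrative Python sketch of the behavior described above. It models only what the text describes, that is, issue cycles for a single batch of threads on one CU, and deliberately ignores the fact that a GCN CU has four SIMD16 units juggling other wavefronts, as well as memory latency and scheduling; the function names are my own.

```python
# Simplified model of wavefront issue, NOT the actual hardware scheduler:
# it counts issue cycles for one instruction over a batch of threads,
# ignoring the other SIMDs in a GCN CU, memory latency, and co-issue.

def gcn_issue_cycles(threads: int) -> int:
    """GCN: threads are packed into 64-wide wavefronts (Wave64),
    and each Wave64 takes 4 cycles to issue on a SIMD16 unit."""
    wave64_count = -(-threads // 64)      # ceiling division
    return wave64_count * 4


def rdna_issue_cycles(threads: int) -> int:
    """RDNA: threads are packed into 32-wide wavefronts (Wave32), each
    issuing in a single cycle, with two SIMD32 units per CU in parallel."""
    wave32_count = -(-threads // 32)      # ceiling division
    return -(-wave32_count // 2)          # two Wave32s issue per cycle


if __name__ == "__main__":
    for n in (64, 256, 1024):
        print(f"{n} threads: GCN {gcn_issue_cycles(n)} cycles, "
              f"RDNA {rdna_issue_cycles(n)} cycles")
```

For 64 threads, the model gives 4 cycles on a GCN SIMD16 versus a single cycle on RDNA's pair of SIMD32 units, which is exactly the difference the paragraph above is getting at.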
The concept of resource pooling is also an important one, and it lets two adjacent CUs work together and function as something called a Work Group Processor, which improves parallelism and allows for access to up to four times the cache bandwidth.
Image Source: AMD
Better cache hierarchy
RDNA also includes a new L1 cache design, with an extra 128KB of dedicated L1 cache for each of the four compute engines on the die. The load bandwidth from the L0 cache to the ALUs has also been doubled, which reduces cache latency at each level and improves effective bandwidth. In addition, the extra L1 cache reduces demand on the L2 cache, which helps increase bandwidth further.
Image Source: AMD
On top of that, the Delta Color Compression (DCC) algorithm has been improved and made available to the larger part of the cache subsystem. Shaders can now read and write compressed color data, while the display engine can also read compressed data in the frame buffer without needing to have it decompressed first. Overall, this boosts effective bandwidth throughout the GPU.
A new look for AMD
Meet AMD's new Navi cards.
The new Radeon RX 5700 XT and 5700 both feature GDDR6 memory and PCIe 4.0 support, making them the only consumer graphics cards at the moment to support the latest PCIe standard. PCIe 4.0 offers twice the interconnect bandwidth of PCIe 3.0, or up to 16GT/s per lane, but we're not yet at the point where single GPU configurations can really benefit from that.
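As a rough sanity check on that doubling claim, here's a quick back-of-envelope calculation in Python for the per-direction bandwidth of an x16 link, assuming the 128b/130b line coding shared by PCIe 3.0 and 4.0 (the function name is just for illustration).

```python
# Illustrative math only: per-direction bandwidth of a PCIe x16 link,
# assuming 128b/130b line coding. Real-world throughput is lower once
# protocol overhead is factored in.

def pcie_x16_bandwidth_gb_s(transfer_rate_gt_s: float, lanes: int = 16) -> float:
    encoding_efficiency = 128 / 130                       # 128b/130b coding
    bits_per_s = transfer_rate_gt_s * 1e9 * lanes * encoding_efficiency
    return bits_per_s / 8 / 1e9                           # bits -> GB/s

print(f"PCIe 3.0 x16: ~{pcie_x16_bandwidth_gb_s(8):.1f} GB/s")    # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{pcie_x16_bandwidth_gb_s(16):.1f} GB/s")   # ~31.5 GB/s
```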
That aside, the Radeon RX 5700 XT is clearly the one with the more eye-catching reference design. Unlike NVIDIA's Founders Edition cards, which all share the same look, the Radeon RX 5700 XT has a sleek, grooved chassis that evokes some sort of futuristic body suit, while the Radeon RX 5700 is little more than a featureless gray slab. The latter also lacks any sort of backplate, leaving the PCB completely exposed.
Here's a look at the backplate on the Radeon RX 5700 XT.
Then there's that funny quirk in the side of the Radeon RX 5700 XT. At first, it looks like an optical illusion brought about by some clever kink in the grooved lines running along the card's side. But it's no mirage: there really is a dent there, adding a further bit of intrigue to the design. It's completely for show though, serving no purpose beyond visual interest.
AMD Radeon RX 5700 XT.
Both cards are cooled by blower fans, which means they'll be pretty happy in small form factor cases since they dump air out the back. Power requirements are identical too, with one 8-pin and one 6-pin connector apiece, and both cards get a 7-phase power delivery subsystem for overclocking.
When it comes to build quality though, there's no way they measure up to NVIDIA's Founders Edition cards. I wouldn't call them poorly built – they sure feel solid enough – but it's difficult to beat a card that has a full aluminum shroud.
AMD Radeon RX 5700.
Here's an overview of their specifications:
| | Radeon RX 5700 XT | Radeon RX 5700 |
| --- | --- | --- |
| Compute Units | 40 | 36 |
| Stream processors | 2,560 | 2,304 |
| Base/Boost clock | 1,605MHz / 1,905MHz | 1,465MHz / 1,725MHz |
| Game clock | 1,755MHz | 1,625MHz |
| Memory | 8GB GDDR6 | 8GB GDDR6 |
| Memory bus width | 256-bit | 256-bit |
| Memory frequency | 14,000MHz (effective) | 14,000MHz (effective) |
| Memory bandwidth | 448GB/s | 448GB/s |
| ROPs | 64 | 64 |
| Texture units | 160 | 144 |
| TDP | 225W | 180W |
| Price (USD) | US$399 | US$349 |
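For reference, the 448GB/s memory bandwidth figure falls straight out of the other memory specs in the table: the effective per-pin data rate multiplied by the bus width, converted from bits to bytes. Here's the arithmetic as a tiny Python snippet (the function name is just for illustration).

```python
# Quick check on the 448GB/s figure: effective data rate per pin (Gbps)
# multiplied by bus width (bits), divided by 8 to convert to bytes.

def gddr6_bandwidth_gb_s(effective_gbps_per_pin: float, bus_width_bits: int) -> float:
    return effective_gbps_per_pin * bus_width_bits / 8

print(gddr6_bandwidth_gb_s(14, 256))   # 448.0 GB/s for both cards
```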
One new specification you'll notice is the Game clock. It's exactly what it sounds like: an indicator of the typical boost frequency you can expect in your average game. AMD wants gamers to be crystal clear about what they're getting from these cards, so the Game clock helps set expectations for the frequencies you should see while gaming. That said, AMD says you should still see clock speeds in excess of the Game clock pretty frequently.
The Game clock is probably best compared with NVIDIA's boost clock, as both refer to sustained speeds achievable in game. AMD's boost clock, on the other hand, is better thought of as something like a burst clock, which can be achieved during spikes in the graphics processing workload.
With the details out of the way though, it's finally time to take a look at their performance, which is probably what you've really been waiting for.
Test setup
The detailed specifications of our new graphics card testbed system are as follows:
- Intel Core i7-8086K (4.0GHz, 12MB L3 cache)
- ASUS ROG Maximus X Hero (Intel Z370)
- 4 x 8GB G.Skill Ripjaws V DDR4-3000 (Auto timings: CAS 15-15-15-35)
- Samsung 860 EVO 500GB SSD
- Windows 10 Pro 64-bit
- ASUS PB287Q, 4K monitor
The full line-up of graphics cards tested is listed below:
- NVIDIA GeForce RTX 2070 Super Founders Edition
- AMD Radeon RX 5700 XT
- NVIDIA GeForce RTX 2070 Founders Edition
- AMD Radeon RX 5700
- NVIDIA GeForce RTX 2060 Super Founders Edition
- NVIDIA GeForce RTX 2060 Founders Edition
Next up, here's a list of all the benchmarks used:
- 3DMark
- Ashes of the Singularity: Escalation
- Deus Ex: Mankind Divided
- Far Cry 5
- Metro Exodus
- Middle-earth: Shadow of War
- Shadow of the Tomb Raider
- Tom Clancy's The Division 2
3DMark
The synthetic 3DMark benchmark tests graphics and computational performance at different resolutions, starting at 1080p and going all the way up to 4K. A series of two graphics tests, a physics test, and a combined test stresses your hardware in turn to assess its performance.
The scores are pretty close between the competing cards here, with the Radeon RX 5700 XT coming quite close to the GeForce RTX 2070 Super and the Radeon RX 5700 doing the same to the GeForce RTX 2060 Super. In Fire Strike Extreme, the 5700 XT was around 11 percent faster than the GeForce RTX 2060 Super, while the Radeon RX 5700 was 16 percent ahead of the GeForce RTX 2060.
Ashes of the Singularity: Escalation
Ashes of the Singularity has long been the poster child for the performance benefits a low-level API like DirectX 12 can bring. It is based on the Nitrous engine and can be extremely punishing thanks to the huge number of onscreen units and the sheer level of detail accorded to each unit. However, the CPU does become the limiting factor at lower resolutions and settings.
As in 3DMark, the Radeon RX 5700 XT and 5700 were neck-and-neck with the GeForce RTX 2070 Super and 2060 Super respectively. However, it's important to remember that the Radeon RX 5700 costs the same as the GeForce RTX 2060, and it's a good 13 percent faster. Similarly, at the more demanding 1440p resolution, the Radeon RX 5700 XT was roughly 10 percent ahead of the GeForce RTX 2060 Super, which also costs US$399.
Deus Ex: Mankind Divided
Mankind Divided features just about every trick to make your game look pretty, including things like volumetric and dynamic lighting, screenspace reflections, and cloth physics. Even though it was released in 2016, the game is capable of bringing even the most powerful systems to their knees.
The same pattern is observed here as well, and the Radeon RX 5700 XT and 5700 continued to beat the GeForce RTX 2060 Super and 2060 at the same price. For example, at the 1440p resolution and Ultra settings, the Radeon RX 5700 XT was 11 percent faster than the GeForce RTX 2060 Super, while the Radeon RX 5700 was a solid 15 percent ahead of the GeForce RTX 2060.
Far Cry 5
Far Cry 5 doesn't seem to scale that well between different graphics cards and settings. At 4K, the GeForce RTX 2070, Radeon RX 5700, and GeForce RTX 2060 Super even turned out the same result, which is odd for such a demanding resolution, where the graphics card is supposed to be the limiting factor. Either way, it speaks to how close in performance many of the cards here are.
However, the Radeon RX 5700 had a particularly standout performance against the GeForce RTX 2060 here (again, remember that both cards cost the same). At 1440p and Ultra settings, the AMD card was a whopping 25 percent faster.
Metro Exodus
Metro Exodus runs better in DirectX 12, so that's the setting we chose to run our benchmarks at. There's just one caveat though – actual in-game performance is generally better than the results you get in the in-game benchmark, so this is best taken as an indicator of relative performance, rather than the absolute numbers you can expect in game.
The Radeon cards continued to edge out their similarly priced NVIDIA counterparts here, with the Radeon RX 5700 XT beating the GeForce RTX 2060 Super by nearly 14 percent at 1440p and Ultra settings. Likewise, the Radeon RX 5700 was ahead of the GeForce RTX 2060 by a good 22 percent.
Middle-earth: Shadow of War
Shadow of War offered NVIDIA a chance to fight back, and the GeForce RTX 2060 Super traded blows with the Radeon RX 5700 XT at different settings and resolutions. The Radeon RX 5700 still had a clear lead over the GeForce RTX 2060 though, particularly at the less demanding settings.
Shadow of the Tomb Raider
Like Metro Exodus, Shadow of the Tomb Raider runs better in DirectX 12 as well. DLSS and ray tracing have been added to the game already, but these numbers are obtained without those features turned on.
Performance was really close between the GeForce RTX 2060 Super and Radeon RX 5700 XT, and it was difficult to tease out significant differences across the board. That said, at 1440p and Highest settings, the Radeon RX 5700 still raced ahead to a 26 percent lead over the GeForce RTX 2060.
Tom Clancy's The Division 2
The Division 2 is another new addition to our benchmark suite, replacing 2016's The Division. We've also shifted to DirectX 12 here because of the performance gains offered by the low-level API.
Once again, AMD has the advantage here when comparing directly for price. Still, the difference was smaller at Ultra settings. The Radeon RX 5700 also continued to command a significant lead over the GeForce RTX 2060, posting a good 15 percent advantage at 1440p and High settings.
Temperature and power consumption
The temperature measurements were obtained after running 40 loops of 3DMark Fire Strike Extreme's stress test and checking the peak sustained temperature. Similarly, the total system power consumption figures were read off a power meter during a run of Fire Strike Extreme.
NVIDIA's cooling solution proved more effective than AMD's, with considerably lower temperatures across the board. Idle power consumption numbers were good, although the Radeon RX 5700 XT drew more in Fire Strike Extreme than the GeForce RTX 2060 Super.
"Jebaited!"
Competition is definitely heating up.
I'm not sure anyone was quite as excited about AMD's Navi GPUs as they were about its Ryzen 3000 processors, but I think there's actually quite an interesting story to tell here. After yesterday's price drop, the Radeon RX 5700 XT and 5700 suddenly became a whole lot more intriguing. They now cost S$635 and S$535 respectively, well within the range of custom NVIDIA partner cards.
While AMD seemed to originally be setting up the Radeon RX 5700 XT for a comparison with the GeForce RTX 2070, and the Radeon RX 5700 for the GeForce RTX 2060, that's completely changed now. One way of looking at the price slash is that the company was simply caught off guard by the release of NVIDIA's Super cards, desperately trying to remain competitive. But there may be something else going on here as well, and maybe a juicy bit of cutthroat corporate rivalry.
The Radeon RX 5700 XT and Radeon RX 5700 consistently outperformed the GeForce RTX 2060 Super and 2060.
On 4 July, AMD's Scott Herkelman tweeted out "Jebaited", a cryptic message that suddenly makes a lot of sense. If you're one for drama, you could argue that AMD baited NVIDIA into pricing the Super cards the way it did, all the while planning to drop the price close to launch.
Still, regardless of whether AMD really is as canny as Herkelman's tweet suggests, the Radeon RX 5700 XT and 5700 are both fearsome contenders in the mainstream. At virtually the same price as the NVIDIA GeForce RTX 2060 Super and 2060, they are consistently faster than both NVIDIA cards. That makes for some stiff competition, and if I were NVIDIA I'd be pretty worried.
However, it's worth mentioning that NVIDIA still has the upper hand when it comes to real-time ray tracing. At the moment, that isn't a huge deal since not many titles use it, but E3 saw a slate of AAA games announce support for the feature, including hotly anticipated titles like Cyberpunk 2077, Watch Dogs: Legion, Call of Duty: Modern Warfare, and Atomic Heart. When 2020 rolls around, it's possible that the Radeon RX 5700 XT and 5700's lack of ray tracing acceleration will feel slightly more glaring.
But while it might take some time for people to get used to thinking of AMD as a viable option for a gaming graphics card again, it no longer seems like such a stretch. You can worry all you want about future-proofing, but that concept is also at odds with yearly hardware release cycles, and the fact remains that these AMD cards are very, very good.
The GeForce GTX 1060 is the most common GPU in Steam's hardware survey, so it's clear that the majority of consumer dollars are still going toward mid-range GPUs. The GTX 1060 is also going on three years old, so many folks who bought the card at launch may very well be on the lookout for an upgrade. When that happens, there's a good chance that they'll run into Navi's waiting arms.