At CES 2019, AMD unveiled the Radeon VII graphics card, a move that caught many of us by surprise. I had thought that the next time I saw a high-end card from AMD, it would be when the company announced its Navi architecture (which is still under wraps), but AMD is clearly not willing to stay quiet for so long.
The new Radeon VII is the world’s first consumer GPU manufactured on a 7nm process technology. It is based on AMD’s existing Vega architecture, and the company is positioning it as a card that can handle 4K gaming at the highest settings. That means that it should be going up against the GeForce RTX 2080, at the very least, an exciting prospect since it’s been a while since AMD released a card that was capable of competing with NVIDIA’s best.
AMD's reference model sports a triple-fan cooler and display outputs comprising three DisplayPort connectors and one HDMI port, which is pretty par for the course these days. Its metal construction gives the card a really solid feel, but I have to say that it still doesn't feel as good as NVIDIA's Founders Edition models.
The new 7nm process technology (NVIDIA’s Turing cards are still based on 12nm) is a core part of how AMD managed to improve performance on the Vega architecture. For starters, the company was able to shrink the Vega GPU die from 495mm² to 331mm², thus creating the space for two additional stacks of HBM2 memory and bringing the total amount of memory to 16GB. In this manner, the company effectively doubled the memory bandwidth of the Radeon RX Vega 64, and the Radeon VII now boasts a total memory bandwidth of 1TB/s.
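The quoted 1TB/s figure follows directly from the wider memory interface. Here’s a back-of-envelope check, assuming the standard 1,024-bit interface per HBM2 stack and a 2.0Gbps effective data rate per pin (the per-pin rate is an assumption for illustration, not a figure from this article):

```python
# Rough sanity check of the Radeon VII's 1TB/s memory bandwidth claim.
stacks = 4                  # two more stacks than the RX Vega 64's two
bus_width_per_stack = 1024  # bits; standard for an HBM2 stack
pin_rate_gbps = 2.0         # assumed effective transfer rate per pin

bus_width = stacks * bus_width_per_stack          # 4,096-bit bus
bandwidth_gbs = bus_width * pin_rate_gbps / 8     # bits/s -> bytes/s

print(f"{bus_width}-bit bus, {bandwidth_gbs:.0f} GB/s")
# -> 4096-bit bus, 1024 GB/s
```

Doubling the stack count doubles the bus width, which is why bandwidth doubles even without a faster memory clock.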
On top of that, AMD cited optimizations to increase frequencies and reduce latencies, while increasing the bandwidth for the render output units to offer improved gaming performance.
The memory requirements for popular games have increased significantly over the years, so AMD thinks the Radeon VII’s generous 16GB of HBM2 memory and 4,096-bit memory bus will help accommodate the high-resolution textures found in many modern games. According to AMD, while the larger frame buffer may not make that much of a difference if you’re just looking at average frame rates, it can supposedly deliver a more consistent frame rate, which may give a smoother experience.
In addition, the card can also take advantage of AMD's High Bandwidth Cache Controller (HBCC), which reserves a portion of system memory for use by the GPU, effectively expanding the available VRAM. The HBCC then manages the migration of data between the card's VRAM and the system memory.
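The idea behind HBCC is essentially that of a cache: VRAM holds the hot pages, and a larger pool of system memory backs it up, with pages migrated on demand. Here’s a toy sketch of that concept; the class name, page granularity, and LRU eviction policy are all illustrative assumptions, not AMD’s actual implementation:

```python
from collections import OrderedDict

class ToyHBCC:
    """Toy model of a high-bandwidth cache controller: VRAM as an
    LRU cache over a larger pool of system memory (illustrative only)."""

    def __init__(self, vram_pages):
        self.vram_pages = vram_pages
        self.vram = OrderedDict()  # page id -> data, ordered by recency
        self.system_memory = {}    # overflow pool backing the cache

    def access(self, page, data=None):
        if page in self.vram:
            self.vram.move_to_end(page)  # hit: refresh recency
        else:
            if len(self.vram) >= self.vram_pages:
                # Evict the least recently used page to system memory.
                victim, victim_data = self.vram.popitem(last=False)
                self.system_memory[victim] = victim_data
            # Migrate the page into VRAM (from system memory if present).
            self.vram[page] = self.system_memory.pop(page, data)
        return self.vram[page]
```

With a two-page "VRAM", touching a third page evicts the least recently used one to system memory, and touching it again migrates it back — which is the kind of traffic the HBCC manages in hardware.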
To sum up, here’s a look at how it stacks up against the previous Vega cards:
| | Radeon VII | Radeon RX Vega 64 | Radeon RX Vega 56 |
| --- | --- | --- | --- |
| Architecture codename | Vega 20 | Vega 10 | Vega 10 |
| Transistor count | 13.2 billion | 12.5 billion | 12.5 billion |
| Next Gen Compute Units | 60 | 64 | 56 |
| High Bandwidth Cache (HBM2) | 16GB | 8GB | 8GB |
| Memory bus width | 4,096-bit | 2,048-bit | 2,048-bit |
Temperature monitoring capabilities have also been improved on the Radeon VII. Traditionally, a single sensor located at the legacy thermal diode supplies the reading we see reported as the GPU temperature. This temperature is then used for fan control and to implement thermal throttling policies.
However, AMD says the Radeon VII now features a network of thermal sensors across the GPU die. The maximum reading across that sensor network gives what's called the junction temperature, so it reflects the hottest point on the die rather than one fixed location.
The Radeon VII GPU features 64 temperature sensors distributed across the chip, which is twice the number found on the Vega 64. The card also uses junction temperature for implementing thermal throttling and fan control, which confers several benefits.
First off, this supposedly allows the GPU to more reliably maximize its performance potential, instead of prematurely throttling performance according to reported temperatures from the hottest parts of the chip. In addition, AMD says junction temperatures provide a more effective control point for throttling and improve the reliability of the chips in question. Gamers will be able to view both junction and GPU temperature in Radeon Wattman, which gives them more insight into and control over their GPU.