
AMD unveils details on next-generation High Bandwidth Memory

By Koh Wanzi - on 22 May 2015, 8:58am


Image Source: AMD

After many rumors, AMD finally confirmed at its 2015 Financial Analyst Day earlier this month that it would be implementing what it calls High Bandwidth Memory (HBM) in its next generation of graphics cards. All we knew then was that HBM was a form of 3D memory that involved stacking DRAM dies on top of each other and locating them nearer the processor in order to increase memory bandwidth.

But that’s changed now as AMD has released concrete details on the new HBM architecture and the benefits it will bring. As its name suggests, HBM will bring vast improvements to memory bus width and bandwidth. Traditional GDDR5 memory runs on a 32-bit bus at up to 1750MHz, which translates into an effective bandwidth of up to 28GB/s per chip.

Each HBM stack will instead have a 1024-bit bus width, but run at a lower clock speed of 500MHz. The wider bus and stacked DRAM dies combine to produce a huge increase in bandwidth per watt, boosting it from 10.66GB/s per watt for GDDR5 to over 35GB/s per watt.
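The quoted figures can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below assumes GDDR5's quad data rate (four transfers per clock) and HBM's double data rate (two transfers per clock); these multipliers are not stated in the article itself:

```python
def bandwidth_gbs(bus_width_bits, clock_mhz, transfers_per_clock):
    """Effective memory bandwidth in GB/s.

    bits per transfer * transfers per second / 8 bits per byte / 1e9.
    """
    return bus_width_bits * clock_mhz * 1e6 * transfers_per_clock / 8 / 1e9

# GDDR5: 32-bit bus at 1750MHz, quad-pumped (7Gbps per pin)
gddr5_per_chip = bandwidth_gbs(32, 1750, 4)

# HBM: 1024-bit bus at 500MHz, double data rate (1Gbps per pin)
hbm_per_stack = bandwidth_gbs(1024, 500, 2)

print(f"GDDR5 per chip:  {gddr5_per_chip:.0f} GB/s")  # 28 GB/s
print(f"HBM per stack:  {hbm_per_stack:.0f} GB/s")    # 128 GB/s
```

This matches the 28GB/s per GDDR5 chip quoted above, and shows why a single HBM stack can deliver far more bandwidth despite its much lower clock: the 32x wider bus more than offsets the 3.5x drop in clock speed.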

Image Source: AMD Community Blog

As games and other graphics applications grow more demanding, GDDR5 is struggling to keep pace: squeezing more bandwidth out of it requires an increase in power consumption that is becoming unfeasible. HBM effectively resets the clock on memory power efficiency, offering more than 3.5x the bandwidth per watt.

Furthermore, the vertically-stacked structure of the memory chips means that HBM can yield significant space savings on a circuit board. A 1GB chip of GDDR5 memory measures around 24 x 28mm, but an equivalent 1GB stack of HBM would be roughly 94% smaller at just 5 x 7mm.
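Taking the quoted footprints at face value, the space saving follows directly from the surface areas:

```python
# Surface areas in mm^2, using the footprints quoted in the article
gddr5_area = 24 * 28  # 1GB of GDDR5: 672 mm^2
hbm_area = 5 * 7      # 1GB HBM stack: 35 mm^2

saving = 1 - hbm_area / gddr5_area
print(f"HBM footprint reduction: {saving:.1%}")  # 94.8%
```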

Image Source: AMD Community Blog

The stacked memory chips use a new type of interconnect called through-silicon vias (TSVs), together with microbumps, to interface with one another vertically.

Image Source: Tom's Hardware

Four DRAM dies are stacked directly on top of a logic die, which is in turn attached to a silicon-based interposer. The interposer is also connected directly to the GPU, CPU or SoC die via the new TSV interconnects.

Image Source: AMD Community Blog

AMD claims that HBM will surpass all power, performance and form factor boundaries set by GDDR5, which has been the industry standard for the past seven years. NVIDIA’s 2016 Pascal GPU architecture will also feature stacked DRAM, so we’ll be seeing improvements in this area from both camps.

Either way, we’ll soon be able to test these claims when AMD’s next-generation GPUs are released later this year.

Source: AMD Community Blog 
