AMD is making a fresh play for a share of the data center market with its new EPYC server chips

By Koh Wanzi - on 18 May 2017, 11:15am

AMD’s return to the high-end CPU and GPU segments has been marked by ambitious forays into artificial intelligence and machine learning, and it is rounding that effort off with a play in the data center space as well.

At its Financial Analyst Day yesterday, the Silicon Valley company officially unveiled its Naples server processor under the new EPYC branding (AMD is clearly on a roll here with its name choices).

The "Naples" data center chips were first  officially announced earlier this year, but this is the first time that AMD took the wraps off the actual brand name.

The EPYC chip is massive, with a whopping 32 cores, 64 threads, and 128 PCIe 3.0 lanes. The fundamental core design is based on AMD's Zen architecture, the same one used in its recent desktop salvo with Ryzen. EPYC also supports a two-socket server design, offering eight memory channels per socket for a total of 16 DDR4 channels and 32 DIMMs.

Furthermore, EPYC can take up to 4TB of memory in a two-socket server, which positions it to handle workloads that consume huge amounts of memory.
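
As a rough check of how those memory figures fit together, here is a minimal sketch of the arithmetic. The 128GB module size below is our assumption (it is what the quoted 4TB ceiling implies), not a figure AMD gives here:

```python
# Back-of-the-envelope check of the EPYC memory figures quoted above.

channels_per_socket = 8
sockets = 2
dimms_per_channel = 2         # implied by 16 channels and 32 DIMM slots
dimm_capacity_gb = 128        # assumption: the largest DDR4 modules of the day

total_channels = channels_per_socket * sockets          # 16
total_dimms = total_channels * dimms_per_channel        # 32
max_memory_tb = total_dimms * dimm_capacity_gb / 1024   # 4.0

print(total_channels, total_dimms, max_memory_tb)       # 16 32 4.0
```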

Image Source: AMD

In comparison, Intel supports just four memory channels per socket – although its memory controller may compensate somewhat with better efficiency – and stratifies its Xeon lineup into single-, two-, and four-socket parts.

On the other hand, AMD says that all EPYC processors will have the same memory channels, I/O and capabilities, and the company is making a deliberate effort to avoid locking specific features to certain chips. It did this with Ryzen as well, where it enabled overclocking across the board for all SKUs.

Unsurprisingly, AMD wants EPYC to be used for machine learning and AI-related work, and it designed EPYC to have enough I/O bandwidth to handle multiple Radeon Instinct cards even when used in a single-socket configuration.

The 128 PCIe 3.0 lanes can drive a total of six cards using 16 lanes each, with 32 lanes remaining for intra-chip communication. AMD also took care to point out that other servers would require multi-socket systems to gain access to enough lanes to run the required number of GPUs.

For instance, Intel offers 80 PCIe 3.0 lanes in a two-socket system and just 40 in a single-socket one.

As it turns out, EPYC will offer 128 lanes whether it is used in a single- or two-socket configuration. The difference lies in how many lanes are given over to intra-chip communication – in a two-socket setup, 64 PCIe 3.0 lanes are dedicated to that instead of just 32, while each CPU serves up 64 lanes of I/O.
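
To tie those lane figures together, here is a quick back-of-the-envelope check of the arithmetic quoted above; the variable names are ours, purely for illustration, and the numbers simply restate the figures cited in this article:

```python
# Quick check of the PCIe lane figures quoted above (names are ours).

total_lanes = 128        # lanes EPYC exposes in either socket configuration
lanes_per_gpu = 16       # one x16 link per Radeon Instinct card
gpus = 6

gpu_lanes = gpus * lanes_per_gpu      # 96 lanes feeding the six cards
remaining = total_lanes - gpu_lanes   # 32 lanes left over in a single-socket box
print(gpu_lanes, remaining)           # 96 32

# In a two-socket setup, the article says 64 lanes (instead of 32) go to
# intra-chip communication, with each CPU providing 64 lanes of I/O.
two_socket_io_per_cpu = 64
print(two_socket_io_per_cpu * 2)      # 128 lanes of I/O across both CPUs

# For comparison, the Intel figures cited: 40 lanes single-socket, 80 two-socket.
print(40, 40 * 2)                     # 40 80
```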

That said, AMD still has a mountain to climb if it wants to grab a piece of the data center pie. Intel has an effective monopoly on the market, but AMD could appeal to those who want to deploy as many GPUs as possible without shelling out for more expensive multi-socket solutions from Intel.

According to AMD, the first EPYC-based servers will launch in June with widespread support from original equipment manufacturers (OEMs) and channel partners.

Source: AMD
