
Review: AMD Epyc 7742 2P Rome Server

by Tarinder Sandhu on 8 August 2019, 00:01


Quick Link: HEXUS.net/qaecd4



AMD made a long-awaited return to the server CPU market with the release of first-generation Epyc processors in 2017. At that time, with practically no market-share to speak of, the only way was up. It helped that Epyc, built on the foundations of the all-new Zen architecture, proved its performance mettle from the off. Scaling to 32 cores and 64 threads and available in a 2P configuration, general server-application performance was excellent, oftentimes matching or beating the incumbent Xeons from Intel, which had held dominance for over 10 years.

First-generation Epyc, codenamed Naples, is the first of a trilogy of high-performance server processors unveiled by AMD. The roadmap delineates a 2019 follow-up architecture, codenamed Rome, and a subsequent enhancement known as Milan. The trio, unusually, uses the same SP3 socket for platform longevity and ease of upgrade, though not every feature present on newer Epycs is available on every board.

Second-generation Epyc, productised as the 7002 Series, has a number of manifest improvements over the original design. Chief amongst these are the harnessing of the latest Zen 2 architecture and production on a leading-edge 7nm process. The former provides a cleaner SoC implementation and an across-the-board 15 per cent IPC uplift - more in floating-point-intensive programs, thanks to a doubling of the AVX vector width - while TSMC's advanced geometry node enables AMD to build chips with up to double the cores and threads without socket power spiralling. And then there's PCIe 4.0 support for futureproofing, too.

The upshot of this two-pronged approach - architecture and process - is that AMD enjoys substantial performance gains from one Epyc generation to the next. There's practically double the per-socket potential as before, and such wholesale improvements bring the whole 1P vs. 2P server discussion into sharper relief. Those used to buying 2P solutions have often done so out of habit; now AMD offers previously class-leading 2P performance in a 1P package and, for those that really need it, 2P muscle hitherto unseen in this space. It's hard to ignore the compute and I/O capability of up to 64 cores and 128 threads on the latest Zen 2 architecture, quadruple the floating-point performance, and bountiful PCIe 4.0 connectivity. Epyc 7002 Series, therefore, is a well-rounded solution for the mainstream datacenter space across eclectic applications.

Benchmarks show that a top-of-the-line 2P Epyc 7742 system sets new records in standardised server benchmarks, leaving the more expensive, still-high-performance 2P Intel Xeon Platinum 8280 far in the rear-view mirror for heavily-threaded applications. It is doubtful that Intel will be able to match the sheer oomph of the Epyc 7002 Series anytime soon, once one factors in price and overall TCO.

It's not quite as simple as that, however. If IT-buying decisions were based solely on integer and floating-point benchmarks, or projected TCO savings, then AMD would be hitting it out of the park right now. It offers more for less money, continuing a theme that's pervasive in the desktop space. But that's not entirely how the server world works. Intel continues to hold a commanding position in this space, with deep, many-year relationships with key datacenter players and server vendors, and it presents customers with hardware as one part of the server solution rather than focusing on pure silicon.

To that end, with thousands of engineers fine-tuning myriad datacenter applications to run as efficiently as possible on Intel Architecture, the decision to switch a server installation from Xeon to Epyc isn't, in the real world, immediately straightforward. Things move slowly, usually on a multi-year cycle. Rival Intel will also claim that the latest Xeon processors are better suited to emerging workloads such as AI, and that related products such as Optane memory and upcoming accelerators shift the overall performance dynamic further in its favour. Then there's the just-announced Cooper Lake architecture, promising up to 56 socketed cores, a multi-chip implementation, and greater memory bandwidth - leading to Xeon and Epyc becoming more alike than ever before.

Both companies will undoubtedly ramp up the PR and marketing blitz to promote their latest solutions, but, over the next year or so, there's little to stop AMD from gaining more market- and mind-share in the server space with the release of a wide range of scalable Epyc 7002 Series processors. The question isn't whether it will, but how large those gains will be. It will therefore be interesting to see how many design wins AMD gets from the serious server players such as Dell EMC, Lenovo and Hewlett Packard Enterprise, and how many big tech companies publicly announce implementations across HPC, cloud, and enterprise environments.

The bottom line is that AMD had to build interest with first-generation Epyc. It did so by getting into customer conversations that had been elusive for 10 years. Epyc 7002 Series, released today, effectively exchanges that interest for full-on momentum, and takes liberal advantage of Intel's missed CPU schedule to put the performance and TCO pedal firmly to the metal. It ought to be the catalyst for AMD growing its server CPU business to double-digit market share in the near future.

Any company upgrading three-to-five-year-old servers needs to pay AMD Epyc 7002 Series serious attention.

The Good
- Stunning performance
- Expansive I/O capability
- Offers excellent TCO
- SP3 socket compatibility
- Changes server hardware landscape

The Bad
- No AI-specific optimisations

AMD Epyc 7742 2P




At HEXUS, we invite the companies whose products we test to comment on our articles. If any company representatives for the products reviewed choose to respond, we'll publish their commentary here verbatim.

HEXUS Forums :: 11 Comments

Wish CPUs could get away from the old x86 leftovers and create something new. Why not just have an x64 default? Or x128?
> Wish CPUs could get away from the old x86 leftovers and create something new. Why not just have an x64 default? Or x128?

Simply because it would break so much stuff…
But yes - I agree!
> Wish CPUs could get away from the old x86 leftovers and create something new. Why not just have an x64 default? Or x128?

I think that is one of the reasons there is so much interest in RISC-V.

Making a CPU go this fast is hard enough that dealing with the historical baggage of x86 doesn't really add that much to the problem; previous estimates put it at about 5 per cent of die size. Decoding AMD64 instructions is already pathologically hard, so adding the old x86 junk in there doesn't really matter. Using a modern ISA like RISC-V would make much of the decode logic go away, and maybe yield a small performance improvement, but not much. It would probably benefit most in a lower-performance CPU, where the core-size saving matters far more; compared with AMD64, you could then cram a lot more high-ish-performance cores onto a cheap-ish die.

But there is truth to what you say. I presume modern PCs still honor the A20 gate from 286 days, adding a slight delay to the addressing path and limiting clock speeds, but I doubt more than a handful of people would use that for their DOS himem.sys driver these days.

> Simply because it would break so much stuff…
> But yes - I agree!

From the last time I tried running the original Civilization game (which I think expected 16-bit Windows 3.1) on a modern PC, it's already broken.
Bet Intel PR are having a wonderful morning
> Bet Intel PR are having a wonderful morning

Very true.

I really do wonder what slanderous trash they come up with this time.

One day, they're bound to surprise us all and actually be honest… or say nothing. Surely it has to happen?