AMD and Xilinx scalp AI inference world record

by Mark Tyson on 3 October 2018, 12:01

Tags: AMD (NYSE:AMD)

AMD and Xilinx have been working together on a system for accelerating AI processing. The pair, who have a long history of collaboration, announced at the Xilinx Developer Forum in San Jose, California on Tuesday that a system combining AMD EPYC CPUs and Xilinx FPGAs had set a new world record for AI inference.

Xilinx CEO Victor Peng was joined on stage by AMD CTO Mark Papermaster to reveal the achievement. They boasted that a system leveraging two AMD EPYC 7551 server CPUs and eight of the newly announced Xilinx Alveo U250 accelerator cards had scored 30,000 images/sec on GoogLeNet, a widely used convolutional neural network.

As part of the respective companies' vision of heterogeneous system architecture, the parties have worked to optimise drivers and tune performance for interoperability between AMD EPYC CPUs and Xilinx FPGAs. In its press release on the collaborative tech, Xilinx says that the inference performance is "powered by Xilinx ML Suite, which allows developers to optimize and deploy accelerated inference and supports numerous machine learning frameworks such as TensorFlow". Xilinx and AMD also work closely with the CCIX Consortium on cache coherent interconnects for accelerators.
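For readers wondering what the headline metric actually means: images/sec inference throughput is simply the number of images a model classifies divided by the wall-clock time taken. A minimal sketch of such a measurement is below — note that `run_inference` is a hypothetical stand-in for a real accelerated model (this is not Xilinx's ML Suite API), so the numbers it produces illustrate the method only.

```python
import time

def run_inference(batch):
    # Hypothetical stand-in for an accelerated GoogLeNet model:
    # pretends to return one class label per image in the batch.
    return [0 for _ in batch]

def measure_throughput(num_images=10_000, batch_size=32):
    """Return inference throughput in images per second."""
    images = [None] * batch_size  # dummy placeholder "images"
    processed = 0
    start = time.perf_counter()
    while processed < num_images:
        run_inference(images)
        processed += batch_size
    elapsed = time.perf_counter() - start
    return processed / elapsed

if __name__ == "__main__":
    print(f"{measure_throughput():.0f} images/sec")
```

A real benchmark run would also pin down details this sketch ignores: batch size, image resolution, numeric precision, and whether pre/post-processing time is counted — which is precisely the kind of context the record claim leaves unspecified.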

In the wake of the above announcement we can expect more such technological achievements from this pair. Xilinx says there is strong alignment between the AMD roadmap for high-performance EPYC server and graphics processors and its own acceleration platforms, spanning the Alveo accelerator cards as well as the upcoming TSMC 7nm Versal portfolio.



HEXUS Forums :: 10 Comments

Ozaron
Well… okay, but what was the previous record? How is it defined? Single system? Is there a benchmark or is this just a continuous operation and they pulled the number as they felt like it?

Nice to know, but I find it hard to relate to anything. This article almost feels like an advert.

MajorZod
30,000 images per-second inference throughput!

Ozaron
Me: “This article is lacking context”
You: *types single, quoted-from-article statistic without context*

But… 30K images/sec

:p
Ozaron
Me: “This article is lacking context”
You: *types single, quoted-from-article statistic without context*

http://lmgtfy.com/?q=What+is+image+per+second+inference+throughput