
AMD Radeon Instinct MI60 and MI50 accelerators announced

by Mark Tyson on 7 November 2018, 10:01

Tags: AMD (NYSE:AMD)

Quick Link: HEXUS.net/qadzd4


AMD held its Next Horizon event in San Francisco yesterday evening, and you might already have digested the news concerning the 7nm Zen 2 based Epyc processors, written up by the HEXUS Editor. However, there were other interesting announcements by AMD at Next Horizon: it said that its Epyc processors had become available on Amazon Web Services, and it revealed the world's first 7nm data centre GPUs, based on the Vega architecture. We shall provide more info on the GPUs, the new AMD Radeon Instinct MI60 and MI50 accelerators, below.

“The fusion of human instinct and machine intelligence is here.”

AMD sums up its new 7nm GPU accelerator cards as follows - “AMD Radeon Instinct MI60 and MI50 accelerators with supercharged compute performance, high-speed connectivity, fast memory bandwidth and updated ROCm open software platform power the most demanding deep learning, HPC, cloud and rendering applications”.

Designed for the data centre and running on Linux only, these GPUs provide the compute performance required for next-generation deep learning, HPC, cloud computing and rendering applications. For deep learning, the flexible mixed-precision FP16, FP32 and INT4/INT8 capabilities of these GPUs will prove attractive, thinks AMD. A particular claim to fame of the beefier AMD Radeon Instinct MI60 is that it is the world's fastest double precision PCIe 4.0 capable accelerator, delivering up to 7.4 TFLOPS peak FP64 performance (the MI50 can achieve up to 6.7 TFLOPS FP64 peak performance). This performance makes the new GPU accelerators an efficient, cost-effective solution for a variety of deep learning workloads. Furthermore, AMD has enabled high reuse in Virtual Desktop Infrastructure (VDI), Desktop-as-a-Service (DaaS) and cloud environments.
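As a quick sanity check on those figures: 7nm Vega supports half-rate double precision, so the FP64 peak should be roughly half the FP32 peak. Taking the 14.7 TFLOPS FP32 figure AMD quotes for the MI60 (discussed further down), the numbers line up:

```python
# Peak throughput figures quoted for the Radeon Instinct MI60 (TFLOPS).
fp32_peak = 14.7
fp64_peak = 7.4

# 7nm Vega advertises half-rate double precision, so FP64 peak
# should come out at roughly half the FP32 peak.
ratio = fp64_peak / fp32_peak
print(f"FP64/FP32 ratio: {ratio:.2f}")
```

The ratio works out at almost exactly 0.5, consistent with the half-rate FP64 claim (the previous-gen MI25 was only 1/16th rate).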

If you wish to use virtualisation technology to share these GPU resources, AMD is keen to point out that its MxGPU Technology, a hardware-based GPU virtualisation solution, hardens such systems against hackers, delivering an extra level of security for virtualised cloud deployments.

Fast data transfer in the target applications of these products is essential, and AMD states that the two Infinity Fabric links per GPU deliver “up to 200GB/s of peer-to-peer bandwidth – up to 6X faster than PCIe 3.0 alone”. Up to 4 GPUs can be configured into a hive ring (2 hives in 8 GPU servers), adds AMD.
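That "up to 6X" multiplier is easy to check, assuming AMD is comparing against the roughly 32GB/s of bidirectional bandwidth (about 16GB/s each way) that a PCIe 3.0 x16 slot provides:

```python
# AMD's claim: two Infinity Fabric links per GPU give up to 200 GB/s
# of peer-to-peer bandwidth, "up to 6X faster than PCIe 3.0 alone".
if_bandwidth = 200.0   # GB/s, both IF links combined (AMD's figure)
pcie3_x16 = 32.0       # GB/s, approximate PCIe 3.0 x16 bidirectional figure

print(f"Speed-up vs PCIe 3.0 x16: {if_bandwidth / pcie3_x16:.2f}x")
```

That comes out at 6.25x, which matches the marketing claim once rounded down to "up to 6X".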

An Infinity Fabric ring with four Radeon Instinct MI50 cards

The AMD Radeon Instinct MI60 and MI50 accelerators come packing 32GB and 16GB of HBM2 ECC memory, respectively. AMD says that both GPU cards provide full-chip ECC and Reliability, Availability and Serviceability (RAS) technologies, which are important to HPC deployments.

In terms of supporting software, AMD has updated its ROCm open software platform to ROCm 2.0. This new release supports 64-bit Linux operating systems including CentOS, RHEL and Ubuntu, includes updated libraries, and adds support for the latest deep learning frameworks such as TensorFlow 1.11, PyTorch (Caffe2), and others.

The MI60 will begin to ship to data centre customers by the end of this year, alongside the ROCm 2.0 release. Meanwhile, the MI50 won’t become available until the end of Q1 2019. Pricing indications were not provided by AMD.



HEXUS Forums :: 6 Comments

How does this compare with Vega 64 (in TFLOPS)?
Well, Vega 64 has 25.3 TFLOPS (FP16) and this one has 29.5 TFLOPS (FP16), so a 16% improvement?
Or 12.7 TFLOPS (FP32) vs 14.7 for the new one, so 16% better.

These are not great numbers, or am I doing something wrong?


*cleans self up*

Ahem, that's pretty impressive. Liking the Hive system with the Infinity Fabric ring, would like to know more.
darcotech
How does this compare with Vega 64 (in TFLOPS)?
Well, Vega 64 has 25.3 TFLOPS (FP16) and this one has 29.5 TFLOPS (FP16), so a 16% improvement?
Or 12.7 TFLOPS (FP32) vs 14.7 for the new one, so 16% better.

These are not great numbers, or am I doing something wrong?

Think of this card as 14nm Vega but with expanded INT4, INT8 and FP64 throughput and four stacks of HBM2 instead of two:

https://www.anandtech.com/show/13562/amd-announces-radeon-instinct-mi60-mi50-accelerators-powered-by-7nm-vega

With respect to accelerator features, 7nm Vega and the resulting MI60 & MI50 cards differentiates itself from the previous Vega 10-powered MI25 in a few key areas. 7nm Vega brings support for half-rate double precision – up from 1/16th rate – and AMD is supporting new low precision data types as well. These INT8 and INT4 instructions are especially useful for machine learning inferencing, where high precision isn’t necessary, with AMD able to get up to 4x the perf of an FP16/INT16 data type when using the smallest INT4 data type. However it’s not clear from AMD’s presentation how flexible these new data types are – and with what instructions they can be used – which will be important for understanding the full capabilities of the new GPU. All told, AMD is claiming a peak throughput of 7.4 TFLOPS FP64, 14.7 TFLOPS FP32, and 118 TOPS for INT4.

It probably still has 4096 compute cores as before, so most of the FP32 TFLOPS increase probably comes from higher clockspeeds, better IPC, etc.
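That guess is easy to test, assuming the quoted FP32 figures come from the usual peak formula of 2 FLOPS per shader per clock (one fused multiply-add) across 4096 shaders:

```python
# Implied boost clocks if both chips have 4096 shaders and peak
# FP32 TFLOPS = 2 (FMA) x shader_count x clock.
shaders = 4096
vega64_tflops = 12.7   # Vega 64 FP32 peak, as quoted above
mi60_tflops = 14.7     # MI60 FP32 peak, as quoted above

vega64_clock_ghz = vega64_tflops * 1e12 / (2 * shaders) / 1e9
mi60_clock_ghz = mi60_tflops * 1e12 / (2 * shaders) / 1e9

print(f"Vega 64 implied clock: {vega64_clock_ghz:.2f} GHz")
print(f"MI60 implied clock:    {mi60_clock_ghz:.2f} GHz")
```

The implied clocks come out at roughly 1.55GHz and 1.79GHz respectively, so at an unchanged shader count the whole FP32 uplift is explained by clockspeed rather than IPC, exactly what you would expect from a shrink to 7nm.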
IF links on GPU. Come on AMD, roll through the business products, earn some money, and give us a viable IF m-GPU consumer solution on 7nm which smokes RTX for less outlay.
I think that the previous gen stuff was already well optimised for FP16 and FP32 throughput so the selling point here is the power consumption and the new feature set as well as the reduced cost of ownership.

There is a big issue about CUDA being like herpes in this arena and so the overall cost of switching is a lot more than just the cost of the hardware. Nvidia also have some other software advantage from what I can tell which is likely a function of market share.

Nvidia have however shot themselves in the foot. Anyone checked out the latest Nvidia drivers' T&Cs for consumer GPUs? The licence agreement stops them being used in datacentres - something which is open to definition and could really just say “not for commercial number crunching”. This may put off a lot of small players and turn them to AMD. In which case the software development rate will increase massively, as it's the smaller companies that actually innovate, whereas Nvidia… well… let's just say their development methods are a little old school.