
AMD Multiuser GPU virtualization demonstrated at VMworld 2015

by Mark Tyson on 1 September 2015, 11:32

Tags: AMD (NYSE:AMD)


AMD has announced the AMD Multiuser GPU at VMworld 2015 in San Francisco. This is the "world's first hardware-based GPU virtualization solution," claims AMD. The system allows up to 15 users to share a single AMD GPU. It is based upon SR-IOV (Single Root I/O Virtualization), an open industry standard, and is designed to "overcome the limitations of software-based virtualization" offered by other vendors.
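
For the curious, SR-IOV works by having one physical PCIe device advertise several independent 'virtual functions' that the hypervisor can hand to separate VMs. As a rough sketch of the mechanism only, not AMD's actual tooling, this is what enabling virtual functions looks like through the generic Linux sysfs interface (with a made-up PCI address):

```python
# Rough sketch of the SR-IOV mechanism on a generic Linux host, NOT
# AMD's configuration tool. An SR-IOV-capable PCIe device (the
# "physical function") can be told to expose N "virtual functions",
# each of which the hypervisor can assign to a different VM. The PCI
# address below is a made-up placeholder.
from pathlib import Path

PF = Path("/sys/bus/pci/devices/0000:03:00.0")  # hypothetical physical function

def enable_vfs(count: int) -> None:
    total = int((PF / "sriov_totalvfs").read_text())  # hardware's VF ceiling
    if count > total:
        raise ValueError(f"device supports at most {total} VFs")
    (PF / "sriov_numvfs").write_text("0")         # kernel requires a reset first
    (PF / "sriov_numvfs").write_text(str(count))  # create the virtual functions

enable_vfs(15)  # one VF per user, matching AMD's quoted 15-user maximum
```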

AMD Multiuser GPU is said to enable "consistent, predictable, and secure performance from your virtualized workstation with the world’s first hardware-based virtualized GPU solution." Users sharing the GPUs will enjoy workstation-class experiences with full ISV certifications and local desktop-like performance. With AMD's hardware virtualization, users won't be limited in what they can do in the virtualised environment, as they have full access to native AMD display drivers for OpenGL, DirectX and OpenCL acceleration.

Thanks to its hardware-based model, AMD says that its system is much more secure than rival virtualization systems. The company claims it is "extremely difficult for a hacker to break in at the hardware level," whereas software-based virtualization can be breached to gain unauthorised access to guest VMs.

As mentioned in the intro, a single GPU can handle up to 15 client machine users. For more intensive use, such as graphics design, fewer users are supported per GPU. AMD has made a chart of typical scenarios, below. In any use case, the hardware partitioning means no single user can hog the whole GPU.
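
To make the no-hogging point concrete, a toy calculation (with a made-up board throughput figure, not AMD's specs) shows the guaranteed per-user slice at different user counts:

```python
# Toy arithmetic for fixed hardware partitioning (illustrative numbers,
# not AMD's specs): with hardware time-slicing, each of n users gets a
# guaranteed ~1/n slice of the GPU, so no session can starve the rest.
BOARD_TFLOPS = 5.0  # hypothetical board throughput

for users in (2, 6, 10, 15):  # heavy design work down to light task use
    share = BOARD_TFLOPS / users
    print(f"{users:2d} users -> ~{share:.2f} TFLOPS guaranteed per user")
```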

AMD says that Multiuser GPU is easy to set up. A compact and efficient hypervisor driver provides the UI used to implement and configure the AMD Multiuser GPU. It works in environments using VMware vSphere/ESXi 5.5 and up, with support for remote protocols such as Horizon View, Citrix XenDesktop, Teradici Workstation Host Software, and others. AMD Multiuser GPU is "coming soon".

While AMD doesn't mention the rival Nvidia GRID technology directly in its references to 'other software-based virtualization' systems, at least one chart does; I've embedded it below. The latest Nvidia GRID 2.0 virtualization tech doubled its maximum users per server, so it might compare better now.



HEXUS Forums :: 7 Comments

This is for backend computation right?

We're no further towards a single PC driving an entire office (or part of an office) directly - e.g., 8/16 monitors for 8 users in a single system, 8 keyboards/mice into the system, a 16/32 core CPU, a redundant, reliable storage system and 64GB-256GB RAM.

Have one of these systems per “office desk unit (8 users)” in an office, and drop hardware support costs massively…
sykobee
This is for backend computation right?

We're no further towards a single PC driving an entire office (or part of an office) directly - e.g., 8/16 monitors for 8 users in a single system, 8 keyboards/mice into the system, a 16/32 core CPU, a redundant, reliable storage system and 64GB-256GB RAM.

Have one of these systems per “office desk unit (8 users)” in an office, and drop hardware support costs massively…

This is (along with NVIDIA GRID) designed primarily for VDI/SBC, so a single server can support multiple GPU-accelerated users. Having a single PC acting as a single point of failure for that number of users would be pretty unwise and impractical imo. Giving everyone a cheap zero/thin client and getting them to connect over the LAN to a set of VDI/SBC servers means that everyone has a resilient session (and keeps everything centralised for security etc.). If it's for a small office then you could always drop a pair of tower servers in it and do the same thing. For most businesses though, hardware costs aren't usually the killer, it's software licensing and keeping compliance.
sykobee
This is for backend computation right?
Wrong, OpenCL is part of the benefit, but the article does say "they have full access to native AMD display drivers for OpenGL, DirectX" and if their target market is CAD/CAM then it's graphics horsepower that's required (rather than a graphics-based number cruncher).

If you think about it this is a pretty neat play. After all it's going to be easier to justify that screaming professional graphics card costing 1,000's (or even tens of thousands) if you can say to the bean counters that it's a shared resource. But, as I.set correctly points out, the problem with that is that the software vendors seem to have traditionally penalised shared environments. And if you think domestic DRM is bad, you ain't seen nuttin' compared to the hoops needed to license some commercial software packages… :wallbash:

I'd be fascinated to understand (more research required) how this'll work in practice. So is the video signal going over DisplayPort or GBit? What really interests me, though, would be if this tech works its way down to the home market. The idea that I can build a real barn-burner of a PC and host virtualized Windows and Linux on it AND game in that virtual environment is something I like very, very much. :drool:
sykobee
This is for backend computation right?

We're no further towards a single PC driving an entire office (or part of an office) directly - e.g., 8/16 monitors for 8 users in a single system, 8 keyboards/mice into the system, a 16/32 core CPU, a redundant, reliable storage system and 64GB-256GB RAM.

Have one of these systems per “office desk unit (8 users)” in an office, and drop hardware support costs massively…

What you're talking about is the way offices had computers in the 70's and early 80's. Everyone had a terminal, which was basically a monitor and keyboard connected to a mainframe computer, usually running a UNIX derivative.

During the 80's it became more cost effective to ditch the large (and very expensive) mainframe and give all office workers their own computers.

But nowadays it looks like you're correct in that the performance of hardware over software is such that you could get mainframe-like performance from a single workstation-class computer. The problem is still cost, though: individual computers, from a business perspective, are about £100-ish each (in bulk) and draw about 35W each.
A workstation for, say, 15 users would easily cost £3000+ and draw 250W+, so it all boils down to: will you save enough money on energy bills long term to offset the initial cost of purchase, and is the support nightmare of having 15 non-productive workers per hardware/software failure, instead of 1, worth the risk?
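
Running those numbers with a couple of extra assumptions of my own (2,000 powered-on hours a year and a 12p/kWh tariff) makes the break-even point concrete:

```python
# Back-of-envelope check of the argument above, using the post's own
# figures plus two assumptions of mine: 2,000 powered-on hours a year
# and a 0.12 GBP/kWh tariff (rough 2015 UK business rate).
USERS = 15
PC_COST, PC_WATTS = 100, 35    # per desktop PC, bought in bulk
WS_COST, WS_WATTS = 3000, 250  # one shared workstation
HOURS, TARIFF = 2000, 0.12     # hours/year, GBP per kWh

pcs_energy = USERS * PC_WATTS / 1000 * HOURS * TARIFF  # GBP/year, 15 PCs
ws_energy = WS_WATTS / 1000 * HOURS * TARIFF           # GBP/year, workstation
capex_gap = WS_COST - USERS * PC_COST                  # extra upfront outlay
saving = pcs_energy - ws_energy                        # energy saved per year

print(f"energy saving per year: GBP {saving:.0f}")        # ~GBP 66
print(f"extra upfront cost:     GBP {capex_gap}")         # GBP 1500
print(f"break-even after {capex_gap / saving:.0f} years") # ~23 years
```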

TBH I think offices will be better off sticking with the current one-PC-per-person approach; it's a little more expensive in terms of energy but cheaper in initial purchase and safer in terms of support.
crossy
sykobee
This is for backend computation right?
Wrong, OpenCL is part of the benefit, but the article does say "they have full access to native AMD display drivers for OpenGL, DirectX" and if their target market is CAD/CAM then it's graphics horsepower that's required (rather than a graphics-based number cruncher).

If you think about it this is a pretty neat play. After all it's going to be easier to justify that screaming professional graphics card costing 1,000's (or even tens of thousands) if you can say to the bean counters that it's a shared resource. But, as I.set correctly points out, the problem with that is that the software vendors seem to have traditionally penalised shared environments. And if you think domestic DRM is bad, you ain't seen nuttin' compared to the hoops needed to license some commercial software packages… :wallbash:

I'd be fascinated to understand (more research required) how this'll work in practice. So is the video signal going over DisplayPort or GBit? What really interests me, though, would be if this tech works its way down to the home market. The idea that I can build a real barn-burner of a PC and host virtualized Windows and Linux on it AND game in that virtual environment is something I like very, very much. :drool:

You're right on the commercial software compliance part. Trying to deal with MS when it comes to licensing Office and Windows in a virtual environment will make you want to cry (MS do everything in their power now to lock you into SA or Office 365). Then there's Adobe, AutoDesk etc. Normally I have to use something like AppSense or RES WSM.

As for in practice, the setup you want can be done. The multi-GPU cards designed for this stuff, like GRID, don't have any external display connectors; instead everything is done through the remoting protocol to the VM, like PCoIP or ICA. Only certain cards support virtualising the GPU, but many more GPUs can be passed through the hypervisor to the virtual machine with a one-to-one PCI mapping (vSphere and XenServer both support it). Hell, one of my main demos to customers is playing Tomb Raider on a Win 8 VM over our corporate LAN, with the server sat in our test lab at the other end of the building. Really the limitation is latency, well, that and the relevant VMware/Citrix licensing.
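
For anyone wanting to poke at the one-to-one PCI mapping described above on their own kit, here's a rough sketch of the equivalent mechanism on a Linux/KVM host; vSphere and XenServer use their own tooling, and the address and IDs below are placeholders.

```python
# Sketch of one-to-one PCI passthrough on a Linux/KVM host, offered as
# an analogue of the vSphere/XenServer passthrough mentioned above
# (those platforms use their own tooling). The GPU is unbound from its
# host driver and handed to vfio-pci, after which a single VM owns the
# whole device. The PCI address and vendor/device IDs are placeholders.
from pathlib import Path

ADDR = "0000:83:00.0"        # hypothetical GPU PCI address
VENDOR_DEVICE = "1002 6929"  # hypothetical AMD vendor/device ID pair
dev = Path(f"/sys/bus/pci/devices/{ADDR}")

# 1. Detach the GPU from whatever host driver currently owns it.
driver_link = dev / "driver"
if driver_link.exists():
    (driver_link / "unbind").write_text(ADDR)

# 2. Let vfio-pci claim devices with this vendor/device ID; QEMU/libvirt
#    can then map the whole function into one virtual machine.
Path("/sys/bus/pci/drivers/vfio-pci/new_id").write_text(VENDOR_DEVICE)
```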