
NVIDIA VGX - GPU cloud computing announced

by Tarinder Sandhu on 16 May 2012, 09:33

Tags: NVIDIA (NASDAQ:NVDA)


GPUs for the cloud

In news that could have far-reaching implications for the way in which graphics technology and power are distributed to a wide range of users in the near future, NVIDIA boss Jen-Hsun Huang, during his keynote speech at GTC 2012, introduced what is deemed to be the world's first virtualised GPU, designed primarily with GPU-accelerated cloud computing in mind.

Let's take a step back for a second and digest the fundamentals of what NVIDIA is introducing. At its core is the idea of virtualisation, which for high-end Intel and AMD CPUs means sharing the resources of the processor across a number of independent operating systems. Driving up efficiency in the data centre, each operating system - read virtual machine - is allocated runtime on the processor, meaning multiple clients can, and do, run off a single chip. The job of balancing all of these virtual machines and providing the correct knobs for seamless usage is down to something called a hypervisor. Virtualisation for server-grade CPUs, then, is a well-established, robust business model.
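The time-slicing that underpins CPU virtualisation can be illustrated with a minimal sketch. This is not any real hypervisor's API — the function and names below are purely illustrative of the round-robin allocation described above, where each virtual machine is granted runtime on the shared processor in turn:

```python
# Minimal sketch of a hypervisor time-slicing one physical CPU across
# virtual machines; names are illustrative, not any real hypervisor API.
from collections import deque

def run_hypervisor(vms, quantum_ms, total_ms):
    """Round-robin scheduler: each VM gets up to `quantum_ms` per turn."""
    runtime = {vm: 0 for vm in vms}
    queue = deque(vms)
    elapsed = 0
    while elapsed < total_ms:
        vm = queue.popleft()
        slice_ms = min(quantum_ms, total_ms - elapsed)
        runtime[vm] += slice_ms
        elapsed += slice_ms
        queue.append(vm)   # back of the queue for its next turn
    return runtime

shares = run_hypervisor(["vm0", "vm1", "vm2"], quantum_ms=10, total_ms=3000)
print(shares)  # each VM receives 1000ms of the 3000ms window
```

Real hypervisors add priorities, I/O handling and memory isolation on top, but the core idea — many clients sharing one chip via scheduled slices — is what NVIDIA is now extending to the GPU.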

Now, coming back to NVIDIA, the massive parallel computation power of the latest GPUs makes them, in theory, ideal candidates for virtualisation - each user, connected remotely (or, in a wider parlance, through the 'cloud'), could access the power of data centre-installed GPUs and run their own workload on them through a virtual desktop infrastructure (VDI). Getting to the gist of NVIDIA's announcement, it is adding a set of extensions to its newest Kepler graphics architecture that enables just this type of multiple-client sharing. These extensions and their management are known as VGX.

VGX extensions and hypervisor

Keeping track of multiple virtualised environments for a CPU is relatively easy. It has been mastered by companies such as Citrix through a range of industry-proven hypervisors. GPUs, on the other hand, can have thousands of cores and complex caching arrangements that work just fine in a single-client situation but are much more difficult to slice up/virtualise for multiple clients. NVIDIA's VGX gets around this through some clever arbitration and subsequently hooks up to commercial hypervisors such as XenServer from Citrix for effective virtualisation of the GPU.

The point here, as we understand it, is to enable the latest NVIDIA GPUs to be shared across multiple users in the same way the resources of server CPUs are currently virtualised across many workplace-based PCs. Should it work as advertised, the end result is a situation where a 'cloud-based' computer - that is, an Internet-connected device located potentially anywhere in the world - is able to tap into significant CPU and GPU power that's physically housed someplace else.
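To make the arbitration idea concrete, here is an illustrative sketch — emphatically not NVIDIA's actual VGX interface, and every name in it is hypothetical — of an arbiter that multiplexes work from several remote clients onto one shared GPU by granting time to whichever client has used it least:

```python
# Illustrative sketch only -- not NVIDIA's actual VGX interface. Models an
# arbiter multiplexing several remote clients onto one physical GPU.
class GpuArbiter:
    def __init__(self, clients):
        # accumulated GPU milliseconds per client (the fairness metric)
        self.usage = {c: 0 for c in clients}

    def submit(self, client, cost_ms):
        """Record completed GPU work; a real arbiter would enqueue it."""
        self.usage[client] += cost_ms

    def next_client(self):
        """Grant the GPU to whichever client has used it least so far."""
        return min(self.usage, key=self.usage.get)

arb = GpuArbiter(["alice", "bob", "carol"])
arb.submit("alice", 30)
arb.submit("bob", 10)
print(arb.next_client())  # carol: she has used no GPU time yet
```

The real product must also partition GPU memory and contexts between clients, which is precisely the hard part the VGX extensions and the commercial hypervisors are said to handle.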

Low-latency access

NVIDIA promises low-latency access to the virtualised GPUs, meaning that any remote-connected user, almost irrespective of the specifications of their system, should have a GPU-accelerated computing experience that's very similar to that of a standalone machine featuring a dedicated graphics card. While VGX is ostensibly designed for the professional market, such an approach opens up a tantalising opportunity of virtualised GPU game streaming (ahem, GeForce Grid, ahem.)

The first generation of VGX-equipped NVIDIA boards are to house four GPUs on a PCI-Express-connected PCB. Each GPU will have 192 CUDA cores and, satiating the needs of multiple access, 4GB of memory apiece, or 16GB in total. NVIDIA reckons that each 768-core board will provide the necessary virtualised GPU gubbins for 100 virtualised PCs. Passively cooled and slotting into regular server systems, just like current Tesla cards, these Kepler boards are decidedly low-end - the GeForce GTX 680, for example, has 1,536 cores all to itself. With this announcement NVIDIA is merely dipping its toe into the water, and it's reasonable to assume that more-powerful Kepler cards, imbued with this VGX GPU virtualisation trickery, are being readied for deployment.
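The per-user share of those board-level figures is worth a quick back-of-envelope calculation, using only the numbers quoted above:

```python
# Back-of-envelope figures from the first-gen VGX board spec above.
gpus_per_board = 4
cores_per_gpu = 192
mem_per_gpu_gb = 4
vdi_users_per_board = 100    # NVIDIA's quoted figure

total_cores = gpus_per_board * cores_per_gpu           # 768 CUDA cores
total_mem_gb = gpus_per_board * mem_per_gpu_gb         # 16GB
users_per_gpu = vdi_users_per_board / gpus_per_board   # 25 users per GPU
mem_per_user_mb = total_mem_gb * 1024 / vdi_users_per_board

print(total_cores, total_mem_gb, users_per_gpu, round(mem_per_user_mb))
# 768 cores and 16GB shared out leaves roughly 164MB per virtualised PC
```

Each virtual desktop therefore gets only a sliver of GPU - fine for accelerated VDI, but a long way from a dedicated GTX 680.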

NVIDIA's VGX marks an interesting shift in the way graphics will play a part in future cloud computing. We wait with bated breath to see just how it turns out.



HEXUS Forums :: 7 Comments

Nice. We've already been using grid enabled GPU clusters, this should make them a bit better.

There's a lot more useful compute stuff on Kepler (esp kepler2) here:
http://www.theregister.co.uk/2012/05/15/nvidia_kepler_tesla_gpu_revealed/ [theregister.co.uk]
Interesting. We've been using RemoteFX on Hyper-V. Good for general Aero whizziness to keep end users happy in VDI-land, plus good for DirectX stuff like Google Earth. Lack of OpenGL support causes us headaches, though, for some CAD applications, so if this can work without being tied to DirectX it will be interesting.

I don't want to be a buzzkill on the cloud hype-machine, but I'm not sure this will be a 'cloudy' technology. I would expect it to be more suitable for WAN to be honest. 3D graphics are a bandwidth pig.
Good news for the likes of Onlive.
Platinum wrote:
Good news for the likes of Onlive.

Honestly I don't see how other than to make their infrastructure potentially cheaper. VGX is good news for people wanting to inject 3D into virtual machines, i.e. it is competition for VMWare's PCoverIP and MS's RemoteFX. We're talking ultrafast LAN topology SAN hosted environments here. Whether and how it in any way allows rapid delivery to the end user 'in the cloud' is another issue entirely.
kalniel wrote:
Nice. We've already been using grid enabled GPU clusters, this should make them a bit better.

There's a lot more useful compute stuff on Kepler (esp kepler2) here:
http://www.theregister.co.uk/2012/05/15/nvidia_kepler_tesla_gpu_revealed/ [theregister.co.uk]

Interesting point about PCI-E 3.0 and Intel Xeon E5-based systems in that article, but AFAIK ORNL is going to use Kepler-based cards (probably the K20) in Titan. This is an Interlagos-based system and AFAIK it does not use PCI-E 3.0 (I could be missing something here, though).