
Review: NVIDIA's GPGPU ambition coming to fruition?

by Parm Mann on 2 December 2008, 14:18

Tags: NVIDIA (NASDAQ:NVDA)

Summary

The parallel processing capability of the GPU is clearly a hardware feature that can, in the right scenarios, significantly boost all-round system performance. But we already knew that, right?

NVIDIA continues to advocate the non-gaming credentials of its GeForce 8, 9, and GTX 200-series graphics cards, and - as shown in our results - the added value is clearly visible when you're able to find an application that makes use of CUDA acceleration.

But herein lies the problem. Non-gaming mainstream applications that tap into the power of a GeForce GPU remain few and far between. Unless you're using CUDA-accelerated applications such as Adobe's Creative Suite 4 or playing a PhysX-enabled game, you'll be unable to realise the potential added value of your graphics card.

It's worth noting, also, that these CUDA-accelerated apps don't come free of charge. They may offer greater performance, but they do so at a cost. What NVIDIA could really use is a set of freeware CUDA-accelerated applications that could, ultimately, make CUDA usage a little more widespread.

There's no denying that CUDA acceleration can and should be used to make the most of the raw GPU power so many systems now contain - doing so offers performance gains that can't be ignored. Unfortunately, it isn't quite that simple, and getting programmers to adopt the CUDA development tools remains an uphill struggle.
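The sort of workload CUDA accelerates is the embarrassingly parallel kind, where every output element can be computed independently of the others. As a rough sketch - plain Python with an illustrative SAXPY-style function, not code from any of the applications tested - the loop body below is exactly what a CUDA kernel expresses, except that on the GPU thousands of threads each compute one element concurrently rather than one CPU core stepping through them in turn:

```python
# Illustrative sketch: a data-parallel "map" of the kind CUDA accelerates.
# Each output element depends only on its own inputs, so every iteration
# could run on a separate GPU thread at the same time.

def saxpy(a, x, y):
    """Compute a*x[i] + y[i] for every i - an embarrassingly parallel loop."""
    return [a * xi + yi for xi, yi in zip(x, y)]

x = [float(i) for i in range(8)]   # 0.0, 1.0, ..., 7.0
y = [1.0] * 8
result = saxpy(2.0, x, y)
print(result)  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

This is also why the gains only show up in software written for it: an application has to be restructured to expose this kind of independent, per-element work before the GPU's many cores can be brought to bear.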

Then there's AMD and its Radeon series of graphics cards. AMD hasn't yet begun vehemently hammering home the importance of GPGPU, but it does have plans to make its GPUs do more. Rather than create a quasi-proprietary programming environment of its own - CUDA isn't proprietary per se, but at present it works only on select NVIDIA cards - AMD is playing the waiting game, holding out for the arrival of Microsoft's Compute Shader in the forthcoming DirectX 11 and seeing how OpenCL plays out.

DirectX 11's Compute Shader will allow developers to access the computational capability of the GPU through the familiar Direct3D APIs - a Microsoft-backed method we feel will be hugely popular and could spell the end of CUDA when it arrives in late 2009 or early 2010.

General-purpose computing on graphics processing units holds plenty of promise, but it's still some way from being readily accessible to the everyday user.



HEXUS Forums :: 3 Comments

AMD are showing a similar kind of angle to this, apparently; however, CUDA has had slightly longer to mature and get its features into products.

http://www.custompc.co.uk/news/605181/amd-to-integrate-stream-gpgpu-features-into-catalyst-812.html
I would be interested to see this compared to an actual physx card.
One would imagine the far greater power of the GTX 260 compared to the 8800GT would far outweigh any effect of unoptimised software. I mean, it costs more than twice as much and has nearly twice as many cores.

Hmmm..