
Review: Three-screen GeForce GTX 680 vs. Radeon HD 7970

by Tarinder Sandhu on 2 April 2012, 10:09

Tags: NVIDIA (NASDAQ:NVDA), AMD (NYSE:AMD)

Quick Link: HEXUS.net/qabemf


Power-draw, and concluding thoughts

Here's the usual system-wide power-draw graph that you've seen many times before in HEXUS graphics card reviews. We use Batman: Arkham City to derive the full-GPU load.

The difference this time is that there's an extra bar denoting power-draw when three screens are connected and running on the desktop. The GPUs' clock frequency increases from the one-screen default in order to drive three panels. The Radeon HD 7970's power-draw rises from 44W (one screen) to 71W (three screens); the GTX 680 fares better, rising only marginally. This isn't a deal-maker or deal-breaker, but do bear it in mind if the plan is to run three screens on a long-term basis.
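As a quick sanity check on the figures above (illustrative arithmetic only, using the review's own measured numbers), the three-screen desktop costs the Radeon a substantial chunk of extra power:

```python
# System-wide desktop power-draw figures from this review (watts).
one_screen_w = 44    # Radeon HD 7970, single screen on the desktop
three_screen_w = 71  # Radeon HD 7970, three screens on the desktop

delta_w = three_screen_w - one_screen_w
percent_increase = 100 * delta_w / one_screen_w
print(delta_w, round(percent_increase, 1))  # prints: 27 61.4
```

That's roughly 27W extra at idle, or about a 61 per cent increase over the single-screen figure — small in absolute terms, but it adds up over months of three-screen use.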

Concluding thoughts

Purchasing the very best graphics card today means choosing between the GeForce GTX 680 and Radeon HD 7970. Both cards provide excellent performance at a full-HD resolution; indeed, a strong case can be made for saying they're overkill for monitors with a native 1,920x1,080 resolution.

30in, 2,560x1,600-resolution monitors remain expensive, and while popular technology sites such as HEXUS use them for evaluating graphics cards, the real-world implications aren't clear cut. Of greater import, perhaps, is seeing how these monster GPUs perform when tasked with running three full-HD monitors for truly widescreen gaming.

A trio of said monitors costs about the same as either of these £400 cards, and it's easy to set both cards to push pixels to an effective 5,760x1,080-resolution display. At these settings, our high-quality benchmarks show that the GeForce GTX 680 has the edge over the Radeon HD 7970. The results are a little surprising given how NVIDIA has engineered the card - one would assume a 256-bit memory bus, 2GB framebuffer and comparatively-low ROP throughput would cause it to stutter at such settings.
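To put the resolutions in this review into perspective, a short sketch of the raw pixel counts involved (simple arithmetic, not from the review itself) shows just how much extra work a three-screen surface demands:

```python
# Pixel counts for the display configurations discussed in the review.
resolutions = {
    "1,920x1,080 (single full-HD)": (1920, 1080),
    "2,560x1,600 (30in panel)": (2560, 1600),
    "5,760x1,080 (three full-HD)": (5760, 1080),
}

full_hd = 1920 * 1080  # baseline: one full-HD screen
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / full_hd:.2f}x full-HD)")
```

Three full-HD panels work out to exactly 3x the pixels of a single screen, and roughly 1.5x the pixels of a 30in 2,560x1,600 monitor - which is why sustaining high framerates at 5,760x1,080 is such a stern test of these GPUs.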

An examination of the frame-by-frame results with the three-screen setup indicates that the two high-end GPUs generally perform in a similar fashion: framerate drop-offs occur at about the same time and the overall gaming experience feels in sync.

Yet even these multi-billion-transistor chips cannot produce the gold standard of 60fps on three screens. AMD and NVIDIA know that the performance-sapping effects of multi-sample antialiasing are hard to overcome if the GPU is required to spit out a frame roughly every 16ms (60fps), and this is why both companies have put concerted effort into alternative, faster anti-aliasing techniques such as MLAA and FXAA... but that's an article for a different day.
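The 16ms figure above follows directly from the target framerate - each frame's rendering budget is simply the reciprocal of the framerate. A minimal sketch of that conversion (illustrative only; the helper name is our own):

```python
# Convert a target framerate into the per-frame time budget the GPU must hit.
def frame_budget_ms(target_fps: float) -> float:
    """Milliseconds available to render each frame at the given framerate."""
    return 1000.0 / target_fps

print(frame_budget_ms(60))  # ~16.7 ms per frame for the 60fps gold standard
print(frame_budget_ms(30))  # ~33.3 ms per frame at a more forgiving 30fps
```

Every millisecond spent on multi-sample antialiasing eats into that ~16.7ms budget, which is why cheaper post-process techniques such as MLAA and FXAA are so attractive at these resolutions.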

The bottom line is that the very best graphics cards from NVIDIA and AMD have just about enough oomph to drive three full-HD screens for high-quality, high-resolution gaming. Building a system right now with the express aim of running this kind of three-screen setup, we'd pick the GeForce GTX 680 card, though the Radeon HD 7970 runs it a fairly close second.



HEXUS Forums :: 17 Comments

It seems most of the games cannot hit 30FPS on average with either card!

It looks like the dual GPU cards will be the only ones really capable of smooth gameplay using three monitors.
Eyefinity can be easily switched to 1, 2 or 3 screens by a simple keypress after an “add preset” which takes all of 15 seconds to set up - so yep the idle power draw is meaningless for 3 screens (although I did notice it was in the Nvidia reviewers guide). I read that it's much more fiddly with the GeForce cards, i.e. it has to be set manually every time. Is this true?
Eyefinity can be easily switched to 1, 2 or 3 screens by a simple keypress after an “add preset” which takes all of 15 seconds to set up - so yep the idle power draw is meaningless for 3 screens

I don't think that's true, actually - a lot of three screen usage is for productivity purposes, during which the graphics card will, to all intents and purposes, be idle. The real question for me is whether the AMD card has a similarly high power draw when the three screens are arranged in extended desktop, or just when they're set as an eyefinity surface (i.e. an apples to apples comparison). If nvidia really do offer a significant power saving when the card is idle - i.e. for 3 screen productivity usage - that's quite a selling point to the right market.

All we need is for nv to bring that tech to the low end market (which they won't be doing in this generation, of course). A discrete card, passively cooled, with 3x HDMI and capable of driving three monitors off passive dongles, could be quite appealing…
Out of interest has anyone confirmed the following though:

http://www.techpowerup.com/162504/NVIDIA-s-New-AA-Algo-is-TXAA-Adaptive-V-Sync-and-New-3DVision-Surround-Detailed.html

“The new 3D Vision Surround is said to work in conjunction with Adaptive V-Sync to ensure the center display has higher frame-rate (since it's at the focus of your central vision), at the expense of the frame-rates of the two side displays (since they're mostly at your peripheral vision). This ensures there's a balanced, high-performance experience with multi-monitor gaming setups.”
Yeah I read about that before. Without the marketing spiel it's basically saying we cut the framerate to peripheral displays to make our card look better. I think it's a pretty bad idea for many of the games which use peripheral displays as some have more movement on them than the main display (looking out the side of a car for example). But for some games it might be OK.

edit: Thanks Hexus for including the FPS/time graphs :) (might help to label the X-axis as ‘time’)