
Review: NVIDIA GeForce GTX 690

by Tarinder Sandhu on 3 May 2012, 14:02




I am become Death, the destroyer of GPUs

NVIDIA pulled out all the stops when it released the high-end GeForce GTX 680 graphics card a couple of months ago. Well-designed, fast and relatively cool, our conclusion said it all: "the GeForce GTX 680 (Kepler GK104) is the Fermi architecture polished to a mirror finish. It is the best high-end GPU available right now, dethroning the AMD Radeon HD 7970 in the process."

And while we continue to believe that the GTX 680 is the best consumer GPU in the business, one can successfully argue that it isn't the fastest graphics card in the world. These two apparently contradictory statements are reconciled by understanding that a graphics card can house two GPUs, just as with the GeForce GTX 590 and Radeon HD 6990, and two 'last-gen' GPUs have enough muscle to sneak past the GTX 680 when evaluated over our six real-world games.

The engineering folk at NVIDIA have the world's fastest consumer GPU, granted, but they now covet the halo-inducing title of world's fastest graphics card, especially as rival AMD has yet to release a dual-GPU version of its Radeon HD 7900-series line. So what's the recipe for graphics-card domination? Well, take a GeForce GTX 680 GPU, grab another, place them both on one board, link them together via a high-speed PCIe 3.0 bridge/switch, slap on a super-efficient cooler, and then present this concoction with some aesthetic flair. Easy, huh?

GeForce GTX 690 4GB

NVIDIA boss Jen-Hsun Huang presented this very graphics abomination last Saturday at the GeForce LAN Gaming Festival in Shanghai, China. It is known as the GeForce GTX 690 4GB, and it would be egregiously remiss of us not to take a closer look.

Let's start off with the practicalities. Today's high-end GPUs tend to pull, on average, no more than 250W when run at the speeds prescribed by the manufacturer. This figure has remained fairly constant for a number of years now, as it strikes a solid balance between performance, power requirements, noise and cooling. Dual-GPU cards tend to have larger PCBs and custom, enhanced cooling, such that 350W or so can be dissipated. You can see an obvious problem for engineers looking to launch dual-GPU monsters: the maximum board power isn't double that of a single GPU's, meaning it's rare to place two premier GPUs on one board. This is why dual-GPU cards often reduce frequencies and perhaps snip the architecture. Let's see how much of a concession NVIDIA has made this time around.

GPU                   GeForce GTX 690   GeForce GTX 680   GeForce GTX 590   GeForce GTX 580
Memory                4,096MB           2,048MB           3,072MB           1,536MB
DX API                11.1              11.1              11                11
Process               28nm              28nm              40nm              40nm
Transistors           3.54bn x 2        3.54bn            3.0bn x 2         3.0bn
Die Size              294mm² x 2        294mm²            520mm² x 2        520mm²
Processors            1,536 x 2         1,536             512 x 2           512
Texture Units         128 x 2           128               64 x 2            64
ROP Units             32 x 2            32                48 x 2            48
GPU Clock (MHz)       915 (1,019)       1,006 (1,058)     607               772
Shader Clock (MHz)    915 (1,019)       1,006 (1,058)     1,215             1,544
GFLOPS                5,621             3,090             2,488             1,581
Memory Clock (MHz)    6,008             6,008             3,414             4,008
Memory Bus (bits)     256 x 2           256               384 x 2           384
Max bandwidth (GB/s)  192.2 x 2         192.2             163.9 x 2         192.4
Power Connectors      8+8               6+6               8+8               8+6
TDP (watts)           300               195               365               244
GFLOPS per watt       18.74             15.84             6.82              6.48
Release MSRP          $999              $499              $699              $499
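The table's GFLOPS figures follow directly from the shader counts and clocks above: each shader retires up to two floating-point operations per cycle (a fused multiply-add). A quick sketch to reproduce them, noting that Fermi cards (GTX 590/580) rate throughput at the doubled shader "hot" clock while Kepler cards (GTX 690/680) use the base GPU clock:

```python
# Sketch: reproduce the table's theoretical GFLOPS figures.
# FLOPS = shaders x 2 ops/cycle x clock, times the number of GPUs on the board.
def gflops(shaders, clock_mhz, gpus=1):
    return shaders * 2 * clock_mhz / 1000 * gpus

cards = {
    "GTX 690": gflops(1536, 915, gpus=2),   # Kepler, base clock; table: 5,621
    "GTX 680": gflops(1536, 1006),          # Kepler, base clock; table: 3,090
    "GTX 590": gflops(512, 1215, gpus=2),   # Fermi, shader clock; table: 2,488
    "GTX 580": gflops(512, 1544),           # Fermi, shader clock; table: 1,581
}
for name, g in cards.items():
    print(f"{name}: {int(g):,} GFLOPS")
```

Note that GPU Boost means the Kepler cards will, in practice, spend much of their time above these base-clock numbers.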

The Generations Game

The table shows the best single- and dual-GPU cards from the present and previous GeForce generations. The GTX 590 uses a couple of GTX 580 GPUs with the complete architecture intact. However, the aforementioned need to keep within a sensible power budget results in the dual-GPU card clocking in at fundamentally lower speeds, represented by the core operating over 20 per cent slower and the card's memory chugging along burdened with a 15 per cent frequency deficit. Factor in theoretical performance increases, based on the table, plus non-perfect SLI scaling, and the GTX 590 ends up just over 50 per cent quicker than the GTX 580.
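Those percentages can be sanity-checked straight from the table. A back-of-envelope sketch, where the SLI-scaling figure is our own assumption rather than a measured number:

```python
# Rough check on the GTX 590 vs GTX 580 comparison, using the table's values.
core_deficit = 1 - 607 / 772          # 590's core clock vs 580's: ~21% slower
mem_deficit = 1 - 3414 / 4008         # 590's memory clock vs 580's: ~15% slower
theoretical_uplift = 2488 / 1581 - 1  # ~57% more GFLOPS on paper

sli_scaling = 0.96  # assumed, hypothetical scaling efficiency for two GPUs
real_uplift = (1 + theoretical_uplift) * sli_scaling - 1

print(f"Core clock deficit:  {core_deficit:.1%}")
print(f"Memory deficit:      {mem_deficit:.1%}")
print(f"Estimated uplift:    {real_uplift:.1%}")
```

With scaling in the mid-90s, the estimate lands just above 50 per cent, in line with the real-world result quoted above.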

Moving on to the show pony for today, the GTX 690 also keeps the single-GPU's topology intact; there are no architecture trade-offs when compared to the GTX 680. The key difference this time around is that NVIDIA has a greater power budget to play with - the GTX 680 consumes less than 200W at full chat - and the nuances of GPU Boost lend a helping hand. Armed with a 300W TDP, which is actually conservative for a dual-GPU card, the GTX 690's core speed is around 10 per cent lower than a GTX 680's but the memory frequency remains the same. However, the 690's GPU Boost is more aggressive, according to NVIDIA, and the card jumps to, on average, 1,019MHz when gaming. Assuming this is true, the GTX 690 should perform to within a few per cent of a couple of GTX 680s in two-card SLI.
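That "few per cent" claim is easy to put a number on. Since each of the 690's GPUs is architecturally identical to a GTX 680, relative theoretical throughput is simply the ratio of the average boost clocks NVIDIA quotes:

```python
# How far should a GTX 690 trail two GTX 680s in SLI, on paper?
# Both use identical GK104 GPUs, so the gap reduces to the boost-clock ratio.
gtx690_boost = 1019  # MHz, average gaming boost clock per NVIDIA
gtx680_boost = 1058  # MHz, average gaming boost clock per NVIDIA

deficit = 1 - gtx690_boost / gtx680_boost
print(f"Theoretical deficit vs. GTX 680 SLI: {deficit:.1%}")  # ~3.7%
```

A shade under four per cent, which squares with NVIDIA's claim, though real-world results will depend on whether both cards sustain those boost clocks.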

Think of the GeForce GTX 690 as a single board that represents, for all intents and purposes, two GTX 680s running in tandem. NVIDIA has picked the very best-yielding Kepler cores - the ones that can run at high speeds with low voltage - and engineered them on to a single PCB. Such presumed performance hegemony arrives with a $999 (£825) street price, or about the same as two GTX 680s, and NVIDIA has no inclination or pressure to reduce pricing until AMD has a dual-GPU response out in the wild.