
Nvidia GeForce RTX 3080 Founders Edition examined

by Tarinder Sandhu on 10 September 2020, 14:01

Tags: NVIDIA (NASDAQ:NVDA)

Quick Link: HEXUS.net/qaen6n


Nvidia announced next-generation GeForce RTX 3070, RTX 3080 and RTX 3090 graphics cards last week. Based on the Ampere architecture already powering the datacentre-optimised A100, bringing the latest and greatest into the gaming fold was the next logical step.

You won't have long to wait for Ampere in GeForce form, either: the RTX 3080 goes on sale September 17, followed by the range-topping RTX 3090 a week later and the RTX 3070 at some point in October. With the RTX 3080's full performance reveal nearest, on September 14, Nvidia is stepping aboard the hype train by allowing reviewers to showcase the Founders Edition (FE) card early.

Nvidia has set the benchmark for industrial design with a long line of FE cards. Aesthetically impressive and built to high standards, they have arguably favoured form over function: cooling performance has not been as robust as the best add-in cards (AICs) from the likes of Asus and MSI, and they've been noisier at both idle and load. Looking to right those wrongs is the latest iteration, which looks like no other.

The Nvidia GeForce RTX 3080 FE's design responds to those criticisms and to the need to pull away substantially more heat than the incumbent range. Though the underlying Ampere silicon is fabricated on an 8nm process from Samsung, compared with TSMC's 12nm on the RTX 20-series, the sheer size of the new GA102 die - 28.3bn transistors - and a voltage/frequency curve that scales nicely higher up the wattage spectrum encourage Nvidia to increase this card's TDP from 250W on the RTX 2080 Ti to 320W. That's a 28 per cent increase for a smaller, more energy-efficient process!

Having to deal with the extra power and increased transistor density is not a job for RTX 20-series coolers. Nvidia has therefore gone back to the drawing board and built a cooling solution that, it claims, is not only quieter than the current high-end FE solutions but cooler too, which is impressive given the extra 70W it has to deal with.

Measuring 285mm long, 112mm high and taking up a full two slots, it is the cooler, not the PCB, that takes up the space. Nvidia's thermal and noise strategy is to remove as much of the PCB as possible, leaving the most room for case airflow to circulate through the cooling apparatus, through the voluminous fins and out the top or rear.

Here's the card from the other side. Note the second spinner on the back of the card? It's a flow-through pull fan drawing air in from the fin-stack and pushing it towards the CPU and rear fan exhaust on your chassis. It's important to understand there is no PCB interfering with the airflow on this side; it's all fan, heatpipes, and fins.

The other fan, also 80mm and closest to the I/O bracket, can run at different speeds depending upon temperature and load. Both switch off at low loads. This one pushes air onto the cooler and out the rear. Nvidia still employs a hybrid vapour chamber and an array of heatpipes to move heat away to the host of aluminium fin-stacks that constitute the bulk of the card.

The unique design means this card does not use what is considered to be the reference PCB. Instead, Nvidia is keeping this design for itself and seeding partners with a full-width PCB on which they can mount their own coolers.

The GeForce logo is moved over to one side. It's backlit in white and is in keeping with the minimalist design. The same lighting creates a white V-shape by the nearest fan. Note the power connector in the middle? This is where the PCB actually ends. The need to remove as much of it as possible, for throughflow reasons discussed above, means that Nvidia wants to minimise protrusions that eat up valuable space and take away from cooling potential.

Such thinking forces removal of the traditional dual 8-pin PCIe connectors, which are replaced by a tiny custom 12-pin connector. Nvidia supplies the requisite 12-pin to dual 8-pin adapter in the box, and also recommends a minimum 750W PSU for the RTX 3080 and RTX 3090. Partner cards, of course, won't need such an adapter because they will use reference PCBs on their aftermarket designs.

Built like the proverbial tank, the RTX 3080 FE weighs in at 1,358g but doesn't sag because of its rigidity. Subjectively, it looks attractive sat in any modern build, augmented by a wraparound heatsink design and black-and-grey colour scheme. Eagle-eyed readers will notice that going down this route leaves no space for an SLI connector via NVLink; that feature is limited to the champion RTX 3090.

Nvidia upgrades HDMI to v2.1, offering up to 8K60 with 12-bit HDR from a single cable with Display Stream Compression (DSC) active. The same output can be achieved over DisplayPort 1.4a with DSC, and adding a second DP cable increases output to 8K120. GeForce Experience can now capture at up to 8K30, and supports HDR capture at all resolutions, too.

The only question remaining is performance. It's something we can't share at the moment, but do head back on September 14 to find out if Ampere's arsenal is as impressive as Nvidia contends.



HEXUS Forums :: 48 Comments

That 12 pin placement is really going to screw with cable management and my OCD!
Be interesting to see what the 3080 does on 250W, given it's meant to be smaller and more efficient than the 2080. Add to this something I read about the Ampere CUDA cores not being as good as Turing's - are we actually looking at something as impressive as we've been led to believe, or are Nvidia having to give it more power to get decent clocks because the 8nm process isn't as good?

Something somewhere doesn't quite add up…
GSV Trig wrote: or are Nvidia having to give it more power to get decent clocks because the 8nm process isn't as good?
A bit of that, plus having to use bigger chips to counter AMD.
GSV Trig wrote: Be interesting to see what the 3080 does on 250W, given it's meant to be smaller and more efficient than the 2080. Add to this something I read about the Ampere CUDA cores not being as good as Turing's - are we actually looking at something as impressive as we've been led to believe, or are Nvidia having to give it more power to get decent clocks because the 8nm process isn't as good?

Something somewhere doesn't quite add up…

If you purely went by watts per core between the RTX 2080 FE and the RTX 3080 FE, completely ignoring any other changes (process, architecture, differences in base and boost clocks between generations), it does look like the cores are more energy-efficient. This is a completely unscientific comparison, though.


RTX 2080 FE - 250W / 2,944 cores = 0.084 W per core
RTX 3080 FE - 320W / 8,704 cores = 0.036 W per core
Iota wrote:
If you purely went by watts per core between the RTX 2080 FE and the RTX 3080 FE, completely ignoring any other changes (including process change, architecture change etc, differences in base clocks and boost clocks between generation), it does look like the cores are more energy efficient. This is a completely unscientific comparison though.


RTX 2080 FE - 250W / 2944 = 0.084
RTX 3080 FE - 320W / 8704 = 0.036

I'm sure I read somewhere that the cores are actually less powerful in Ampere, though, but there are more of them, so that unscientific comparison covers nothing but raw power requirements.
Wonder how the 3080 would cope if you did set its limit to 250W, though - that would show us where and why that limit's been lifted to 320W. Perhaps Nvidia have struggled to get Ampere where it is. Yes, it's good, but will it be better when we get to the Super/Ti models and the process is a bit more mature, or coming from someone else…
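For what it's worth, the watts-per-core figures in the thread above are straightforward to reproduce (the poster truncated rather than rounded the decimals); a quick sketch using only the TDPs and CUDA-core counts quoted:

```python
# (TDP in watts, CUDA core count), as quoted in the thread above
cards = {
    "RTX 2080 FE": (250, 2944),
    "RTX 3080 FE": (320, 8704),
}

# Naive per-core power draw, ignoring clocks, process and architecture
for name, (tdp, cores) in cards.items():
    print(f"{name}: {tdp / cores:.4f} W per CUDA core")
```

On those (admittedly unscientific) numbers, per-core power falls by more than half even as total board power rises 28 per cent.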