
AMD Radeon 300-series explored

by Tarinder Sandhu on 18 June 2015, 13:00

Tags: AMD (NYSE:AMD)

Quick Link: HEXUS.net/qacsdz


AMD is taking a two-pronged approach for its discrete desktop graphics portfolio in the first half of this year. The first action sees the company release five 'new' GPUs that together constitute the Radeon R7 and R9 300-series family. The word new is used in a loose sense because the underlying technology for these five GPUs - R9 390X, R9 390, R9 380, R7 370 and R7 360 - is already in the market and sold under the Rx 200-series banner.

So is this a case of AMD taking rebranding to a whole new level by renaming an entire family? The answer is a mixture of yes and no, and we will explain why. Let's take the trio of R9 cards first and see how they stack up.

The new Radeon R9 range

 

| | R9 390X | R9 290X | R9 390 | R9 290 | R9 380 | R9 285 |
|---|---|---|---|---|---|---|
| Launch date | Jun 2015 | Oct 2013 | Jun 2015 | Nov 2013 | Jun 2015 | Sep 2014 |
| GCN version | 1.1 | 1.1 | 1.1 | 1.1 | 1.2 | 1.2 |
| DX support | 12 | 12 | 12 | 12 | 12 | 12 |
| Process (nm) | 28 | 28 | 28 | 28 | 28 | 28 |
| Transistors (mn) | 6,200 | 6,200 | 6,200 | 6,200 | 5,000 | 5,000 |
| Approx Die Size (mm²) | 438 | 438 | 438 | 438 | 359 | 359 |
| Full implementation of die | Yes | Yes | No | No | No | No |
| Processors | 2,816 | 2,816 | 2,560 | 2,560 | 1,792 | 1,792 |
| Texture Units | 176 | 176 | 160 | 160 | 112 | 112 |
| ROP Units | 64 | 64 | 64 | 64 | 32 | 32 |
| Peak GPU Clock/Boost (MHz) | 1,050 | 1,000 | 1,000 | 947 | 970 | 918 |
| Peak GFLOPS (SP) | 5,914 | 5,632 | 5,120 | 4,849 | 3,476 | 3,290 |
| Peak GFLOPS (DP) | 739 | 704 | 640 | 606 | 435 | 411 |
| Memory Clock (MHz) | 6,000 | 5,000 | 6,000 | 5,000 | 5,700 | 5,500 |
| Memory Bus (bits) | 512 | 512 | 512 | 512 | 256 | 256 |
| Max bandwidth (GB/s) | 384 | 320 | 384 | 320 | 182.4 | 176 |
| Default memory size (MB) | 8,192 | 4,096 | 8,192 | 4,096 | 4,096 | 2,048 |
| Power Connectors | 8+6-pin | 8+6-pin | 8+6-pin | 8+6-pin | 6+6-pin | 6+6-pin |
| TDP (watts) | 275 | 290 | 275 | 275 | 190 | 190 |
| GFLOPS per watt | 21.50 | 19.42 | 18.61 | 17.63 | 18.30 | 17.32 |
| Current price (Newegg) | $429 | $329 | $349 | $280 | $249 | $199 |
| Frame-rate control* | Yes | No | Yes | No | Yes | No |

Radeon R9 390X vs. Radeon R9 290X

As is now patently obvious, the Radeon R9 390X is not the much-rumoured Fiji-based card equipped with HBM memory - that will come later. 390X, however, uses the Hawaii XT underpinnings of the 290X and sprinkles in marginal improvements along the way.

GPU technology is an iterative process whereby minor improvements are introduced over time. The 18 months since the launch of the Hawaii core have enabled AMD to refine the ASIC design through microcode enhancements, thermal benefits arising from a more mature manufacturing process, and so on. The R9 390X, from a hardware point of view, is the 290X improved.

The default core frequency is increased by five per cent, from 1,000MHz to 1,050MHz, and the widespread availability of faster SK hynix GDDR5 memory means the GPU is outfitted with 6Gbps modules instead of 5Gbps - a 20 per cent increase in bandwidth. And in an effort to futureproof the card as much as possible, the default memory configuration doubles up to 8GB.
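For those who like to see the arithmetic, both claims fall straight out of the table above. A quick back-of-the-envelope check - our own sums, not AMD's:

```python
# Peak memory bandwidth = effective data rate (Gbps) x bus width (bits) / 8.
def bandwidth_gb_s(data_rate_gbps, bus_bits):
    return data_rate_gbps * bus_bits / 8

r9_290x = bandwidth_gb_s(5.0, 512)   # 320.0 GB/s
r9_390x = bandwidth_gb_s(6.0, 512)   # 384.0 GB/s

print(f"Bandwidth uplift:  {r9_390x / r9_290x - 1:.0%}")   # 20%
print(f"Core-clock uplift: {1050 / 1000 - 1:.0%}")          # 5%
```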

Summarising the hardware, the 390X enjoys a small uptick in core frequency, lots more bandwidth and double the framebuffer, while total average board power drops slightly, from 290W to 275W.

It can be argued that the 390X already exists, with partners such as Sapphire having 8GB-equipped, overclocked R9 290X cards in the wild; the 390X merely sets a new base standard for the Hawaii XT GPU. These improvements, AMD says, are enough to provide an extra 10 per cent of in-game performance over a standard 290X and, more tellingly, according to internal benchmarks, give the R9 390X enough ammunition to outgun the rival GeForce GTX 980 graphics card.

And it's this comparison with the GTX 980 that AMD is most keen to shout about. Said Nvidia GPU currently retails for $499, so while the R9 390X is more expensive than the 290X it replaces, AMD believes it still offers value for PC users looking to upgrade their three-year-old card to something shiny. We reckon that anyone owning an R9 290/290X-class card need not bother.

Radeon R9 390 vs. Radeon R9 290

A very similar tack is followed by the R9 390. Keeping the same basic architecture topology - shaders, texture units, ROPs, etc. - means that hardware differentiation takes place on frequency and framebuffer size. This new GPU sees similar increases in core and memory clocks, and just like the 390X, the standard framebuffer is doubled to 8GB. We reckon a standard R9 390 will benchmark at about the same level as the R9 290X.
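To put some rough numbers on that reckoning - our own arithmetic from the table above, assuming the usual GCN figure of two FLOPs per shader per clock:

```python
# Peak single-precision throughput = shaders x 2 FLOPs per clock x clock (GHz).
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

r9_390  = peak_gflops(2560, 1.000)   # 5,120 GFLOPS on a 384 GB/s bus
r9_290x = peak_gflops(2816, 1.000)   # 5,632 GFLOPS on a 320 GB/s bus

# The 390 gives up roughly nine per cent of shader throughput but gains
# 20 per cent more memory bandwidth, hence our 'about the same' call.
print(f"Shader-throughput deficit: {1 - r9_390 / r9_290x:.0%}")   # 9%
```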

This extra spurt of graphical horsepower is enough, AMD indicates, to outdistance the GeForce GTX 970 while getting close to the GTX 980. The extra memory bandwidth and, to some extent, the larger framebuffer are key drivers of performance at higher resolutions and image-quality settings. AMD is banking on improvements to these two facets being enough to make the R9 390 compelling in its own right.

Comparisons against the cheaper, end-of-line R9 290 and 290X are inevitable. Those GPUs are being phased out immediately. AMD's add-in board partners have indicated to us that there's little stock left of 200-series parts in the channel, so if you want a bargain 290/290X then now is as good a time as any.

Radeon R9 380 vs. Radeon R9 285

Why spoil a winning 'rebrand' formula? AMD retires the Tahiti-based R9 280 and 280X - the same GPU that can be traced all the way back to December 2011's Radeon HD 7970 - and moves forward with the newer Tonga-based design. Radeon R9 380 is the R9 285 reimagined. We see evidence of the same frequency upticks, though the memory-performance improvement isn't nearly as large.

AMD's partners will be retailing both 2GB- and 4GB-equipped R9 380 boards, with the former having a slower memory speed of 5.5GHz. We expect to see the smallest generation-to-generation improvement for this particular model.

The Radeon R7 range

 

| | R7 370 | R7 265 | R7 360 | R7 260 |
|---|---|---|---|---|
| Launch date | Jun 2015 | Mar 2014 | Jun 2015 | Oct 2013 |
| GCN version | 1.0 | 1.0 | 1.1 | 1.1 |
| DX support | 12 | 12 | 12 | 12 |
| Process (nm) | 28 | 28 | 28 | 28 |
| Transistors (mn) | 2,800 | 2,800 | 2,080 | 2,080 |
| Approx Die Size (mm²) | 212 | 212 | 160 | 160 |
| Full implementation of die | No | No | No | No |
| Processors | 1,024 | 1,024 | 768 | 768 |
| Texture Units | 64 | 64 | 48 | 48 |
| ROP Units | 32 | 32 | 16 | 16 |
| Peak GPU Clock/Boost (MHz) | 975 | 925 | 1,050 | 1,000 |
| Peak GFLOPS (SP) | 1,997 | 1,894 | 1,613 | 1,536 |
| Memory Clock (MHz) | 5,600 | 5,600 | 6,500 | 6,000 |
| Memory Bus (bits) | 256 | 256 | 128 | 128 |
| Max bandwidth (GB/s) | 179.2 | 179.2 | 104 | 96 |
| Default memory size (MB) | 2,048/4,096 | 2,048 | 2,048 | 1,024 |
| Power Connectors | 6-pin | 6-pin | 6-pin | 6-pin |
| TDP (watts) | 110 | 150 | 100 | 95 |
| GFLOPS per watt | 18.15 | 12.63 | 16.13 | 16.17 |
| Current price (Newegg) | ? | $149 | ? | $109 |
| Frame-rate control* | Yes | No | Yes | No |

Radeon R7 370 vs. Radeon R7 265

AMD chooses to update the R7 265 to R7 370 status. That GPU is not the full implementation of the Curacao die - there's a model with 1,280 shaders and consequently more performance. Whatever the case, the R7 370 is another speed-bumped model that, we reckon, will benchmark just a few per cent higher than its predecessor. There are no changes to the composition of the back end, either in frequency or hardware, so the R7 370 represents the smallest improvement possible.

The R7 370 uses the oldest version of the GCN architecture, v1.0, and is the only new 300-series GPU without explicit support for AMD's FreeSync technology. The various 300-series cards span three GCN generations - 1.0, 1.1 and 1.2 - and each has a different IP-level feature-set.

Radeon R7 360 vs. Radeon R7 260

The R7 360 uses the GCN 1.1-based Bonaire GPU as the base blueprint. The strategy is now abundantly simple: add a touch more core speed, increase the size of the default framebuffer, perhaps add some more memory mojo, and then rebrand to a new series. Users need to be careful that they're spending their money in the most efficient way because, just like the R7 370, the newer GPU is not a full implementation of that particular die.

The Bonaire XTX Radeon R7 260X is a better GPU than the R7 360 - it has more shaders and more associated performance, so readers need to keep a keen eye on generation-to-generation specs and pricing to ensure they're receiving the best bang for their buck. Like every other GPU based on GCN 1.1 or later, the R7 360 supports FreeSync.

Software sauce - frame-rate targeting control and virtual super resolution

AMD is aware that effectively rebranding a whole series, albeit with performance improvements over the previous generation, is a dangerous game to play, especially for enthusiasts who go through specifications with a fine-tooth comb. Looking to add more value, the company is introducing a number of technologies that, for now, will be limited to the 300-series.

The first is frame-rate targeting control. As the name suggests, the driver enables a specific framerate target to be set and held, an approach that is useful if the card produces a surfeit of performance in easy-to-render games. There's little point in churning out 150fps at full GPU chat when, say, 60fps will do. Frame-rate control dynamically reduces the frequency and power consumption of the GPU for a quieter, cooler experience.
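AMD hasn't detailed how the driver implements the cap, but the principle is easy to sketch: time each frame and, if it completes ahead of the target, spend the leftover budget idling - or, in the driver's case, dropping clocks and voltage. A minimal Python illustration, with render_frame() as a stand-in for the real GPU workload:

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS          # ~16.7ms per frame

def render_frame():
    """Stand-in for the real GPU workload."""
    time.sleep(0.004)                     # pretend the frame took 4ms

for _ in range(300):                      # run a few seconds' worth of frames
    start = time.perf_counter()
    render_frame()
    spare = FRAME_BUDGET - (time.perf_counter() - start)
    if spare > 0:
        # A real driver would use this headroom to lower GPU clocks and
        # voltage for a cooler, quieter card; here we simply wait it out.
        time.sleep(spare)
```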

Such technology is included for all 300-series cards but there is no genuine reason why it can't run on the older 200-series mob. AMD says it has 'validated' frame-rate control for the newer GPUs; we feel that it's deliberately not included on the previous-generation GPUs because such a feature provides some form of family differentiation. And the tech certainly isn't new; Nvidia has had it available for its GPUs for a while now.

Another take on an existing Nvidia technology is AMD's virtual super resolution. The premise here is to run the game at a higher internal resolution and downsample the output for monitors with a lower native resolution. Nvidia calls this dynamic super resolution, and it's useful when, again, the GPU has performance to spare, usually in older titles or online games that place only a reasonable load on the card.
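The mechanics are simple enough to sketch, too: render internally at a multiple of the display resolution, then filter the result down to native. A toy example using NumPy with a plain box filter - AMD's actual filter isn't documented, so treat this purely as an illustration:

```python
import numpy as np

def downsample_2x(frame):
    """Average each 2x2 block of an (H, W, 3) frame into one output pixel."""
    h, w, c = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

internal = np.random.rand(2160, 3840, 3)   # stand-in for a 4K internal render
native = downsample_2x(internal)           # 1080p output for the monitor
print(native.shape)                        # (1080, 1920, 3)
```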

It's a wrap

The new AMD 300-series GPUs are based on existing technology that has been tweaked to offer slightly higher performance, be that through higher core and memory frequencies or through a larger framebuffer. These newer cards often arrive with a higher price point than the ones they're based upon, though such inter-family pricing disparity will become less of an obvious issue when stock of 200-series GPUs is fully depleted.

The pace of GPU development has slowed enough in recent years to make such across-the-range rebrands almost inevitable - you're not going to see new architectures on a yearly cadence. The enthusiast in us remains underwhelmed because we're seeing technology from 2013 resurface in 2015 in another guise. That situation is not good for the consumer, add-in board partners or games developers. Boundaries need to be pushed at more than just the bleeding edge of performance, so we turn our attention to AMD's upcoming Radeon R9 Fury X with much anticipation.



HEXUS Forums :: 40 Comments

Almost identical cards again? Seriously AMD, do you want to go out of business?
Rather a poor effort, AMD. The price hike from 290x to 390x looks outrageous.
I guess the 300 series is just filler until the Fury tech can filter down into a full range of cards, and that these represent minor improvements because they blew all their budget on Fury?
To be fair to AMD, this fixes some of the odd discrepancies in their range (e.g. the 285 had 2GB of RAM whilst the 280 had 3GB) and 8GB on top-end cards gives a nice bump over Nvidia's 4GB/6GB alternatives. With consoles featuring 8GB combined RAM, one would tend to feel 4GB should really be enough, but for 4K or large amounts of AA, or simply ultra-quality textures, the extra room seems well worth having on top-tier products. Shame about the pricing, though, which really needed to match the previous generation to be truly competitive.

Also, not featuring GCN 1.1 top-to-bottom really is appalling. I know it hasn't caught on, but it makes a mockery of technologies like TrueAudio and Mantle when they cannot deliver technologies across the whole of their “non-budget” range after two generations.
Irien
Also, not featuring GCN 1.1 top-to-bottom really is appalling. I know it hasn't caught on, but it makes a mockery of technologies like TrueAudio and Mantle when they cannot deliver technologies across the whole of their “non-budget” range after two generations.

They've all got Mantle I think, but I agree it's bad to mix up, and especially bad to have a higher-numbered card with fewer features than a lower-numbered one in the same series.

Inevitable, but unsatisfactory all the same.

That said, I am liking what they've done with the 390/X; if the price normalises then they're a good set of cards... but internal competition from the Nano will be interesting to say the least.
kalniel
They've all got Mantle I think

They've more or less discontinued Mantle, suggesting that devs just adopt DX12 instead if I recall correctly.

As for the disappointing rebrand, **** have reviewed the MSI 390x, it's almost as good as the GTX 980. It looks like the coolers and PCBs have been completely re-engineered (or at least MSI's Twin Frozr card has had this treatment)

Edit: Uh… okay I'm not allowed to mention a certain three dimensional over clocking website…