HEXUS Forums :: 16 Comments

Posted by [GSV]Trig - Mon 15 Mar 2021 11:38
Are there plans for AMD to release a card on a decent memory bus and give us the GPU in its fully capable glory?
Posted by DanceswithUnix - Mon 15 Mar 2021 13:15
[GSV]Trig
Are there plans for AMD to release a card on a decent memory bus and give us the GPU in its fully capable glory?

You mean one where they get rid of the Infinity Cache and use that silicon for memory controllers and more shaders to make a more expensive card that only gives more performance to miners? Not that I have heard.
Posted by habilain - Mon 15 Mar 2021 13:18
[GSV]Trig
Are there plans for AMD to release a card on a decent memory bus and give us the GPU in its fully capable glory?

The point of the Infinity Cache is that it doesn't need a bigger memory bus to be fully capable. Unless you want to do Ethereum mining, of course.
Posted by kalniel - Mon 15 Mar 2021 13:23
DanceswithUnix
You mean one where they get rid of the Infinity Cache and use that silicon for memory controllers and more shaders to make a more expensive card that only gives more performance to miners? Not that I have heard.
I thought there were rumours of a headless mining/compute card coming?
Posted by [GSV]Trig - Mon 15 Mar 2021 13:39
habilain
The point of the Infinity Cache is that it doesn't need a bigger memory bus to be fully capable. Unless you want to do Ethereum mining, of course.

I don't want to mine on it, no. I want to know: given say GDDR6X or HBM2 or whatever, what will it really do…
Posted by cheesemp - Mon 15 Mar 2021 14:05
Sounds like Infinity Cache is good news for gamers, based on these comments. More please, AMD. The sooner cryptocurrency dies the better, IMO. (It's dreadful for the planet and is really bad for money laundering, in case anyone wants to know why.)
Posted by DanceswithUnix - Mon 15 Mar 2021 14:44
[GSV]Trig
I don't want to mine on it, no. I want to know: given say GDDR6X or HBM2 or whatever, what will it really do…

AMD have invested a lot of expensive silicon in that cache. Assuming they got their modelling and sizing right when designing the thing, the answer should be "sod all beyond this", or else they got it wrong.

But perhaps there will be an HBM part for professional use, like the one that ended up in the Radeon VII before.

OTOH, with faster RAM they might be able to push for a slimmer 128-bit interface.
Posted by QuorTek - Mon 15 Mar 2021 17:13
The numbers are good…
Posted by habilain - Mon 15 Mar 2021 17:39
[GSV]Trig
habilain
The point of the Infinity Cache is that it doesn't need a bigger memory bus to be fully capable. Unless you want to do Ethereum mining, of course.

I don't want to mine on it, no. I want to know: given say GDDR6X or HBM2 or whatever, what will it really do…

I sort of agree, sort of disagree with @DancesWithUnix's analysis. There's a very high chance that removing the Infinity Cache would make performance go backwards. The RX 6000 series clocks higher than the competition (they boost up to 2GHz, vs 1.7GHz for a 3080, for example), and the only way I can see a GPU being fed at that frequency is via a cache. Going out to GPU memory would only work if the GPU memory were closer to the boost frequencies, which it isn't.

So best case is probably no difference, worst case is a drop in performance.
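The hit-rate argument here can be sketched with a simple blended-bandwidth model. The figures below are illustrative assumptions, loosely based on AMD's RDNA 2 marketing numbers (roughly a 58% hit rate at 4K for the 128MB Infinity Cache), not numbers from this thread:

```python
# Simple model: effective bandwidth is a hit-rate-weighted blend of
# on-die cache bandwidth and off-chip DRAM bandwidth.
def effective_bandwidth_gbs(hit_rate: float, cache_gbs: float, dram_gbs: float) -> float:
    return hit_rate * cache_gbs + (1 - hit_rate) * dram_gbs

# Assumed illustrative figures: ~58% cache hit rate at 4K, ~1664 GB/s
# from the cache, 512 GB/s from a 256-bit GDDR6 bus at 16Gbps.
print(f"{effective_bandwidth_gbs(0.58, 1664, 512):.0f} GB/s effective")
```

On those assumptions the cache more than doubles what the narrow bus alone could deliver, which is the point: a wider bus attacks the miss term, the cache attacks the hit term.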
Posted by kalniel - Mon 15 Mar 2021 17:57
habilain
I sort of agree, sort of disagree with @DancesWithUnix's analysis. There's a very high chance that removing the Infinity Cache would make performance go backwards. The RX 6000 series clocks higher than the competition (they boost up to 2GHz, vs 1.7GHz for a 3080, for example), and the only way I can see a GPU being fed at that frequency is via a cache. Going out to GPU memory would only work if the GPU memory were closer to the boost frequencies, which it isn't.

So best case is probably no difference, worst case is a drop in performance.

I think the argument wasn't to just remove the cache, of course that would make perf go backwards, but to do so at the same time as increasing the number of memory controllers for a wider bus (together with faster RAM, perhaps). But as DancesWithUnix pointed out, they wouldn't have gone down the narrow+cache route unless they had already modelled the options and found it was the best solution.
Posted by CAT-THE-FIFTH - Mon 15 Mar 2021 18:11
Some of the RT results don't look as bad as I expected!

Makes me wonder if it's certain RT effects, and aspects of denoising, which are affecting AMD GPUs more?
Posted by habilain - Mon 15 Mar 2021 19:00
kalniel
I think the argument wasn't to just remove the cache, of course that will make perf go backwards, but to do so at the same time as increasing the number of memory controllers for a wider bus (together with faster ram perhaps). But as DancesWithUnix pointed out, they wouldn't have gone down the narrow+cache route unless they had already modelled the options and found it was the best solution.

I suppose so. Although it might not be the best solution outright, it's the best solution they have access to while optimising cost/performance: HBM2 plus a wide bus is expensive, and I'm pretty sure nVidia have bought 100% of GDDR6X production, so it's not available.
Posted by [GSV]Trig - Mon 15 Mar 2021 19:04
habilain
I suppose so. Although it might not be the best solution outright, it's the best solution they have access to while optimising cost/performance: HBM2 plus a wide bus is expensive, and I'm pretty sure nVidia have bought 100% of GDDR6X production, so it's not available.

Yeah, that's basically where I was going: if it wasn't a cost/ease-of-manufacture exercise, then how fast would it be…
Posted by kalniel - Mon 15 Mar 2021 19:11
Yeah, I'm not sure GDDR6X is really moving things on much: 19-21Gbps vs 18Gbps per chip, and a whole bunch of supply/power constraints. A 320-bit bus and some top-binned GDDR6 chips would be interesting from the red side, since the rest of the silicon seems to respond well to extra power. But they've clearly crunched the numbers and come up with narrow+cache instead.
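The per-chip data rates being compared here translate into peak bandwidth with simple arithmetic: GB/s = (bus width in bits / 8) × Gbps per pin. The card configurations below are commonly quoted specs, assumed for illustration rather than taken from the thread:

```python
# Peak memory bandwidth from bus width and per-pin data rate.
def peak_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

# Assumed configurations (commonly quoted specs, not from the thread):
configs = {
    "256-bit GDDR6 @ 16Gbps (RX 6800 class)":  (256, 16.0),
    "320-bit GDDR6X @ 19Gbps (RTX 3080 class)": (320, 19.0),
    "Hypothetical 320-bit GDDR6 @ 18Gbps":      (320, 18.0),
}

for name, (bus, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(bus, rate):.0f} GB/s")
```

On those assumptions the hypothetical 320-bit GDDR6 card lands within a few percent of the GDDR6X part, which is why the per-chip speed bump alone looks underwhelming.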
Posted by DanceswithUnix - Mon 15 Mar 2021 20:56
[GSV]Trig
Yeah, that's basically where I was going: if it wasn't a cost/ease-of-manufacture exercise, then how fast would it be…

If I'm reading this right, the 6800 has 50% more shaders to run than the 5700 XT with just 33% more bus width to feed them? So this one should have plenty, and the higher-end parts are the starved ones.

Wide is much better than fast, so the real way to feed a GPU is HBM, but people seem to turn their noses up at HBM these days.
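The "wide vs fast" trade-off is easy to put numbers on. Assuming the standard figures (an HBM2 stack is 1024 bits wide at roughly 2Gbps per pin, a GDDR6 chip is 32 bits wide at 16Gbps), the same arithmetic shows how much width HBM brings:

```python
# "Wide vs fast": the same peak bandwidth can come from a wide, slow bus
# (HBM) or a narrow, fast one (GDDR6). GB/s = bits / 8 * Gbps per pin.
def peak_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

# Assumed figures: 1024-bit HBM2 stack at ~2Gbps/pin; 32-bit GDDR6
# chip at 16Gbps/pin.
hbm2_stack = peak_bandwidth_gbs(1024, 2.0)
gddr6_chip = peak_bandwidth_gbs(32, 16.0)

print(f"One HBM2 stack: {hbm2_stack:.0f} GB/s")
print(f"GDDR6 chips needed to match it: {hbm2_stack / gddr6_chip:.0f}")
```

One stack matching several GDDR6 chips at an eighth of the pin speed is the appeal; the cost of the interposer and stacked dies is the catch.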
Posted by kompukare - Tue 16 Mar 2021 10:21
DanceswithUnix
Wide is much better than fast so the real way to feed a GPU is HBM, but people seem to turn their nose up at HBM these days.

Speaking of HBM, the Zen3 EPYC launch had an interview over at AT and there's this titbit:
We see more and more interest in using high bandwidth memory, for an on-package solution. I think you will see SKUs in the future from a variety of companies incorporating HBM, especially for AI. That will initially be fairly specialized, to be candid, because HBM is extremely expensive. So for most, the standard DDR memory, even DDR5 memory, means that HBM is going to be confined initially to applications that are incredibly memory latency sensitive, and then, you know, it'll be interesting to see how it plays out over time.
https://www.anandtech.com/show/16548/interview-with-amd-forrest-norrod-milan

Which implies that even for HPC it is too expensive. I guess HPC would need a lot more than 4GB or 8GB.
As for people turning their noses up at HBM, I thought people just weren't impressed with the 4GB of Fury.
And I guess AMD weren't impressed with allegedly losing money on Fury and Vega.
Pity as AMD spent a lot of money developing HBM and all they have to show for it is the Wikipedia entries:
https://en.wikipedia.org/wiki/High_Bandwidth_Memory#Development
In fact, I think Nvidia have done better out of it despite not being involved with its development simply because they sell a lot more high-end compute cards where HBM really helps.