HEXUS Forums :: 84 Comments

Posted by MrJim - Wed 19 Sep 2018 14:16
Impressive performance…but what a price! Are there really that many people willing to drop over a grand in order to play PC games?
Posted by CAT-THE-FIFTH - Wed 19 Sep 2018 14:22
MrJim
Impressive performance…but what a price! Are there really that many people willing to drop over a grand in order to play PC games?

I have started a review thread:

https://forums.hexus.net/graphics-cards/395054-nvidia-rtx-series-review-thread.html

Some of the review sites have been quite positive and others not so much.

Also remember, the FE cards this gen are apparently far better than the previous FE cards.

Edit!!

Apparently, according to Asus, Turing was meant to be on Samsung 10nm but was moved to TSMC 12nm instead (which is basically TSMC 16nm), so they had to go with mahoosive chips.
Posted by Tunnah - Wed 19 Sep 2018 14:33
Genuine question: historically speaking, has a new product ever charged the consumer in proportion to the performance gain over the previous product before?

I'm having a hard time thinking of anything. Normally the new product gives you a performance boost, for a generally acceptable price increase over the previous product. It's never “this is 40% faster, so it's 40% more expensive” (it's actually more than that in this case).

I know we're past the good ol' days of getting the new gen at the price of the previous model (the 200 to 600 series were the same price! Near enough, anyway) but this price increase is disgusting.
Posted by CAPTAIN_ALLCAPS - Wed 19 Sep 2018 14:34
It is a pity that Intel cannot get their graphics cards out to compete with this generation, given how much silicon looks like it has been dedicated to special features.

Whilst I really like the look of DLSS, I would be much more tempted to buy 545mm²/754mm² of pure present-day graphics performance over promises (plus some more silicon wasted on ray tracing, which I cannot see as anything more than a gimmick on this generation).
Posted by CAPTAIN_ALLCAPS - Wed 19 Sep 2018 14:37
CAT-THE-FIFTH
I have started a review thread:

Also remember, the FE cards this gen are apparently far better than the previous FE cards.


According to Buildzoid, the 2080Ti board - notably the power delivery - is so well built that there is little for third party vendors to improve on.

So getting a third party release card - which is often a custom cooler on top of a reference board - is probably fine this time round.

The only problem is the outrageous price gouging.
Posted by ksdp37 - Wed 19 Sep 2018 14:41
Yes, I'm quite impressed as I play at 4K (have been for a while) and have to turn down a few settings to get a good frame rate. But then, why do I really need to see each strand of hair, how detailed the grass is, which direction the wind is blowing from and if the tree branches are swaying correspondingly?

It's nice to have some realism, otherwise we might as well stick to early-2000s games, but at 4K I'm already capturing a reasonable amount of detail anyway, so I'd probably want to focus more on the gameplay than the scenery.

Plus a ~35% gain over the 1080Ti at 70% extra cost is a big no-no from me. I'll wait for the next-gen cards or for when the 2080 Tis are cheap on eBay!
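
To put that in rough perf-per-pound terms (back-of-envelope only - the prices below are my assumptions for a 1080Ti and the 2080Ti FE, and the ~35% uplift is the figure above):

price_1080ti, price_2080ti = 650.0, 1100.0   # GBP, assumed prices
perf_1080ti, perf_2080ti = 1.00, 1.35        # relative performance, ~35% uplift assumed

extra_cost = price_2080ti / price_1080ti - 1            # ~0.69, i.e. roughly 70% more money
ppp_1080ti = perf_1080ti / price_1080ti * 1000          # ~1.54 'performance per £1000'
ppp_2080ti = perf_2080ti / price_2080ti * 1000          # ~1.23, i.e. ~20% worse value per pound

print(f"{extra_cost:.0%} extra cost, perf/£1000: {ppp_1080ti:.2f} vs {ppp_2080ti:.2f}")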

Let's see what the red team comes up with in comparison. Even if it's between the 1080Ti and 2080Ti, if the price is right…
Posted by Kanoe - Wed 19 Sep 2018 14:48
Hexus, I think you missed a 'bad'. The Ti failed the Time Spy Stress Test, showing that the cooling might not be quite up to it, or that they have scaled the fan profile back to ensure it doesn't make too much noise.

Also, for current games, as you pointed out it's wasted at FHD or QHD resolutions, but then with ray tracing you will want to run at those resolutions to get a good frame rate. So it doesn't seem to fit.
Posted by Kanoe - Wed 19 Sep 2018 14:52
Also interesting to see that the chip on the Ti is rotated 90 degrees compared to the non-Ti in order to fit everything on the board.
Posted by CAT-THE-FIFTH - Wed 19 Sep 2018 14:54
CAPTAIN_ALLCAPS
According to Buildzoid, the 2080Ti board - notably the power delivery - is so well built that there is little for third party vendors to improve on.

So getting a third party release card - which is often a custom cooler on top of a reference board - is probably fine this time round.

The only problem is the outrageous price gouging.

It also means the previous FE cards are comparatively worse off, but the new FE does look really well built. I wonder who the OEM for them is?

ksdp37
Yes, I'm quite impressed as I play at 4K (have been for a while) and have to turn down a few settings to get a good frame rate. But then, why do I really need to see each strand of hair, how detailed the grass is, which direction the wind is blowing from and if the tree branches are swaying correspondingly?

Some of the earlier games, like some from the Far Cry series, had dynamic weather systems and dynamic fire systems, so if something caught fire it would spread in the direction of the wind and cause you or enemies danger, and damage stuff. Games like Red Faction and even Crysis had buildings which could be destroyed, etc.

It seems too many games nowadays have forgotten to do things which were done yonks ago.
Posted by Andi-C - Wed 19 Sep 2018 14:55
Would like to see how a custom/3rd-party 1080 Ti stands up to either RTX FE?
Posted by MrJim - Wed 19 Sep 2018 15:01
Looking at the various reviews, it seems that gaming at 1440p with a 2080 Ti is often CPU limited, even on an 8700K @ 5GHz. So to get the most out of the 2080 Ti you should really be gaming at 4K, and to get the most out of that, you'll really be wanting a 4K 144Hz monitor. So that'll add another 2 grand to your system cost!
Posted by CAT-THE-FIFTH - Wed 19 Sep 2018 15:04
MrJim
Looking at the various reviews, it seems that gaming at 1440p with a 2080 Ti is often CPU limited, even on an 8700K @ 5GHz. So to get the most out of the 2080 Ti you should really be gaming at 4K, and to get the most out of that, you'll really be wanting a 4K 144Hz monitor. So that'll add another 2 grand to your system cost!

Apparently, according to DICE, if you want to run BFV with RTX they suggested going for at least 6C/12T IIRC, so it will be interesting to see CPU benchmarks with RTX on in the supported games.
Posted by c12038 - Wed 19 Sep 2018 16:16
Just noticed on the GPU-Z screenshot it's got beta drivers 411.51. These are nowhere to be had on the internet - are these exclusive to Hexus?
Posted by CAT-THE-FIFTH - Wed 19 Sep 2018 17:03
The RTX2070 is now nearly £600 it seems, which means there is a good chance a GTX2060/RTX2060 is going to be closer to £400 at this rate, and that assumes the exchange rate stays the same as today.

Also the Hexus review said this:

The RTX 2080 Ti is not the full implementation of the TU102 die. Rather, with it having a couple of SM units switched off and a narrower 352-bit memory bus - TU102 is specced with 384 bits - we can say that, clock for clock, it ought to provide about 95 per cent of the possible performance. Nvidia, it seems, keeps the full-fat die for the Quadro RTX 6000 chip that costs a whole heap more.

So that means a future Titan will cost even more.

The whole range has been pushed up now.

The GTX1080 launched at $599 ($699 for the FE) minimum pricing and the GTX1070 at $379 ($449 for the FE) minimum pricing.

Months later, when the full range was launched, the GTX1080TI started at $700 and the GeForce Titan at $1,200. The GTX980TI started at $650. So that is an increase in 80 Ti series pricing from $650 to $1,200 over three generations. The 70 and 80 series have gone up a minimum of $100 to $120 over the original launch, but compared to when the full Pascal lineup was launched it is more like $150 to $200. Compare that to the GTX970 and GTX980 pricing too.

Now also add the fact that the pound is weaker now than it was at the GTX1070/GTX1080 launch.
Posted by kalniel - Wed 19 Sep 2018 17:41
Wow. The 580 really spanks the 1060.
Posted by Spud1 - Wed 19 Sep 2018 17:56
I am really pleased with the benchmark results personally - they show major performance improvements @ 1440p/ultra, which is exactly what I was hoping for - it seems to be an average 40-50% increase in FPS compared to my existing 1080 FE. I am not remotely interested in 4K gaming until you can run @ ultra settings averaging *over* 60fps - but this generation is a big step towards that, so we might finally hit that point in 2 years, when I might be ready to upgrade again.

It's a big upgrade based on the benchmarks - only real question for me now is will Nvidia get my card shipped today or not?

I do find it entertaining how many people are “surprised” that the 2080 is only a bit faster than the 1080ti in current games (or the same in some) - surely that was to be expected? Top end card from last year is roughly the same as a mid range card from this year. Seems normal to me.
Posted by CAT-THE-FIFTH - Wed 19 Sep 2018 18:12
People have to consider (as some others have said here) that all this RTX stuff, etc., looks like Nvidia trying to use the commercial aspects of their GPUs for gaming.

Remember, until Maxwell the top GPU core made by AMD or Nvidia was used for dual purposes - for top-end gaming and for commercial purposes. However, this led to issues with larger chips and features not used for gaming. With Maxwell and Pascal, Nvidia bifurcated the lines: one set oriented towards normal FP32 performance used in games and the other towards FP64/FP16-based compute operations, etc. Now, with Turing, they can start to go back to using one line, as they are trying to use all the stuff in the GPU which would not normally be used for games in gaming, i.e. it's not wasted.
Posted by watercooled - Wed 19 Sep 2018 18:19
Wow, I'm really quite surprised how poorly Turing fares in reviews. Sure, we weren't realistically expecting 28nm Maxwell > 16nm Pascal uplift in performance but reviews really aren't selling Turing, and that surprises me given Nvidia's ability to market their way out of situations. Not even a demo of RTX stuff to say 'yeah but look at what it will be like!'

2080 is basically 1080Ti performance for… more money? Deal of the century there Nvidia.

And before anyone points at RTX, are you really suggesting paying that sort of money to be a beta tester for completely unproven technology given there are no actual games out yet? It's a complete unknown at this point.

WRT comments on 2080 performance vs 1080 - OF COURSE IT IS!??!! It's far more expensive, just like a 1080Ti is more expensive, will probably drop in price shortly, and gives you roughly the same in existing games. People aren't necessarily surprised at Nvidia's arbitrary branding of their products, more at the fact that you get nothing more for the same money. The difference this generation is that the ‘mid’ card (lol) has the pricing of the top-end card from last year. That's not even a sensible point to be bringing up!
Posted by CAT-THE-FIFTH - Wed 19 Sep 2018 18:32
My major technical complaint is that the RTX2080 appears to have dropped 3GB of VRAM compared to the GTX1080TI, which is an issue if you are running games at higher resolutions, or doing things like modding.
Posted by Iota - Wed 19 Sep 2018 18:48
Nvidia is therefore taking a calculated gamble that an array of games developers will integrate the necessary support for its RTX cards to truly shine. They are pregnant with performance promise, evidenced by the brief DLSS and ray tracing evaluation, and they will begin to make more sense as and when software catches up with all-new hardware.

Herein lies the crux of the problem with the new cards. Absolutely, when it works DLSS is a pretty marked change in performance, but as PC gamers we all know well enough how much of a struggle it is to get developers to do anything other than port the console version of their games to PC. Nvidia has taken a massive gamble here and priced accordingly (exorbitantly) on the premise that other parties not subject to Nvidia's whims will comply.

Why would developers? They'll develop for the mainstream market, with the odd outlier developer here and there. At the prices Nvidia is charging, they'll not gain the market share they'll require with the new cards and new features to really be able to push for that anytime soon. Unless of course they have some sneaky planned obsolescence for older hardware in the pipeline a couple of years down the line? Which means, it isn't going to happen for the mainstream for a few years yet, so why buy now for something unsupported?

Unless I'm missing something and DLSS is coming to all recent generations of Nvidia cards Soon™?
Posted by Kanoe - Wed 19 Sep 2018 21:18
It could be that the inflated prices we as consumers have to pay for the new tech, if we want the cards, are going to be used by Nvidia to bribe the game devs into including these features in their games - a way Nvidia can do it without having to spend too much of their own cash.
Posted by chj - Wed 19 Sep 2018 21:23
Iota
Herein lies the crux of the problem with the new cards. Absolutely, when it works DLSS is a pretty marked change in performance, but as PC gamers we all know well enough how much of a struggle it is to get developers to do anything other than port the console version of their games to PC. Nvidia has taken a massive gamble here and priced accordingly (exorbitantly) on the premise that other parties not subject to Nvidia's whims will comply.

Why would developers? They'll develop for the mainstream market, with the odd outlier developer here and there. At the prices Nvidia is charging, they'll not gain the market share they'll require with the new cards and new features to really be able to push for that anytime soon. Unless of course they have some sneaky planned obsolescence for older hardware in the pipeline a couple of years down the line? Which means, it isn't going to happen for the mainstream for a few years yet, so why buy now for something unsupported?

Unless I'm missing something and DLSS is coming to all recent generations of Nvidia cards Soon™?

It's worth just observing this generation to see how developer uptake goes, but it's got to start somewhere, right? The incentive for developers could be that getting the extra frames using DLSS means more leeway in other areas, but that doesn't matter if it's not coming to GPUs lower in the series. Plus I guess it depends on how easy Nvidia have made it for developers to implement, and whether Nvidia do the whole computational work for users to download. So basically I have no idea and I'm just going to wait and see.
Posted by darcotech - Wed 19 Sep 2018 21:48
Whoever buys this at this price should know that they are the ones making it possible for companies like Nvidia and Apple to up their prices like there's no tomorrow.

I cannot understand anyone who “needs” to buy this right away, without waiting at least for the end of the year and AMD's possible answer. I wouldn't even consider this for a new rig, because this generation will not last long.

Nvidia wants their cards on 7nm so that the die can be smaller and the chip cheaper for them. So you can expect new cards as early as next summer. Better, faster, more features.

Be smart.
Posted by warejon9 - Wed 19 Sep 2018 22:03
Has nobody noticed that in the TAA benchmark, i.e. before DLSS, the two cards performed the same?
Posted by =assassin= - Wed 19 Sep 2018 23:40
Nice review, nice performance, terrible pricing. It's getting scary how much graphics cards are costing now. It seems like we are forced to pay more and more for every new generation, far beyond inflation. For that reason, reviews of cards like this are just there for me to look at fancy numbers, and dream, since I'll never in a million years spend such silly money on a single component.
Posted by Safetytrousers - Thu 20 Sep 2018 02:14
Iota
Why would developers? They'll develop for the mainstream market, with the odd outlier developer here and there.

Because NVIDIA have been closely involved in the development of a number of games for some time now. They have the capital to incentivize developers to integrate RTX into games.
Posted by nobodyspecial - Thu 20 Sep 2018 04:08
What price gouging? Hexus themselves said cards are being made for barely under MSRP. Suggesting near zero margins here, probably caused by the 12nm switch from the original 10nm. Also they probably paid a premium for the first batch of GDDR6.

It's comic how much people hate NV, yet they're the only ones in the game able to pay for R&D for new tech. How do they pay for it? Oh, yeah, they actually price products to make profit, unlike AMD who listens to all these whiners who kill their company as they follow customer advice on how to price products (give it to me free, or price is too high…LOL)…YOU are the reason AMD sucks. YOU keep asking them to give away their stuff cheap, and stupidly they think that is going to win a war of dollars…LOL.

NO, you win by actually winning PERF, WATTS, HEAT, you know, these things that actually matter. It's also comic most of you will whine and then buy NV anyway…ROFL. Sales figures don't lie, revenue+income+margin etc records don't lie at NV either. ;) 10 years of CUDA took a lot of money and R&D. AMD can't do stuff like that without money. When they can afford something, it's a 2nd best product; Freesync vs. Gsync for example. “Good enough” doesn't make you rich. The best brings top dollar.

I see a stock drop at AMD incoming…I don't think 7nm will double AMD perf, and I don't see why devs won't want to move to NV's idea of the future since it seems to work, and work fast while looking better. AMD either has to come up with something better, or end up getting further behind. IF I was NV, I'd start throwing ~100mil or more at devs to specifically use DLSS etc. You should be able to kickstart 25-50 games or so with that kind of effort and make this a standard (2mil per game just to code for something that makes your game actually better?). Business is war and NV is winning it. It's unfortunate AMD still hasn't figured out how to price anything or even how to make basic business deals beneficial to the company. See console margin deal…ROFL, wafer start deal (7th rev, still killing them…ROFL), etc, they clearly have some of the worst negotiators on earth running AMD (about as bad as govt. pre trump). :)
Posted by FRISH - Thu 20 Sep 2018 04:53
nobodyspecial
What price gouging? Hexus themselves said cards are being made for barely under MSRP. Suggesting near zero margins here, probably caused by the 12nm switch from the original 10nm. Also they probably paid a premium for the first batch of GDDR6.

It's comic how much people hate NV, yet they're the only ones in the game able to pay for R&D for new tech. How do they pay for it? Oh, yeah, they actually price products to make profit, unlike AMD who listens to all these whiners who kill their company as they follow customer advice on how to price products (give it to me free, or price is too high…LOL)…YOU are the reason AMD sucks. YOU keep asking them to give away their stuff cheap, and stupidly they think that is going to win a war of dollars…LOL.

NO, you win by actually winning PERF, WATTS, HEAT, you know, these things that actually matter. It's also comic most of you will whine and then buy NV anyway…ROFL. Sales figures don't lie, revenue+income+margin etc records don't lie at NV either. ;) 10 years of CUDA took a lot of money and R&D. AMD can't do stuff like that without money. When they can afford something, it's a 2nd best product; Freesync vs. Gsync for example. “Good enough” doesn't make you rich. The best brings top dollar.

I see a stock drop at AMD incoming…I don't think 7nm will double AMD perf, and I don't see why devs won't want to move to NV's idea of the future since it seems to work, and work fast while looking better. AMD either has to come up with something better, or end up getting further behind. IF I was NV, I'd start throwing ~100mil or more at devs to specifically use DLSS etc. You should be able to kickstart 25-50 games or so with that kind of effort and make this a standard (2mil per game just to code for something that makes your game actually better?). Business is war and NV is winning it. It's unfortunate AMD still hasn't figured out how to price anything or even how to make basic business deals beneficial to the company. See console margin deal…ROFL, wafer start deal (7th rev, still killing them…ROFL), etc, they clearly have some of the worst negotiators on earth running AMD (about as bad as govt. pre trump). :)

I was wondering what sort of person would write all of that in such a tone until I read the last 2 words.

Anyway, the cost of a single component is getting quite absurd. If the prices didn't rise so much the cards would be more impressive; otherwise, eh, you're gambling on new tech, and even then going first gen on RTX for ray tracing may not be a wise choice.
Posted by Korrorra - Thu 20 Sep 2018 06:11
Looks like I know which sites to trust and which ones have signed the NDA to kiss nvidia's ass.
Posted by Troopa - Thu 20 Sep 2018 06:35
You must be a COMPLETE IDIOT to buy a 2080 et al. The performance gains look awful compared to the 1080Ti. For me, I am hoping the 1070 comes down in price a bit as this seems to be all I need for my gaming build.

It was bad enough being a kid when needing to buy a £150-£250 3dfx card for Quake/Doom was a lot, but compared to nowadays it's not even funny…
Posted by Corky34 - Thu 20 Sep 2018 08:20
Out of curiosity, and because I don't even know where to start with the maths, I was wondering, roughly, how much more performance in games would have been gained if they hadn't segmented Turing?
If all that die space they dedicated to INT32, RT ‘cores’, and Tensor ‘cores’ had been replaced with a load more of the old-style mixed-precision CUDA ‘cores’.
Posted by Spud1 - Thu 20 Sep 2018 09:06
Well my RTX2080 is now out for delivery, will be here today :)

If you are hating on it, enjoy your anger and frustration while I will enjoy a nice upgrade over my existing GTX1080. Sold my 1080 for £350, so a net upgrade cost of £400 for a 40-50% boost (before DLSS is taken into account) is awesome.

edit: just arrived in fact! Now I face a long 6 hours left at work! glad I came in early today :)
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 09:26
The amount of criticism on Hexus is utterly tame when compared to OcUK and even the Nvidia Reddit seems to have criticism too(!).

I started up the RTX series review thread on OcUK and it hit 10k views yesterday alone, and most of the posts were criticisms, with one owner trying to fight loads of posters too. TBH, this has not only been a weird launch, but I've not seen such general negativity for a while. There was some moaning about Pascal pricing, but most of it was about the lack of proper stock for months since it sold out.

Edit!!

I can see why - first people expected Vega would come in and help drop prices but it was a flop so that didn't work out.

Then mining came and prices jumped high.

Then mining went down temporarily and prices dropped to launch levels.

Then people expected the new gen would probably drop old gen prices a bit like what happened with Pascal.

Except the new gen is so high priced relative to its launch performance that Pascal does not need to drop much.

If you look at the deals section at the end of last year there were as good or even better deals on say a GTX1080 than now for example.

Then RAM pricing has only dropped slightly so perhaps it's just a general sense of frustration methinks.
Posted by Spud1 - Thu 20 Sep 2018 09:49
OcUKs forum being saltier than Hexus? I'm shocked :) :) :)

I get that people don't like the price increases, but we already know that Nvidia are not making much per sale on these based on the build cost (that's before you think about the R&D costs!). I always find it odd that people expect major companies to sell their products at a loss or break-even point.

You are probably right that it's just general frustration about the fact that these new cards are more expensive than the last generation. That's made worse by people trying to make direct comparisons, which is confused by Nvidia's branding.

These are high-end cards with a high-end price tag - not really “mainstream” so to speak, but that's OK imo - the mainstream cards will come later at much cheaper prices.
Posted by kalniel - Thu 20 Sep 2018 09:50
Spud1
I get that people don't like the price increases, but we already know that Nvidia are not making much per sale on these based on the build cost (that's before you think about the R&D costs!).

I thought margins had been continually increasing?
Posted by Spud1 - Thu 20 Sep 2018 09:57
kalniel
I thought margins had been continually increasing?

I would expect that, as a company, they would - particularly as margins on the GTX ranges will have improved and Nvidia still sell a tonne of older cards at huge margins - but I've not seen anything to say that the RTX range has an improved margin. They are at something like 30% (for the company as a whole, not on one particular range!) net atm IIRC.

We'll find out more when we get more teardowns and chip analysis of these individual cards.
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 10:01
Spud1
I get that people don't like the price increases, but we already know that Nvidia are not making much per sale on these based on the build cost (that's before you think about the R&D costs!). I always find it odd that people expect major companies to sell their products at a loss or break-even point.

Nvidia's gross margins are now higher than Intel's:

https://ycharts.com/companies/NVDA/gross_profit_margin
https://ycharts.com/companies/INTC/gross_profit_margin

Nvidia's net margins are higher than Intel's:

https://ycharts.com/companies/NVDA/profit_margin
https://ycharts.com/companies/INTC/profit_margin

Enthusiasts on tech forums spent years defending Nvidia's higher prices at each generation. Nvidia's net margins used to be between 10% and 20%, but are close to 40% now.

Intel's net margins used to be 15% to 20%, but are now 20% to 30%, so apparently Intel has more “reasonable” prices relative to production and R&D costs! :p
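
To be clear on the gross vs net distinction, since the two keep getting mixed up in these arguments (purely illustrative figures below - not Nvidia's or Intel's actual numbers):

revenue = 3000.0         # hypothetical quarterly revenue, $m
cost_of_goods = 1200.0   # hypothetical cost of building and shipping the products, $m
rnd_opex_tax = 900.0     # hypothetical R&D, operating expenses, tax, etc., $m

gross_margin = (revenue - cost_of_goods) / revenue                  # 0.60, i.e. 60%
net_margin = (revenue - cost_of_goods - rnd_opex_tax) / revenue     # 0.30, i.e. 30%

print(f"gross margin {gross_margin:.0%}, net margin {net_margin:.0%}")

So a company can plead high build costs all it likes, but if the net figure is still climbing, the extra is going to the bottom line rather than the bill of materials.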

R&D costs might be a consideration, except for one thing - it appears the professional/consumer line split which happened at Maxwell, where Nvidia developed two different lines, i.e. one focused on FP32 workloads (gaming cards) and the other on non-FP32 workloads (commercial), has ended. Now they are starting to go back to the old way of having one single line of GPUs. This alone will help reduce R&D costs, and also the costs of chip tape-out.

This is what one or two said here before (Corky, I believe, was one of them): Nvidia has found a way to shoehorn commercial features into games. Hence they will progressively drop the FP32-based focus which all recent gaming cards up to the present day have had.

Edit!!

Also that rumour of production costs?

Are you regurgitating Wccftech?? It came from there and was something they made up!! :p
Posted by Corky34 - Thu 20 Sep 2018 10:02
Spud1
….but we already know that Nvidia are not making much per sale on these based on the build cost (that's before you think about the R&D costs!).

We do? I didn't know Nvidia had said what the build costs are; I thought it was speculation that they're costing a lot to build, speculation I personally disagree with as the RTXs are just down-binned Voltas that would have ended up in landfill if they hadn't worked out a way to make the unique features of Volta relevant to ‘gamers’.
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 10:08
Corky34
We do? I didn't know Nvidia had said what the build costs are; I thought it was speculation that they're costing a lot to build, speculation I personally disagree with as the RTXs are just down-binned Voltas that would have ended up in landfill if they hadn't worked out a way to make the unique features of Volta relevant to ‘gamers’.

From Wccftech:

https://wccftech.com/nvidias-next-generation-graphics-cards-specifications-pricing-and-nomenclature-details/

No link to sources, so it was probably speculation by them. Remember, it was said Pascal cost a lot of money too, due to the node shrink and expensive GDDR5X, etc., but Nvidia's margins grew too.

Edit!!

Spud1
These are high-end cards with a high-end price tag - not really “mainstream” so to speak, but that's OK imo - the mainstream cards will come later at much cheaper prices.

That is the problem there. Normally a higher-end product does not bother me, but with graphics cards it sets pricing at the lower end.

The RTX2070 will come in at nearly £600 - so unless Nvidia has a sudden £350 gap down to the GTX2060, it's going to be one of two things:
1.) The GTX2060 moves up to closer to £400
2.) They split the 60 series line, so a GTX2060TI, GTX2060, GTX2060SE, etc.

If the current pricing tier holds, the 60 series will eventually be shifted to the £400 mark. The $250 to $400 mark has been where the 70 series has existed for generations.

So all the websites will nicely compare 60 series to 60 series, saying it's a great performance bump, but for most mainstream purchasers, who are more budget-locked, that £200-card-to-£200-card upgrade might not look as hot anymore, as the range has been spread out.

So either stump up the extra cash for a decent upgrade or wait longer.
Posted by Tunnah - Thu 20 Sep 2018 10:16
I put the price increase down to the massive stocks of 10 series cards, not to mention them still being very viable. In generations past, the previous cards tended to be struggling by the time the new ones came out. The 10 series is very much still extremely potent, and for all this talk of “finally a true 4K60 card” the 1080Ti is absolutely fine for 4K.

They have no reason to lower the prices because the old cards are still very much worth their money, so they're treating them as current; instead of a decent performance boost for a modest price increase, as is tradition, it's more like an even higher powered 10 series card for even more money.

That and I definitely feel like we're paying extra for tensor and RTX just so they can get it out the door and work on it in future releases, to get us used to it now.
Posted by Roobubba - Thu 20 Sep 2018 11:37
So it looks like 1440p is now finally going to be playable (ie 120fps+) for most titles. A while to wait for 4K it seems.

:)

But at this price? No chance.
Posted by DanceswithUnix - Thu 20 Sep 2018 11:52
CAT-THE-FIFTH
This is what one or two said here before (Corky, I believe, was one of them): Nvidia has found a way to shoehorn commercial features into games. Hence they will progressively drop the FP32-based focus which all recent gaming cards up to the present day have had.

But the 2080 cards still have an fp32 focus. Volta could do fp64 at half the rate it could do fp32, like you expect from a commercial focused card. The 2080 can do fp64 at 1/32 of the fp32 rate, like you expect from a consumer card.

Whilst I'm sure the tensor cores will be what they developed for commercial users, their inclusion does not make it a commercial card. It just means Nvidia think the feature is worth the sacrifice in silicon area over putting more shaders in. Then there is the ray tracing support; is there any support for that in commercial render engines? Something that gives an iffy quality lighting system good enough for action games isn't likely to impress the likes of Pixar in rendering their latest movie where every pixel should be spot on.

So AFAICS this is a consumer part, probably a Volta with the FP64 stripped out and some ray-trace tech added, giving a slightly smaller die than GV100. Given Volta and Turing are both 12nm products, I wonder if Nvidia have done exactly the same commercial/consumer split as before, just with a staggered release.


As an aside, there was a die shot of a Turing compute unit that implied a quarter of the area was for tensor cores and a quarter for RT, so they could have had twice the shaders if they cut those out and scaled up the number of SMs to fill the space. I have to wonder what that would do to things like the anti-aliasing performance if it could generate sample spots at twice the throughput.
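
To put rough numbers on that fp64 gap (back-of-envelope only - the shader counts and boost clocks below are taken from public spec listings, so treat them as approximate assumptions):

# peak TFLOPS ~= shaders x 2 (one fused multiply-add per clock) x clock in GHz / 1000
def peak_tflops(shaders, boost_ghz, flops_per_clock=2):
    return shaders * flops_per_clock * boost_ghz / 1000.0

rtx2080_fp32 = peak_tflops(2944, 1.8)    # ~10.6 TFLOPS fp32 (assumed FE boost clock)
rtx2080_fp64 = rtx2080_fp32 / 32         # ~0.33 TFLOPS at the 1/32 consumer rate
gv100_fp32 = peak_tflops(5120, 1.53)     # ~15.7 TFLOPS fp32 (assumed Volta boost clock)
gv100_fp64 = gv100_fp32 / 2              # ~7.8 TFLOPS at the 1/2 commercial rate

print(f"2080 fp64 ~{rtx2080_fp64:.2f} TFLOPS vs GV100 fp64 ~{gv100_fp64:.2f} TFLOPS")

That 20-odd times difference in fp64 throughput is why I read Turing as a consumer part rather than a cut of the compute line.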
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 12:06
DanceswithUnix
But the 2080 cards still have an fp32 focus. Volta could do fp64 at half the rate it could do fp32, like you expect from a commercial focused card. The 2080 can do fp64 at 1/32 of the fp32 rate, like you expect from a consumer card.

Whilst I'm sure the tensor cores will be what they developed for commercial users, their inclusion does not make it a commercial card. It just means Nvidia think the feature is worth the sacrifice in silicon area over putting more shaders in. Then there is the ray tracing support; is there any support for that in commercial render engines? Something that gives an iffy quality lighting system good enough for action games isn't likely to impress the likes of Pixar in rendering their latest movie where every pixel should be spot on.

So AFAICS this is a consumer part, probably a Volta with the FP64 stripped out and some ray-trace tech added, giving a slightly smaller die than GV100. Given Volta and Turing are both 12nm products, I wonder if Nvidia have done exactly the same commercial/consumer split as before, just with a staggered release.


As an aside, there was a die shot of a Turing compute unit that implied a quarter of the area was for tensor cores and a quarter for RT, so they could have had twice the shaders if they cut those out and scaled up the number of SMs to fill the space. I have to wonder what that would do to things like the anti-aliasing performance if it could generate sample spots at twice the throughput.

Because you are thinking of old-skool commercial usage though - the commercial AI and RT stuff Nvidia does is also very dependent on other stuff outside FP64. The first cards Nvidia talked about were using Turing for commercial usage, not gaming, and the top bins are commercial cards.

The current large chips also make much more sense for commercial use scenarios than gaming, and it means only one line needs to be developed and that gamers get the rejected bins, which can run at higher TDPs.

If Nvidia had developed Turing with gaming as the focus, not having all that die area for tensor cores and AI stuff would have meant loads more normal shaders, and a much bigger performance bump for normal games.

The fact they have managed to shoehorn more commercially oriented features into games is the genius move, methinks, as they can now re-use lower-bin GPUs in their gaming lines.

Expect a move away from FP32 focus for their gaming cards as their commercial usage areas are not so reliant on it anymore.
Posted by spacein_vader - Thu 20 Sep 2018 12:10
Roobubba
So it looks like 1440p is now finally going to be playable (ie 120fps+) for most titles. A while to wait for 4K it seems.

:)

But at this price? No chance.
What the hell are you playing that requires 120+ FPS to be playable?
Posted by DanceswithUnix - Thu 20 Sep 2018 12:53
CAT-THE-FIFTH
Because you are thinking of old-skool commercial usage though - the AI and RT stuff Nvidia does is also very dependent on other stuff. The first cards Nvidia talked about were using Turing for commercial usage, not gaming, and the top bins are commercial cards.

AI tensor stuff is everywhere, it is already in mass market phones. Frankly games seem to be lagging here. But for professional use, there are dedicated tensor processors which spells the end of using a GPU for those tasks. So that isn't a professional use.

I don't get the ray-tracing. I'm happy for someone to convince me that there are professionals who will lap that up, but I just can't see an example. Feel free to point me at software support that is relevant to professional users.

Now I did Google for OpenCL performance for the 2080 and found one example, the Luxmark Luxball HDR which the 2080ti most impressively monsters. That's nice for the people with that workflow, but they would have been well served by a 1080ti as well so once again I don't see that as indicating this is a professional chip.

https://www.engadget.com/2018/09/19/nvidia-rtx-2080-ti-review/
Posted by Roobubba - Thu 20 Sep 2018 13:03
spacein_vader
Roobubba
So it looks like 1440p is now finally going to be playable (ie 120fps+) for most titles. A while to wait for 4K it seems.

:)

But at this price? No chance.
What the hell are you playing that requires 120+ FPS to be playable?

When I switched from a 60Hz screen to a 144Hz screen, the difference in playability for fast-paced shooters (I was playing Natural Selection 2 competitively at the time) was incredible. 120Hz with a steady fps and strobing is just objectively **so much better** than 60Hz. Can't go back to that slideshow now!
When you turn 180 degrees in a snap, you get twice as many frames during that turn - it makes it MUCH easier to spot either a fast moving enemy or a hiding enemy during the turn.

Though I understand that Freesync/Gsync is a massive boon - not had a chance to try this out yet - so I will concede that I don't know how gaming feels at lower framerates with adaptive refresh compared with my current setup.
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 13:37
DanceswithUnix
AI tensor stuff is everywhere, it is already in mass market phones. Frankly games seem to be lagging here. But for professional use, there are dedicated tensor processors which spells the end of using a GPU for those tasks. So that isn't a professional use.

I don't get the ray-tracing. I'm happy for someone to convince me that there are professionals who will lap that up, but I just can't see an example. Feel free to point me at software support that is relevant to professional users.

Now I did Google for OpenCL performance for the 2080 and found one example, the Luxmark Luxball HDR which the 2080ti most impressively monsters. That's nice for the people with that workflow, but they would have been well served by a 1080ti as well so once again I don't see that as indicating this is a professional chip.

https://www.engadget.com/2018/09/19/nvidia-rtx-2080-ti-review/

But that is the point though - Nvidia is pushing deep learning and ray tracing massively for commercial usage, which is why the top-bin Turing cards are not gaming ones.

You even contradict your own point - if there are dedicated cards using tensor-like cores, then Nvidia putting a bunch of them in a graphics card shows they are serious about that market.

If you watched the Nvidia release talks, they were showing commercial usage far more than gaming, so I see them trying to get into more non-gaming markets.

They obviously are trying to break into SFX markets.
Turing has been developed for Nvidia's non-gaming markets first, and they will shoehorn the use of the tensor and RT dedicated hardware into gaming.

It means less R&D on dedicated FP32 cards too. Fewer lines means lower R&D and tape-out costs.
Nvidia for the last decade has been spending billions to break out of PC gaming, even with Tegra.

The fact is consoles are primarily based around FP32 performance and rasterisation. Even if Nvidia sells a million Turing cards this year, most of the market will be on cards which are built for FP32 and rasterisation. Most games will be developed with that in mind.

So it makes less sense for Nvidia to suddenly decide to add RT and tensor cores just for gaming when only a tiny fraction of games will be using them, and only people on the latest hardware.

Enthusiasts like us are niche, especially when you consider it's MMOs and MOBAs which make up the lion's share of PC gaming revenue, not FPS games. Many of these games have cartoony graphics which scale down better to slow cards and are more CPU-limited, especially as most people still play at 1080p.

It makes far more sense for Nvidia to develop these for higher-margin commercial markets first and then use the runts for us gamers.
Posted by Corky34 - Thu 20 Sep 2018 13:52
DanceswithUnix
So AFAICS this is a consumer part, probably a Volta with the FP64 stripped out and some ray-trace tech added, giving a slightly smaller die than GV100. Given Volta and Turing are both 12nm products, I wonder if Nvidia have done exactly the same commercial/consumer split as before, just with a staggered release.

From what I can tell the RT ‘cores’ are using the same mixed-precision 32-bit-wide ALUs as the rest of the silicon; whether they've divided those ALUs at the hardware level (using different sizes/allocations of registers, caches, and other features) or purely at the software level is another question.

This Anandtech article talks about Tensor ‘cores’ in Volta, but I assume some of what it talks about is transferable to how RT ‘cores’ are designed. If you really want to geek out, the same article also links to some research conducted by Citadel LLC into the design of Volta using microbenchmarking.
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 14:03
https://www.nasdaq.com/article/what-is-ray-tracing-and-why-should-nvidia-investors-care-cm1013070

Gaming is still growing, and live e-sports are especially on the rise as an emerging spectator event.

The really interesting use for ray tracing is show business, though. NVIDIA says the visual effects industry generates $250 billion a year. However, those familiar with visual effects know that ray tracing has been a part of the process for a while already.

Today, digital artists take their time creating scenes and tap the computing power of a server or “rendering farm” to create special effects and computer-generated scenes. The process is expensive (think tens of thousands of dollars, or more, for just a scene), and it can take days or weeks to complete.

NVIDIA's work on GPUs that can handle ray tracing could be a disruptor of that industry, saving filmmakers time and money. It could also open the door for artists to become entrepreneurs in the space once again, putting the heavy-duty computational powers of a server into a compact and affordable package. For investors, though, it means NVIDIA could have found yet another new application for its GPUs. NVIDIA is on pace to easily surpass $12 billion in revenue this year. Entering the $250 billion visual effects industry could go a long way towards unlocking more growth in the near future.
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 17:09
People have noticed an interesting bug in the Bit-tech review.

[TAA and DLSS comparison screenshots]
Look at the blue car. The DLSS image looks blurrier too. A bug?

Edit!!

https://bit-tech.net/reviews/tech/graphics/nvidia-geforce-rtx-2080-ti-and-rtx-2080-founders-edition-reviews/11/

Hmm, not sure what they are saying now.

In this instance, DLSS, despite being advertised as only having similar quality to TAA, was always the better method for image quality. You do have to look for the finer details, such as complicated foliage in the background, the creases of clothing, or hair, but in doing so you invariably find a better image with DLSS than with TAA (also confirmed by a few quick blind tests with nearby staff members). For example, sharpness would often be improved over TAA, which could sometimes suffer from a slight blur effect in comparison, and there were fewer artefacts on the DLSS side too. Occasionally, edges looked a touch jaggier with DLSS, but it remained the clear winner overall. There are some examples below: In each pair, the top images are captured using TAA and the bottom images with DLSS. We suggest opening them in different tabs or downloading and saving them, then zooming in to spot differences in the areas we mentioned.

?
Posted by Corky34 - Thu 20 Sep 2018 17:31
And that's on a fixed-run benchmark that I assume they've had plenty of time to work on; it raises an interesting question.

What happens if a game is modded/patched and new assets are added? Could you fool the NN into thinking a dog is a banana, like they did on BBC4's look at AI, or something equally weird?
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 17:40
Luckily someone on YT did a side by side comparison with an RTX2080:

https://youtu.be/Y_usUAXRnGg?t=91

I think BT mixed up the images TBH.

The first image is with DLSS and the second image is with TAA. The TAA image has more rust on the car, and the DLSS one seems to have less when the main car drives in, but seems OK when the characters get out of the car. The DLSS image has more apparent detail but worse edges. The BT review is using an RTX2080TI, so I suspect it might be some kind of bug which is more apparent on the RTX2080TI. I suspect the RTX2080TI is pushing more FPS than an RTX2080, so it's putting more demands on the tensor cores to do the reconstruction work, so there are more bugs.

Also, on OcUK someone pointed this out:

https://youtu.be/Y_usUAXRnGg?t=21

The face has slightly different details too.
Posted by badass - Thu 20 Sep 2018 18:10
Tunnah
Genuine question: historically speaking, has a new product ever charged the consumer in proportion to the performance gain over the previous product before?

I'm having a hard time thinking of anything. Normally the new product gives you a performance boost, for a generally acceptable price increase over the previous product. It's never “this is 40% faster, so it's 40% more expensive” (it's actually more than that in this case).

I know we're past the good ol' days of getting the new gen at the price of the previous model (the 200 to 600 series were the same price! Near enough, anyway) but this price increase is disgusting.

Yes. Every time. For at least 18 years. Newer generations of Graphics cards have been creeping up in price for the top models for that long. Even taking into account inflation.

I first noticed this when the Geforce 2 Ultra was released for £400

It's just that internet forum wisdom for computer parts is about as good as for cars. i.e. mixed. Like the general impression that when a new range of graphics cards are released, the old ones miraculously drop in price. Actually, the price has been gradually dropping since release.

Secondly, on the price increase being disgusting, the 2000 series cards use both a more expensive process and larger dies. However, they are simply charging what they believe the market will bear.
It's not a charity, nor is it essential. It's a hugely expensive hobby.
Posted by CAT-THE-FIFTH - Thu 20 Sep 2018 18:39
badass
Yes. Every time. For at least 18 years. Newer generations of Graphics cards have been creeping up in price for the top models for that long. Even taking into account inflation.

I first noticed this when the Geforce 2 Ultra was released for £400

It's just that internet forum wisdom for computer parts is about as good as for cars. i.e. mixed. Like the general impression that when a new range of graphics cards are released, the old ones miraculously drop in price. Actually, the price has been gradually dropping since release.

Secondly, on the price increase being disgusting, the 2000 series cards use both a more expensive process and larger dies. However, they are simply charging what they believe the market will bear.
It's not a charity, nor is it essential. It's a hugely expensive hobby.

Actually 12nm is more or less the same as 16nm, but with lower leakage:

https://en.wikichip.org/wiki/16_nm_lithography_process

In late 2016 TSMC announced a “12nm” process (e.g. 12FFC) which uses similar design rules to the 16nm node but a tighter metal pitch, providing a slight density improvement. The enhanced process is said to feature lower leakage, better cost characteristics and perhaps a better name (vs. “14nm”). 12nm is expected to enter mass production in late 2017.

TSMC lists 16nm/12nm as the same node level. If anything, one could argue that when Pascal was first released, 16nm was somewhat less mature and more congested due to phone companies, etc., wanting to make new chips on it.
Posted by utopic - Thu 20 Sep 2018 19:07
kalniel
Spud1
I get that people don't like the price increases, but we already know that Nvidia are not making much per sale on these based on the build cost (that's before you think about the R&D costs!).

I thought margins had been continually increasing?

Preposterous, the margins are not increasing. If anything, they're not making money, they're losing money. Each time you buy one of their cards, they're practically pushing money into your pockets. The more they sell, the less money they have; it's a sort of reverse sales process.

The reason they're making more $$$ than Intel must be that they found a golden-egg-laying goose, which they've put hard at work, to make up for this charity of an operation. /:|
Posted by watercooled - Thu 20 Sep 2018 22:13
I see plenty of people claiming margin reductions etc without any substantiation whatsoever? Speculation at WCCF doesn't count as that, as much as fanboys will latch on to any such nonsense as justification.

12nm is not a ‘more expensive node’ vs 16nm - it's close to being the same thing, and given the maturity it will be significantly cheaper now than 16nm was on launch. Yes, it will cost more per die to produce vs Pascal given its much larger size, but let's not pretend it's close to being sold at a loss or that Nvidia are being in any way charitable. Pascal will currently have huge margins.

And I don't think it's anything other than childish to dismiss criticism as people ‘hating on it’, especially in response to people making outright false statements or nonsensical value comparisons. It's currently a poor value upgrade over Pascal, and it's only really an upgrade at all if you're going with the 2080Ti given the 2080 is ≈ 1080Ti performance (and more expensive), especially given Pascal seems to overclock more readily.

As I said originally, my main surprise is how meh it looks in reviews, I was expecting at least a few half-decent tech demos to sell the price increase to the more gullible of the beta-testers!

Any speculation about the ray tracing stuff is just that at this point in time - speculation. If you're happy to take a ticket for that, fill your boots. I suspect most will wait until they know what they're actually buying.
Posted by CAT-THE-FIFTH - Fri 21 Sep 2018 01:33
watercooled
I see plenty of people claiming margin reductions etc without any substantiation whatsoever? Speculation at WCCF doesn't count as that, as much as fanboys will latch on to any such nonsense as justification.

12nm is not a ‘more expensive node’ vs 16nm - it's close to being the same thing, and given the maturity it will be significantly cheaper now than 16nm was on launch. Yes, it will cost more per die to produce vs Pascal given its much larger size, but let's not pretend it's close to being sold at a loss or that Nvidia are being in any way charitable. Pascal will currently have huge margins.

And I don't think it's anything other than childish to dismiss criticism as people ‘hating on it’, especially in response to people making outright false statements or nonsensical value comparisons. It's currently a poor value upgrade over Pascal, and it's only really an upgrade at all if you're going with the 2080Ti given the 2080 is ≈ 1080Ti performance (and more expensive), especially given Pascal seems to overclock more readily.

As I said originally, my main surprise is how meh it looks in reviews, I was expecting at least a few half-decent tech demos to sell the price increase to the more gullible of the beta-testers!

Any speculation about the ray tracing stuff is just that at this point in time - speculation. If you're happy to take a ticket for that, fill your boots. I suspect most will wait until they know what they're actually buying.

You'd better not watch the DF review - it's expensive but…but…but phones cost more and it must cost more to make, but OFC ignoring Nvidia making 40% net margins (not gross) and making Intel look a bit lame in that regard. It also ignores the fact some people use a phone as their primary device for all their computing, gaming and photographic needs, whereas a graphics card is somewhat more niche. Parts of the tech press have for too long just justified the price increases at every generation for lots of tech, and seem to be more concerned about the welfare of companies (not so much the consumers), so that more and more less well-read people think it's “justified”, just like with phones, or that microtransactions are normal, etc.

Sadly we might as well get used to it, as it is going to get worse methinks, especially with the fact credit is still easy to get. The best thing is to vote with your wallet. No doubt people will still throw money at these things, and as time progresses companies will push prices higher and higher and people will start being priced out of getting new cards, and so on. At some point, a time will come when things get pushed too far and the money tree ends.
Posted by Tunnah - Fri 21 Sep 2018 02:09
CAT-THE-FIFTH
You'd better not watch the DF review - it's expensive but…but…but phones cost more and it must cost more to make, but OFC ignoring Nvidia making 40% net margins (not gross) and making Intel look a bit lame in that regard. It also ignores the fact some people use a phone as their primary device for all their computing, gaming and photographic needs, whereas a graphics card is somewhat more niche. Parts of the tech press have for too long just justified the price increases at every generation for lots of tech, and seem to be more concerned about the welfare of companies (not so much the consumers), so that more and more less well-read people think it's “justified”, just like with phones, or that microtransactions are normal, etc.

Sadly we might as well get used to it, as it is going to get worse methinks, especially with the fact credit is still easy to get. The best thing is to vote with your wallet. No doubt people will still throw money at these things, and as time progresses companies will push prices higher and higher and people will start being priced out of getting new cards, and so on. At some point, a time will come when things get pushed too far and the money tree ends.

My favourite thing is that people believe a company would reduce their margins to give the consumer more of a bargain lol.

If it cost more, you'd be paying more, that's a fact. And if it ACTUALLY cost more, you'd know about it due to the millions of press releases, tech talking points, and straight up propaganda videos explaining why they have to charge you so much.
Posted by CAT-THE-FIFTH - Fri 21 Sep 2018 02:15
Tunnah
My favourite thing is that people believe a company would reduce their margins to give the consumer more of a bargain lol.

If it cost more, you'd be paying more, that's a fact. And if it ACTUALLY cost more, you'd know about it due to the millions of press releases, tech talking points, and straight up propaganda videos explaining why they have to charge you so much.

It's a load of bollocks if you look at the same excuses made years ago on tech forums and in the tech press. Companies seed these things to increase margins - that costs are high, etc., so prices need to go up or margins will be even lower.

I argued with so many internet experts who told me I was wrong when I said margins would go up and so would revenue, to record levels.

It's like if the cost of flour went up 10% and the baker charged you 40% more for a loaf of bread - little white lies can be helpful.

I was right over 5 years ago and I am still right now.

Unless you work for a company or financially benefit from them, a consumer should not give a flying fig about maintaining margins beyond a company staying afloat and being profitable.

But in the real world people are not even that charitable.

Nobody in the real world cares what margins Tesco makes as long as they can afford to keep food on the table.

Nobody cares about the margins of high-street retailers when buying off the internet. This has meant so many jobs are being lost, despite shops probably making worse margins, as they have to employ more people, pay for premises, pay more taxes, rates, etc., while internet retailers like Amazon use legal loopholes to avoid more tax. A shop cannot compete on price due to the higher fixed costs.

Most consumers in the real world care less about companies and more about themselves.
Posted by DanceswithUnix - Fri 21 Sep 2018 08:15
Corky34
This Anandtech article talks about Tensor ‘cores’ in Volta, but I assume some of what it talks about is transferable to how RT ‘cores’ are designed. If you really want to geek out, the same article also links to some research conducted by Citadel LLC into the design of Volta using microbenchmarking.

Thanks, I'll have to dig that out.

That article makes it sound like the Tensor cores are a dedicated function, which makes some sense. AI isn't that sensitive to calculation precision, but data sets can be huge and some calculations are bandwidth limited making fp16 twice as fast for a given bandwidth and potentially faster if you avoid paging training data on and off the card because your vram can store twice as much compared with fp32. So tensor operations really want fp16; but frankly nothing else does. Adding fp16 support to the fp32 and fp64 cores makes them more complex and so potentially slows them down when nothing else uses fp16. I can see reasoning for this being a dedicated unit.

The AI anti-aliasing implies the tensor arithmetic isn't stomping on the shader performance too much; being outside the main fp32 pipe would probably help that.

The bit I don't get is the rumours that lower end cards won't have AI or RT support. A mid range card with half the bandwidth of a 2080 will have half the shaders serving a lower resolution with half the pixels, so RT support can be scaled in line with that. AI is a little more tricky, it isn't so useful to have half a brain (I've worked with people like that ;) ) but if AI is designed to run on a cpu you could still have the opportunity to offload that.
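
Going back to the fp16 point above, here's a trivial sketch of the arithmetic (just CUDA's cuda_fp16 header; the 64M element count is purely illustrative, not from anything real) showing why halving the element width halves both what you hold in vram and the bytes you have to stream for the same data:

    // Rough sketch: the same element count stored as fp32 vs fp16.
    // Byte counts (and hence the bandwidth needed to stream them) differ by exactly 2x.
    #include <cuda_fp16.h>
    #include <cstdio>

    int main() {
        const size_t n = 64UL * 1024 * 1024;  // 64M elements, purely illustrative
        std::printf("fp32 buffer: %zu MiB\n", n * sizeof(float)  / (1024 * 1024)); // 256 MiB
        std::printf("fp16 buffer: %zu MiB\n", n * sizeof(__half) / (1024 * 1024)); // 128 MiB
        return 0;
    }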
Posted by badass - Fri 21 Sep 2018 10:25
watercooled
I see plenty of people claiming margin reductions etc without any substantiation whatsoever? Speculation at WCCF doesn't count as that, as much as fanboys will latch on to any such nonsense as justification.
Agreed. Nvidia's margins are clearly increasing. A simple look at their financial statements proves that.
12nm is not a ‘more expensive node’ vs 16nm - it's close to being the same thing, and given the maturity it will be significantly cheaper now than 16nm was on launch. Yes, it will cost more per die to produce vs Pascal given its much larger size, but let's not pretend it's close to being sold at a loss or that Nvidia are being in any way charitable. Pascal will currently have huge margins.
Not sure I agree with that - simply because I think it's bad business to sell capacity on a higher performance, lower power, lower area per transistor process for the same price. However, let's just say perhaps it does.
And I don't think it's anything other than childish to dismiss criticism as people ‘hating on it’, especially in response to people making outright false statements or nonsensical value comparisons. It's currently a poor value upgrade over Pascal, and it's only really an upgrade at all if you're going with the 2080Ti given the 2080 is ≈ 1080Ti performance (and more expensive), especially given Pascal seems to overclock more readily.
Remember value is a personal thing. If a highly paid individual can pay a few hundred more for a device that saves 40% of their time, then it is exceptional value to them.
To me, personally I see any card costing over £300 as obscenely expensive so I regard the RTX 2000 series as exceptionally poor value. But I can see how someone with lots of money and nothing better to spend it on may see the substantial performance jump for a similarly substantial increase in cost as good value. Particularly if they just must have the current thing that produces the longest bars in a load of bar graphs.
As I said originally, my main surprise is how meh it looks in reviews, I was expecting at least a few half-decent tech demos to sell the price increase to the more gullible of the beta-testers!

Any speculation about the ray tracing stuff is just that at this point in time - speculation. If you're happy to take a ticket for that, fill your boots. I suspect most will wait until they know what they're actually buying.

However I have seen others call Nvidia's prices for these cards “disgusting” and other such assertions. In reality it's just good business. The 2000 series cards are more costly to produce (looking at die size alone, notwithstanding possible increases in the cost of the process). They have created a competitive moat by having the fastest cards money can buy, so they get to enjoy fat margins. If the market will bear increased prices (which it clearly will) they would be doing a disservice to their shareholders by not charging what they can.

Gamers whinging about this need to learn that this is the real world. Imagined pricing curves that suit their worldview never happened. The only way for Nvidia's margins to take a hit is for demand to be reduced, either through competition or simply reduced demand for the performance on offer at the price they charge.

If Vega had held records for performance on release rather than trading blows with the GTX 1080, does anyone think the price would have been the same? Or more?
Posted by CAT-THE-FIFTH - Fri 21 Sep 2018 10:38
But people moaning about people moaning that the price is high is even worse. In the real world, switch on the TV and watch the news. People moan about price increases all the time - nobody in the real world cares, outside enthusiasts on tech forums who seem to just want everyone to keep quiet about companies since they seem terrified they might offend them.

Nobody in the real world cares what margins Tesco makes as long as they can afford to keep food on the table.

Nobody cares about the margins of high street retailers when buying off the internet. This has meant so many jobs are being lost, despite shops probably making worse margins as they have to employ more people, pay for premises, pay more taxes, rates, etc., while internet retailers like Amazon use legal loopholes to avoid more tax. A shop cannot compete on price due to the higher fixed costs.

Look at people moaning about fuel going up, rail fares going up, etc. Maybe there are real reasons why these increases are needed, but guess what?? People don't care.

Most consumers in the real world care less about companies and more about themselves. That is the REAL WORLD.

In fact on this very forum the hypocrisy is terrible - people justifying that US tech companies should charge more, whilst moaning that Marmite went up. Last time I checked Marmite was made in the UK, and the company which makes it employs far more people here than, say, Intel, Nvidia or AMD.

Where is all the justification for people to buy British and pay more for British?? Yeah, but for some product made for relative peanuts in China, sure, let's think of a poor company and its margins, and have most of that money flow out of the country.

It's the same thinking the supporters of microtransactions had - they attacked people who criticised them, including famous YT channels, saying games cost more to make, shareholders needed to be considered and people should just suffer in silence, like sycophants. Yet when EA went too far, people moaned plenty, boycotted the game, and EA then turned around to investors saying they didn't need microtransactions to make decent money from the game, which made all the excuse makers just clam up.

Companies should know exactly what people think of them, just as people should have no fear of criticising politicians.

Edit!!

Let's look at tech forum logic - the Nvidia GTX970 mislabelling. People should have just shut up about that - what did all the complaining lead to?? Eventually Nvidia having to correctly label their products, and having to pay $30 back to many GTX970 buyers after a court agreed they had mislabelled the product:

https://www.pcgamer.com/uk/heres-how-to-claim-your-30-nvidia-geforce-gtx-970-settlement/
Posted by kalniel - Fri 21 Sep 2018 10:51
People moaning about moaning is even worse ;) Then there's the moaning about the moaning about the moaning etc..

Why not simply stop commenting about other posters, let's keep the discussion to the tech and tech companies.
Posted by Corky34 - Fri 21 Sep 2018 11:19
DanceswithUnix
That article makes it sound like the Tensor cores are a dedicated function, which makes some sense. AI isn't that sensitive to calculation precision, but data sets can be huge and some calculations are bandwidth limited making fp16 twice as fast for a given bandwidth and potentially faster if you avoid paging training data on and off the card because your vram can store twice as much compared with fp32. So tensor operations really want fp16; but frankly nothing else does. Adding fp16 support to the fp32 and fp64 cores makes them more complex and so potentially slows them down when nothing else uses fp16. I can see reasoning for this being a dedicated unit.

AFAIK all the ‘cores’ within Turing/Volta are dedicated function units, that is, they're all mixed precision arithmetic logic units. What I'm less sure about is the width of those units (INT/FP 64, 32, 16, 8, 4). I can't remember if it's in the Volta/Turing white papers or the research paper published by Citadel LLC (or exactly where I read it), but IIRC the data that makes up the tensor matrices (a 4x4 grid of 4 bits of data, 2x2 of 8 bits, etc.) is gathered from register files and assembled into a single FP/INT 16/32 bit data block that's then processed by a 32-bit MP ALU.

If I get the time I'll try to confirm that, but having recently tried to explain something similar on another forum and it descending into an argument because the person I was discussing it with couldn't understand that there's a big difference between what something does and how it does it, it's probably best not to hold your breath. :)

EDIT: It was in the research paper published by Citadel LLC (PDF Link). It seems I got it a little wrong, as from what I can tell the matrix loads are generated by NVCC and performed at run time; others (with more knowledge than I) may have a different reading of it.

Before invoking the matrix multiplication, programmers must load data from memory into registers with primitive wmma::load_matrix_sync, explicitly. The NVCC compiler translates that primitive into multiple memory load instructions. At run time, every thread loads 16 elements from matrix A and 16 elements from B.

There are also other references in the paper that talk about how the matrices are constructed, so it's worth a read IMO. :)
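
If anyone wants to see roughly what that primitive looks like in practice, here's a minimal sketch - not lifted from the paper, just my reading of the standard CUDA WMMA API, so treat the details as approximate - of one warp doing a single 16x16x16 tile: fp16 inputs, fp32 accumulation, with the load_matrix_sync call the paper describes:

    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    // Minimal sketch: one warp multiplies a 16x16 fp16 tile of A by a 16x16 tile of B
    // and accumulates into an fp32 tile. A is assumed row-major, B column-major, and
    // the leading dimension of 16 is only because the example matrices are one tile big.
    __global__ void tile_mma(const half *a, const half *b, float *c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::load_matrix_sync(a_frag, a, 16);  // the primitive from the paper: NVCC turns
        wmma::load_matrix_sync(b_frag, b, 16);  // these into the per-thread loads it mentions
        wmma::fill_fragment(c_frag, 0.0f);

        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // D = A*B + C on the tensor cores
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }

You'd launch it with a single warp per tile; bigger matrices are just lots of these tiles stitched together.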
Posted by CAT-THE-FIFTH - Fri 21 Sep 2018 11:29
kalniel
People moaning about moaning is even worse ;) Then there's the moaning about the moaning about the moaning etc..

Why not simply stop commenting about other posters, let's keep the discussion to the tech and tech companies.

It's Friday…thank goodness the weekend is coming.
Posted by aidanjt - Fri 21 Sep 2018 11:30
badass
Gamers whinging about this need to learn that this is the real world.

You mean the target market of GeForce cards?
Posted by CAT-THE-FIFTH - Fri 21 Sep 2018 13:15
BTW, as a side note I have started a new thread for raytracing just to talk about the tech itself and any implementations (similar to the one I started over on OcUK too):

https://forums.hexus.net/pc-hardware-components/395162-raytracing-thread.html#post4013891

So any contributions will be greatly valued.
Posted by badass - Fri 21 Sep 2018 17:44
aidanjt
You mean the target market of GeForce cards?

Yep. The target market that moans about a company's behaviour then buys from them anyway.
Posted by badass - Fri 21 Sep 2018 18:00
CAT-THE-FIFTH
But people moaning about people moaning that the price is high is even worse.

True. However moaning about moaning about moaning is worse than moaning about moaning. Moaning about moaning is worse than just moaning.
Posted by badass - Fri 21 Sep 2018 18:01
badass
True. However moaning about moaning about moaning is worse than moaning about moaning. Moaning about moaning is worse than just moaning.

And don't get me started about moaning about moaning about moaning about moaning. That's worse than moaning about moaning about moaning which is worse than moaning about moaning. Moaning about moaning is worse than just moaning.
Posted by watercooled - Fri 21 Sep 2018 18:22
badass
Not sure I agree with that - simply because I think it's bad business to sell capacity on a higher performance, lower power, lower area per transistor process for the same price. However, let's just say perhaps it does.
Fair point - I would imagine it's somewhat more expensive than the vanilla 16nm node now, comparing like for like, but I doubt it's substantially more and I would estimate (speculation) it's still cheaper than 16nm was on release. In my mind I was dismissing that it was as expensive as an entirely new cutting-edge node would be, which is somehow how I read your comment!

Agree with you on the rest.
Posted by watercooled - Fri 21 Sep 2018 18:26
Yep. The target market that moans about a company's behaviour then buys from them anyway.
And furthermore directly blame AMD for the situation. Bemoaning the lack of competition is one thing but they tend to be the same people who dismiss anything made by AMD anyway, then blame them when they focus on other market segments instead!
Posted by CAT-THE-FIFTH - Fri 21 Sep 2018 18:31
watercooled
And furthermore directly blame AMD for the situation. Bemoaning the lack of competition is one thing but they tend to be the same people who dismiss anything made by AMD anyway, then blame them when they focus on other market segments instead!

What is even worse is the same gamers arguing that cards need to cost more. Then they moan when companies say oh sure, BRB…and push up the prices, since their potential customers seem to be justifying price increases. Then they bemoan the prices as being high.

Then the same gamers say they won't spend the extra money, but end up getting a worse, higher-margin card even if they don't spend more. The card which might have been profitable at £200 is now £300, since people said such cards needed to be priced higher.

They are literally repeating what Rollo said without realising it - that stuff needed to cost more, that it's expensive to develop, etc. I saw him on loads of forums before Hexus making those arguments. Now people are just repeating what the NV FCG were repeating for years!!

Last time I checked it was marketing which was meant to do that, not customers. It's like turkeys voting for Christmas.

If I were a company I would charge as much as possible, and like I said many years ago, when all the experts argued with me, this would happen - and it happened. The same people are now bemoaning the increases. BAHAHAHHAAHA!

I wonder if British Gas could get away with it?;)
Posted by watercooled - Fri 21 Sep 2018 18:46
TBH I suspect the current competitive landscape in the GPU market is what has allowed Nvidia to take this sort of a risk - if AMD currently had a product competitive with the 1080Ti in current games, and for a reasonable price, it would be even harder for Nvidia to convince people this is a worthwhile purchase at the asking price.

Given GPU development times we likely would have seen something similar sooner or later, but I've heard people suggest it wasn't originally intended to be a 16/12nm product? Maybe that has something to do with it?* If so, Nvidia really got lucky, as any competition would likely have eaten well into their margins with such large dies, which is probably at least partly why they're shipping so many die variants this generation - past a point, die fusing probably doesn't make as much sense, especially on a fairly mature process.

*Then again, architecturally it seems very similar to Vega apart from the RTX cores, so in theory development time should be somewhat shorter if so.

/speculation

My point being, I can't see how it would have sold well at close to current prices in a more competitive environment. Either they got really lucky or there's far more breathing room in the margins than people seem to believe.

But again, you can't blame the competition for not competing, when you don't consider them even when they are actually competitive, and/or split hairs over whatever the latest marketing buzzword happens to be.
Posted by CAT-THE-FIFTH - Fri 21 Sep 2018 18:57
watercooled
TBH I suspect the current competitive landscape in the GPU market is what has allowed Nvidia to take this sort of a risk - if AMD currently had a product competitive with the 1080Ti in current games, and for a reasonable price, it would be even harder for Nvidia to convince people this is a worthwhile purchase at the asking price.

Given GPU development times we likely would have seen something similar sooner or later, but I've heard people suggest it wasn't originally intended to be a 16/12nm product? Maybe that has something to do with it? If so, Nvidia really got lucky, as any competition would likely have eaten well into their margins with such large dies, which is probably at least partly why they're shipping so many die variants this generation - past a point, die fusing probably doesn't make as much sense, especially on a fairly mature process.

/speculation

But again, you can't blame the competition for not competing, when you don't consider them even when they are actually competitive, and/or split hairs over whatever the latest marketing buzzword happens to be.

The problem is, WC, the excuse makers years ago thought they were being clever when they looked at certain very conveniently “leaked” documents and then said on forums that things cost more, so it was justified that companies added more tiers, etc., and pushed pricing up massively. I told those experts to be wary and they still argued, and I told them to wait and see. Year on year, revenue and margins went up. Yeah, they were wrong and I was right, since it's a trick used elsewhere. Marketing is far more clever than people think, and market research has moved on a lot.

I mean it's not only Nvidia either - Intel also had people on forums saying competition was BAD.

Also, expecting AMD to save anyone is not realistic. If AMD themselves can see that Nvidia, using simple marketing, could make its own consumers believe 5+ years ago that they needed to be charged more due to “costs”, then do you honestly think AMD is going to think, wait a second, we should massively undercut them?? Why did AMD say they didn't want to be the “budget brand”? Because the turkeys voted for Christmas.

Why use a tactic of massively undercutting Nvidia?? It didn't work before, and now so-called gamers are literally parroting the same stuff Rollo was saying 10+ years ago about paying more for stuff, so it's happy days.

Instead of your own customers wanting better pricing, they just gave you a blank cheque to put prices up. So what did they expect…prices to drop?? LMAO.

Polaris wasn't another HD4870 moment - AMD just poked Nvidia slightly so as not to provoke too much of a price war. Navi will be like Polaris - a gentle poke. Maybe a bit cheaper, but why bother being “too” cheap.

Why should they??

I knew this would happen years ago, and I wished to be wrong, yet I was sadly right. People asked for this to happen, really.

What to do? Even if I keep within my own budget range, I am still enabling them (the alternative being to stop playing any newish games long-term).

Edit!!

General comment

Oh, also something else - in general I have seen the view that a lack of competition means massively higher prices are justified. If market research sees that, then guess what a company is going to do??

Well, if that is really the case then think back to the days of the 8800GT. ATI had the craptastic 2900XT which barely competed with the 8800GTS 640MB. Fast forward under a year, and ATI just made another, cheaper 2900XT in the form of the HD3870. Nvidia responded with the 8800GT. Nvidia was utterly dominant.



The HD3870 was slower than the 8800GT 512MB, 8800GTS 512MB, 8800GTX and 8800 Ultra, barely quicker than an 8800GTS 640MB, and traded blows with the 9600GT and 8800GT 256MB. ATI was probably more screwed competitively than AMD is now. They could barely get over £150 for a single GPU card!!

So 4 faster cards, and 2 cards which traded blows.

Yet the 8800GT 512MB was incredible value for money. If this were 2018, something like the 8800GT/8800GTS 512MB would have been priced much higher, because “costs” and “lack of competition”.

That should have no bearing on how much YOU as a consumer pay for any product. That is the company's problem TBH.
Posted by watercooled - Fri 21 Sep 2018 23:42
The state of the ‘enthusiast’ PC market is quite pathetic in more than a few areas now. It seems to be less about interest in hardware, engineering and actually thinking about a build, and more about just throwing as much money at the problem as possible like it's some sort of designer fashion accessory to brag about, and filling the result with hideous RGB lighting, hiding anything interesting-looking in the process.

But like I said, it comes to something when even sites I'd typically associate with being quite Nvidia-apologetic are finding it hard to say anything objectively positive about this release, forums in general seem to have little nice to say, and even share price has taken a dip. It's all a bit silly going ‘but but but raytracing… revolution… something’ when there's literally nothing to back it up. Sorry for sounding like a broken record but I'm just so surprised the reviews didn't carry anything to sell the big ‘selling’ features of the card - the things which supposedly make it worth the money. I was really anticipating reading some stuff about it and was ready to pick apart what was actually being benchmarked, where apples were being compared to oranges, etc. It's all just turned out a bit boring really.

I mean sure I would most likely have been ranting about something different then, but at least there would have been something new and interesting to discuss about the technology rather than just pricing and its justification!
Posted by DanceswithUnix - Sat 22 Sep 2018 07:06
Corky34
There's also other references in the paper that talks about how matrices are constructed so it's worth a read IMO. :)

Ta, I will try and have a proper read, but from a quick skim through the tensor part it looks interesting.

As for width, I got the impression that pre-Volta the ALU was mixed fp32+fp64+int but now the integer processing is stripped out into its own execution unit so address calculations can be done in parallel with the floating point. ISTR that fp16 was removed from gpus early on, around Riva TNT, and no-one missed it until now. I think in Vega AMD put the data type back in, but a dedicated fp16 matrix multiply-accumulate would seem sensible.

The bit I noticed from that pdf was the huge matrix size required to hit high performance, when AIUI most AI just uses 4x4 multiplies.

Edit: As for people complaining about cost of these cards, don't ever get into performance cars. Yachting is right out :D
Posted by Corky34 - Sat 22 Sep 2018 08:01
DanceswithUnix
As for width, I got the impression that pre-Volta the ALU was mixed fp32+fp64+int but now the integer processing is stripped out into its own execution unit so address calculations can be done in parallel with the floating point. ISTR that fp16 was removed from gpus early on, around Riva TNT, and no-one missed it until now. I think in Vega AMD put the data type back in, but a dedicated fp16 matrix multiply-accumulate would seem sensible.

Yeah it was, however I suspect that stripping out into individual units of FP and INT is done at the factory, possibly via firmware, as it doesn't seem logical to take what used to be a mixed precision unit and redesign it to only do INT or FP; from a failure POV it seems more logical to decide if a unit is going to do FP or INT after fabrication.

The FP/INT 32/64 part AFAIK appears to be something that's fixed at design and fabrication time; the width of a unit is a physical thing and ideally you want the width of the unit to match the width of the data going through it. While an MP INT/FP 64 unit can do INT/FP 32 work, it's a waste of silicon and power.

DanceswithUnix
The bit I noticed from that pdf was the huge matrix size required to hit high performance, when AIUI most AI just uses 4x4 multiplies.

It does, but even a 4x4 grid (AFAIK a grid can also do +, -, and / on a per-grid basis) consists of 16 individual 2-digit numbers (32 bits), or any variation that results in either 16 or 32 bits (there's also 64 bits but that's more for the professional cards); at least that's my understanding and I'd welcome the input from someone with more knowledge of Tensor programming.
Posted by Jace007 - Sat 22 Sep 2018 18:49
Nope, I think I won't buy a GPU for at least 3 years. Just got myself a PS4. Gaming on PC is expensive, trying to keep up with the latest tech only for a handful of really decent AAA games to show up after waiting years. Thanks but no thanks, I'm out.
Posted by DanceswithUnix - Sun 23 Sep 2018 11:36
Corky34
Yeah it was, however I suspect that stripping out into individual units of FP and INT is done at the factory, possibly via firmware, as it doesn't seem logical to take what used to be a mixed precision unit and redesign it to only do INT or FP; from a failure POV it seems more logical to decide if a unit is going to do FP or INT after fabrication.

You would never disable that at manufacture; they are designed like that. Or to put it another way, fp circuitry is big, so you wouldn't want any of it sitting idle with only an int part enabled.

The int and fp logic are not the same, but by combining them you save instruction decoding logic etc. So they have decided to spend a few transistors to make use of int instructions being lower latency rather than having them go down the same long pipe as the floating point ops. Sounds like a marginal improvement at best, but I'm sure simulations would have shown an improvement for them to make the change.

The FP/INT 32/64 part AFAIK appears to be something that's fixed at design and fabrication time; the width of a unit is a physical thing and ideally you want the width of the unit to match the width of the data going through it. While an MP INT/FP 64 unit can do INT/FP 32 work, it's a waste of silicon and power.



It does, but even a 4x4 grid (AFAIK a grid can also do +, -, and / on a per-grid basis) consists of 16 individual 2-digit numbers (32 bits), or any variation that results in either 16 or 32 bits (there's also 64 bits but that's more for the professional cards); at least that's my understanding and I'd welcome the input from someone with more knowledge of Tensor programming.

I believe the width of the unit is 64 bits. You do a pair of 32 bit operations, or 4 lots of 16 bit operations. Hence in the fully 64 bit enabled Volta parts they can do fp64 at half the rate of fp32.

The tensor calculation is a simple 4x4 multiply with accumulate, so you are performing 16 lots of 4 multiplies but instead of storing the result you add them into the destination with a choice of using fp16 or fp32 for the result.
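
Spelled out naively - ignoring how the hardware actually schedules it, and with plain floats for readability rather than fp16 - the accumulate part looks something like this sketch:

    // Naive sketch of a 4x4 multiply-accumulate: 16 result elements, 4 multiplies each,
    // added into the destination rather than overwriting it.
    void mma4x4(const float A[4][4], const float B[4][4], float D[4][4]) {
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    D[i][j] += A[i][k] * B[k][j];
    }

The tensor core's trick is doing all of those multiply-adds as one operation rather than looping.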
Posted by Corky34 - Sun 23 Sep 2018 19:20
DanceswithUnix
You would never disable that at manufacture; they are designed like that. Or to put it another way, fp circuitry is big, so you wouldn't want any of it sitting idle with only an int part enabled.

The int and fp logic are not the same, but by combining them you save instruction decoding logic etc. So they have decided to spend a few transistors to make use of int instructions being lower latency rather than having them go down the same long pipe as the floating point ops. Sounds like a marginal improvement at best, but I'm sure simulations would have shown an improvement for them to make the change.

So when I read that a CUDA core executes a floating point or integer instruction per clock for a thread, that means Nvidia used to include both FP and INT ALUs in each CUDA ‘core’ (at least in the notional definition of a CUDA ‘core’)? In other words, a CUDA ‘core’ could only perform one or the other type per clock for a thread.

Since Volta they still have those same separate FP & INT units, but because of changes in the way the registers and cache work they can now address both units concurrently? In essence, for each clock the notional idea of a CUDA ‘core’ can now run two threads, one for FP and another for INT?

DanceswithUnix
I believe the width of the unit is 64 bits. You do a pair of 32 bit operations, or 4 lots of 16 bit operations. Hence in the fully 64 bit enabled Volta parts they can do fp64 at half the rate of fp32.

You mean the width of the notional idea of what a Tensor ‘core’ is? If so that does seem to make more sense as that would mean all those FP64 ‘cores’ on professional cards have been repurposed for Tensor ‘cores’ on consumer cards.

DanceswithUnix
The tensor calculation is a simple 4x4 multiply with accumulate, so you are performing 16 lots of 4 multiplies but instead of storing the result you add them into the destination with a choice of using fp16 or fp32 for the result.

AFAIK the type (multiply, addition, square root, subtraction) and size (2x2 8-bit, 4x4 4-bit, 8x8 2-bit) can vary as long as each individual TensorRT kernel is constructed from the same type, so you could have one TensorRT kernel doing subtractions on a 2x2 grid constructed out of 4 four-digit numbers, another TensorRT kernel doing multiplication on a 4x4 grid of 16 two-digit numbers, and any variation thereof that fits into a 16 or 32-bit data structure; each of those 16/32-bit TensorRT kernels goes to make up the NN, at least that's my current understanding.
Posted by ohmaheid - Mon 01 Oct 2018 10:23
I don't see any Hairworks comparisons. Have Nvidia dropped it?