HEXUS Forums :: 243 Comments

Posted by Bagpuss - Thu 02 Mar 2017 14:14
For professional users who need all those cores and threads for rendering and suchlike, these look like bargains.

…for everyone else, especially gamers, it's quite disappointing from an overclocking and single-core performance perspective.

Maybe after a couple of hardware iterations and refinements, allowing for higher overclocks I'd consider going with Ryzen…but not yet.

Now I just have to hope enough people buy them to force Intel into dropping the price of the 7700K a bit more.
Posted by 3dcandy - Thu 02 Mar 2017 14:19
But as a forward-looking PC build I'd guess it would be a great chip
Posted by MrRockliffe - Thu 02 Mar 2017 14:20
It looks like Intel still have the lead when it comes to gaming, though it's likely more to do with game optimisation etc. than anything at this point. What would be interesting is to see what the numbers look like in a few months' time, with new BIOS revisions and new game patches.

Great review as always. Would like to have seen a bigger focus on temps and a comparison chart between it and Intel CPUs to give a better understanding of how hot they run.
Posted by Platinum - Thu 02 Mar 2017 14:30
Not bad at all, can't see a reason now to go with Intel's HEDT unless you need 10 cores.
Only disappointment for me is the overclocking, but it's a new process node so not surprising I guess.

AMD are back :)
Posted by el_tone - Thu 02 Mar 2017 14:37
It's good to see some competition in the processor market; however, for those looking to build mainly for gaming purposes, Intel's offerings are hard to beat (at least until the Ryzen 5 series is out?).
Posted by Platinum - Thu 02 Mar 2017 14:38
If you need bleeding edge, yeah, though 8 cores for future-proofing might appeal to people more
Posted by Andy14 - Thu 02 Mar 2017 14:38
Seems to point towards Intel still being king of the hill for gaming. Surely the market for gamers is far greater than that for streamers, video editors, photo editors, etc.?

I must be missing something, but I've never needed more, slower cores that I'd rarely use, and nothing seems to have changed here.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 14:40
Another rushed AMD launch - if you check the reviews thread, even in the same games performance from one website to another can be different. They have rushed the CPUs out, with motherboard BIOSes being in a ropey state, obvious optimisation issues, SMT issues in games, etc.

AMD has still not broken away from its “let's rush this out now” and “things will improve in a few months once the software catches up” sort of thing. This is why Intel and Nvidia do so well - AMD has this illness of not launching polished products, and it really drags them down as a company.

The problem is first impressions count, and this is a mixed bag. I really worry for Vega now.
Posted by Kanoe - Thu 02 Mar 2017 14:43
One thing I was left asking myself after reading the review was did it live up to the hype and honestly I think it missed in a few areas.

- At least 10 watts higher at idle than any of the Intel chips tested
- The PiFast result for single thread was down on what I was hoping for
- Warhammer showed better fps with SMT off (as we saw with AMD's CMT, getting anyone to support a particular uarch is not always easy, so we can't just assume there will be a game patch to fix it)
- Memory latency was horrible (hopefully something that new BIOS versions can fix, and I fully understand it's early doors on that)

Obviously the performance for price is the BIG winner here (for most situations).

Hexus, any chance of a kinda bang-4-buck table with the Intel chips? :P
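Even a rough one would do - something like this little Python sketch, where the prices and scores are made-up placeholders (not the review's numbers), just to show the shape of the table I mean:

```python
# Rough bang-4-buck sketch -- the GBP prices and benchmark scores
# below are made-up placeholders, NOT figures from the review.
chips = {
    "R7 1800X": {"price": 489, "score": 1600},
    "i7-7700K": {"price": 329, "score": 970},
    "i7-6900K": {"price": 999, "score": 1480},
}

# Rank by points-per-pound, best value first.
ranked = sorted(chips.items(),
                key=lambda kv: kv[1]["score"] / kv[1]["price"],
                reverse=True)

for name, c in ranked:
    print(f"{name:9s}  {c['score'] / c['price']:.2f} points/GBP")
```

Swap in real RRPs and one multi-threaded score per chip and you'd have the whole chart.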
Posted by LSG501 - Thu 02 Mar 2017 14:47
For me personally, since I can make use of all the cores, Zen is a no-brainer over Intel unless I specifically need a build that can take 128GB of RAM (possible, but likely pretty rare for me).

However, I think I'll wait a while before grabbing one, as there seem to be a few optimisations that need to take place for it to really shine; I might even need to wait until the next revision is released. To be honest I'm not really struggling with what I have at the moment.
Posted by DanceswithUnix - Thu 02 Mar 2017 14:50
However, and somewhat interesting to note, switching off the chip's SMT capability increased the average frame rate from 79fps to 85.8fps, suggesting that code is not running efficiently when there's SMT involved.

That seems worthy of an article in its own right.

If true, I can imagine gamers turning SMT off until there is a scheduler patch for Windows. You'd still get as many threads as an i7, and making life easy for Windows might make things faster.
Posted by Ozaron - Thu 02 Mar 2017 14:53
Kanoe
Hexus, any chance of a kinda bang-4-buck table with the Intel chips? :P

This. +1
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 14:57
DanceswithUnix
That seems worthy of an article in its own right.

If true, I can imagine gamers turning SMT off until there is a scheduler patch for Windows. You'd still get as many threads as an i7, and making life easy for Windows might make things faster.

From Hardware.fr:

http://i.imgur.com/UFaWvLe.jpg



How could AMD not spot this in internal testing??
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 15:04
Another one:

http://cdn.mos.cms.futurecdn.net/MunKn62CRwenz9EWvEcm9X-650-80.png

Posted by Platinum - Thu 02 Mar 2017 15:05
Maybe they did and there is nothing they can do? Might be a Windows thing that needs patching/optimising for their chips?
Posted by MrJim - Thu 02 Mar 2017 15:09
Is there a way to turn off SMT on a per application basis?
Posted by Platinum - Thu 02 Mar 2017 15:10
BIOS thing, I think
Posted by Tarinder - Thu 02 Mar 2017 15:11
The only way that I could see was to switch it in the BIOS.

Even then, the BIOS had to be cleared before the switch took effect.
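There's no true per-application SMT switch, but you can approximate one per game by pinning the process to one logical processor per core with an affinity mask. A sketch of the mask maths, assuming Windows numbers SMT siblings in adjacent pairs (0/1, 2/3, …) and using a hypothetical game.exe as the target:

```python
# Build an affinity mask selecting the first logical processor of each
# SMT pair, assuming siblings are numbered adjacently (0/1, 2/3, ...).
physical_cores = 8  # Ryzen 7: 8 cores / 16 threads

mask = sum(1 << (2 * i) for i in range(physical_cores))

print(f"mask = {mask:#x}")                   # 0x5555 for 8 cores
print(f"start /affinity {mask:X} game.exe")  # hypothetical target exe
```

The printed `start /affinity` line is the Windows command-prompt way to launch a process restricted to those logical processors; the game then only ever sees one thread per physical core.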
Posted by jimbouk - Thu 02 Mar 2017 15:11
Kanoe
Hexus, any chance of a kinda bang-4-buck table with the Intel chips? :P

+2, was looking for RRPs anywhere in the article for comparison.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 15:16
Platinum
Maybe they did and there is nothing they can do? Might be a Windows thing that needs patching/optimising for their chips?

Well, reviewers switched SMT off in some reviews - it's hilarious if AMD missed that. I wonder if some of the worst reviews for gaming are down to SMT causing thread stalling.

I mean, doesn't AMD write reviewers' guides - surely they should have said something about this??

This reminds me of the R9 290X again - a potentially great product which was rushed out half-finished, and yet in the end look how well it aged?? The R9 285 was launched half-finished, the Fury X was launched with bugs, the RX480 had issues.

I mean, why can't they ever launch anything without some problem??

Intel and Nvidia tend to launch products which on average are more polished.

AMD doesn't, and it masks all the very hard work they have done.

Yes, Ryzen should bump AMD CPU sales up quite nicely even for gaming rigs, but many average people will look at these initial reviews and go Intel. AMD is losing potential sales there.

They really need to get on top of this major issue, otherwise it's going to affect the 4C and 6C models which most people are waiting for.

At some point I do need to upgrade this old IB-based rig I have, but if AMD really does not get on top of this, I will just get another Core i7 and not bother upgrading for a few years.

I am quite happy to wait a while, but in the end Coffee Lake will be out at the end of the year or early 2018, so it's a limited time-frame indeed.
Posted by scaryjim - Thu 02 Mar 2017 15:18
The problem with all these accurate leaks beforehand is that the reviews don't actually have anything new to tell us! It's a huge improvement over Piledriver for single thread but still trails Intel by a bit, and it shines when you load up all the cores.

On the plus side, AMD are actually playing in the same division as Intel on single-threaded performance now, even if they're at the other end of it. OTOH that still leaves them trading on “better at heavily threaded workloads” and “cheaper than the alternative” as their two main selling points - much as they have been for the last 6 years….
Posted by Kanoe - Thu 02 Mar 2017 15:18
Is this a potential issue with the task scheduler again like with CMT?

AMD really suffered under the old chips because Windows didn't know which cores shared resources, so instead of putting light tasks on the second core that shared resources with the first, it was simply allocating them out in core-number order; the first 4 cores in the system were actually 2 pairs that shared resources, causing performance issues whilst the other 2 pairs were mostly idle. Microsoft was going to release a patch for Windows 8 to make this better, but it never made it to mainstream.

The above might not be an issue in an SMT architecture though.
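As a toy illustration of the difference (the sibling numbering - 2n and 2n+1 sharing a core - is just an assumption here for the sketch):

```python
# Sketch of the scheduling difference: naive core-number-order placement
# vs. resource-aware placement that fills one logical CPU per shared
# pair before doubling up.
logical_cpus = list(range(8))                      # 4 pairs sharing resources
pairs = [logical_cpus[i:i + 2] for i in range(0, 8, 2)]

def naive(n_threads):
    # Core-number order: 4 threads pile onto 2 pairs, leaving 2 pairs idle.
    return logical_cpus[:n_threads]

def aware(n_threads):
    # One thread per pair first, then start doubling up on siblings.
    first = [p[0] for p in pairs]
    second = [p[1] for p in pairs]
    return (first + second)[:n_threads]

print(naive(4))   # [0, 1, 2, 3]
print(aware(4))   # [0, 2, 4, 6]
```

With 4 threads the naive policy loads two resource-sharing pairs while two sit idle; the aware one spreads them across all four.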

Tarinder, did you notice the CPU core utilisation under the Warhammer test? Were some of the cores idle? With SMT off were the cores able to boost to a higher frequency on their own?
Posted by Jace007 - Thu 02 Mar 2017 15:22
Thanks Tarinder, nice review. Very decent CPU considering the price and its multi-core performance. It's very nice to see some real competition for Intel. Bummer, I should have waited for Zen instead of going for another Intel.
Posted by Andy14 - Thu 02 Mar 2017 15:24
CAT-THE-FIFTH
but many average people will look at these initial reviews and go Intel. AMD is loosing potential sales there.

People could take exception to being called average you know ;-)

Surely it's not just following the herd to buy Intel; it's the logical thing to do:

  • Established product
  • Less driver issues
  • Established hardware platforms
  • Know what you are going to get
  • And for the overwhelming majority of users (inc. gamers) they are faster.

Where's the logic in taking a punt on a brand new product that's probably going to have many teething problems, unless it's much faster or massively cheaper?
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 15:25
scaryjim
The problem with all these accurate leaks beforehand is that the reviews don't actually have anything new to tell us! It's a huge improvement over Piledriver for single thread but still trails Intel by a bit, and it shines when you load up all the cores.

On the plus side, AMD are actually playing in the same division as Intel on single-threaded performance now, even if they're at the other end of it. OTOH that still leaves them trading on “better at heavily threaded workloads” and “cheaper than the alternative” as their two main selling points - much as they have been for the last 6 years….

Look at the Hardware.fr tests - up to a 13% performance regression in games, and as usual AMD PR was asleep about all of it.

What did AMD think would happen when reviews happened??

I am honestly getting fed up that at every AMD CPU and GPU launch there is some bloody problem, and “give it a few months, we can fix it” - and the competition will use that to get more sales at their expense.

What type of internal testing are they doing?? If review sites in less than one bloody week can expose such a big problem, then WTF was AMD doing all these months??
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 15:29
Andy14
People could take exception to being called average you know ;-)

Surely it's not just following the herd to buy Intel; it's the logical thing to do:

  • Established product
  • Less driver issues
  • Established hardware platforms
  • Know what you are going to get
  • And for the overwhelming majority of users (inc. gamers) they are faster.

Where's the logic in taking a punt on a brand new product that's probably going to have many teething problems, unless it's much faster or massively cheaper?

Average Joe or Jane is not an insult - it's people who are not tech enthusiasts. But the main issue is AMD should have found those SMT issues first. Even in the Hexus and Hardware.fr reviews, some of the performance difference due to the SMT issues is making it look much worse against the Core i7 6900K than it should have; in the Hardware.fr test, without SMT their suite of games would have placed the R7 1800X at around Core i7 4790K level, ie, within 10% of a £1000 Core i7 6900K.

It was the same with the R9 290X - it shipped with one of the worst stock coolers in years, which earned the card the “hot and throttling” moniker that Nvidia PR managed to push; they actually sent review sites R9 290X cards and said they had discovered how, under certain settings, the cooler had problems.

If AMD had actually made a better cooler, it would not have jinxed the whole R9 290 series line from the start.

Look at how well the R9 390 series rejig did in comparison - they launched with decentish cooling.
Posted by scaryjim - Thu 02 Mar 2017 15:32
CAT-THE-FIFTH
… They really need to get on top of this major issue,otherwise its going to affect the 4C and 6C models which most people are waiting for. …

This may have factored into their decisions about which processors to launch, of course. The higher end chips are more likely to go to workstation users who'll want to load up the threads and won't have these issues.

Cinebench 15 shows that AMD SMT scaling in some tasks is better than Intel's HT scaling, so it must be a particular feature of game engines that causes AMD to lose performance with SMT on.

Ultimately, AMD had to launch some Zen CPUs eventually, and the longer they wait, the longer they risk bumping into another Intel spoiler launch. At least Zen is a solid base going forward - let's see how much they can refine it over the next couple of iterations. Intel seem to have optimised Core about as far as it'll go now, so AMD may have a little breathing room to make those tweaks…

EDIT for crosspost:

Andy14
… Surely it's … the logical thing to do. …

  • And for the overwhelming majority of users (inc. gamers) they are faster.

See, that's simply not true. For the vast majority of users the difference would be completely unnoticeable.

The vast majority of people use their computers to watch YouTube videos, send emails, and update social media. The vast majority of gamers are on cards equivalent to an RX 480 or lower (dig out GPU sales by sector - <20% is “enthusiast” level cards). They'd be absolutely fine with a Bristol Ridge APU or a mobile Core U-series chip (which are basically low-clocked i3s). Give them a Zen + RX 480 rig and an i7 + RX 480 rig and I'd put money on them not being able to tell the difference.
Posted by meuvoy - Thu 02 Mar 2017 15:34
MrRockliffe
Great review as always. Would like to have seen a bigger focus on temps and a comparison chart between it and Intel CPUs to give a better understanding of how hot they run.

Gamers Nexus and KitGuru have more in-depth temps charts.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 15:43
scaryjim
This may have factored into their decisions about which processors to launch, of course. The higher end chips are more likely to go to workstation users who'll want to load up the threads and won't have these issues.

Cinebench 15 shows that AMD SMT scaling in some tasks is better than Intel's HT scaling, so it must be a particular feature of game engines that causes AMD to lose performance with SMT on.

Ultimately, AMD had to launch some Zen CPUs eventually, and the longer they wait, the longer they risk bumping into another Intel spoiler launch. At least Zen is a solid base going forward - let's see how much they can refine it over the next couple of iterations. Intel seem to have optimised Core about as far as it'll go now, so AMD may have a little breathing room to make those tweaks…

Yes, but it should have been documented and even passed on to reviewers - what did they expect would happen??

It's not a one-off either - every single AMD CPU and GPU launch in the last few years has had some issue like this, which either AMD internal testing did not find or quietly ignored.

Intel and Nvidia are just exploiting all of this.

People will just read all the launch reviews and that is the message they will take away from Ryzen - crap for gaming, just get Intel.

They need to get on top of this before the 4C and 6C models are released - be honest about the SMT gaming issues FFS.

10% performance drops are huge.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 15:49
BTW, on a side-note, did Hexus test the CPU under the high-performance power plan in Windows?? Apparently it adds a bit more performance in games, according to some comments I read over on the OcUK forums.
Posted by Platinum - Thu 02 Mar 2017 15:55
Looking at these figures, though, that 32-core Opteron should be an absolute beast
Posted by Platinum - Thu 02 Mar 2017 15:56
CAT-THE-FIFTH
BTW, on a side-note, did Hexus test the CPU under the high-performance power plan in Windows?? Apparently it adds a bit more performance in games, according to some comments I read over on the OcUK forums.

On a desktop, didn't think that option was available?
Posted by Tunnah - Thu 02 Mar 2017 15:59
The clock speed limitation means gamers are better off sticking to Intel. The important thing though is that it crushes the X99 chips for half the price. This is going to give Intel serious pause.

PS I noticed an error in the article:

Going back to SMT, switching it off also increases the Hitman score, from 91.4fps to 65.6, suggesting, once again, that having it active is definitely hindering performance. In fact, running Ryzen in non-SMT mode offers more performance in every scenario, and this is something that AMD needs to be concerned about.

Should it be the other way round?

EDIT: missed a /
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 16:01
WTF AMD:

http://www.legitreviews.com/amd-ryzen-7-1800x-1700x-and-1700-processor-review_191753/15

Legitreviews guy
You hit the nail on the head on all your comments. I really wish the Windows drivers were ready before we were given the parts to review. Instead we got a statement 24 hours before launch from AMD saying that they'll be coming in 30 days if all goes well. Game optimizations will be hit or miss when they come, but they appear to be coming. That takes time though and we'll see what happens.

I put it in the conclusion on the last page. The quote came direct from AMD's John Taylor. There was talk of it coming with Ryzen 5 and then they said ‘in the next month’ in an official statement that was e-mailed out last night.

What is wrong with you??

You launch when it is obvious games have SMT issues, and now this??

The reviews are already out there - this is the R9 290X MK2.

Posted by Skorne - Thu 02 Mar 2017 16:09
So their £500 8/16 CPU barely competes with a £230 i5 7600K quad core for gaming. How bitterly disappointing.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 16:15
Platinum
On a desktop, didn't think that option was available?



That is my desktop. I switch to high performance if I am doing more intensive gaming - most of the time I leave it in balanced. Apparently it might help a bit. This is why I asked if Tarinder tried it out, to see if it is true or just wishful thinking.
Posted by Corky34 - Thu 02 Mar 2017 16:27
CAT-THE-FIFTH
Another rushed AMD launch - …

I'm inclined to agree, what with some sites saying the 1700 doesn't have XFR and others saying it does but to a limited extent…

Anandtech
The non-X CPU does feature XFR, but only to a maximum of 50MHz on a single thread on a single core.

I'm going to guess AMD PR will be too busy to clarify.
Posted by Jonj1611 - Thu 02 Mar 2017 16:33
It seems a solid processor, but I was certainly expecting a bit more. One reason why I don't pre-order these days. Will see how it pans out over the next few months.

Of note, though: anyone using the High Performance setting in Windows, your processor will be at max speed even at idle.
Posted by EvilCycle - Thu 02 Mar 2017 16:35
I am still going through with my 1700X purchase; even if the issues with gaming never get fully ironed out, I believe the performance for the price is still well worth the upgrade from the 8350 I currently have!

I do have faith that some driver/BIOS updates will improve things, even if only slightly, and I am pretty sure games will finally make the push further into multi-core in the near future too.

Thanks for the review hexus!
Posted by Marcus - Thu 02 Mar 2017 16:37
It will be interesting to see if performance improves over the next few months as the platform matures. Looking forward to the R7 1700 reviews too. I'm still using an Intel Q6600, so I think this will be a big upgrade at a reasonable price for me!
Posted by scaryjim - Thu 02 Mar 2017 16:40
Tunnah
The clock speed limitation means gamers are better off sticking to Intel….

Have we not quashed this one? The vast majority of gamers will be GPU limited.

Look at reviews that use more than one ST benchmark. Across a range of benchmarks, AMD are more like 10% behind in IPC in most tests (most reviews test the 4.1GHz peak 1800X against the 4.2GHz i3 7350K and i5 7600K). Now consider that AMD's lower-level SKUs are all rumoured to clock in at 3.9GHz/4GHz, while (according to the Steam Survey) only 4% of gamers use an Intel CPU with a clock speed of 3.7GHz+, and only 20% use an Intel at 3.3GHz+.

That makes 80% of gamers using a <3.3GHz Intel processor, or an AMD processor.

So don't tell me that Zen's “clock speed limitation” means gamers are better off sticking to Intel.
Posted by imadaily - Thu 02 Mar 2017 16:58
I think coming from a Q6600 you'll be happy with the increase in performance compared to more or less any mid-range or above processor on the market today.
The platform improvements should also help quite a bit; SATA2 to SATA3/NVMe should make things feel a bit snappier :)

scaryjim
That makes 80% of gamers using a < 3.3Ghz Intel processor, or an AMD processor.

So don't tell me that Zen's “clock speed limitation” means gamers are better off sticking to Intel.

This may be true but the gamers using <3.3GHz intel processors aren't going to be the people who are spending £320 on a new CPU!
Posted by scaryjim - Thu 02 Mar 2017 17:09
imadaily
… This may be true but the gamers using <3.3GHz intel processors aren't going to be the people who are spending £320 on a new CPU!

That's my point. The majority of gamers don't need a fast processor, or one with 8C/16T. Ryzen 7 chips aren't really aimed at gamers. The Ryzen 5 and Ryzen 3 quad cores are the ones that will sit in the same market as the slower Intel chips, and they're the ones that will target the majority of the gamer market. They'll be priced the same as those slower Intel chips, but they'll probably be faster in single threaded tests, because of their higher clock speed.

So saying it's better for gamers to buy Intel because Intel's highest clocked £300+ processor is faster than AMD's £329 processor with twice as many cores is a futile argument. Neither of those products address the needs of the vast majority of gamers.
Posted by Tunnah - Thu 02 Mar 2017 17:21
CAT-THE-FIFTH
WTF AMD:

http://www.legitreviews.com/amd-ryzen-7-1800x-1700x-and-1700-processor-review_191753/15

…

The reviews are already out there - this is the R9 290X MK2.

Personally I don't see it as that big of a deal; the difference is minimal, and if it's something that can be patched in, then why not release 'em? It's not a product breaker

scaryjim
Have we not quashed this one? The vast majority of gamers will be GPU limited. …

So don't tell me that Zen's “clock speed limitation” means gamers are better off sticking to Intel.

We've not quashed anything, and I gotta say mate, you have a real crappy attitude over this.

I'm saying it because for a long while yet games are going to be limited to using 4 cores, and quite a few games will benefit from extra speed. I'm not going off the charts here, I'm going off the personal experience of moving from a speed-limited CPU to an unlocked one - SO MANY games had a real tangible boost in performance, with Civ VI in particular giving me an insane amount of extra performance.

The speed of the cores matters quite a lot. But I'm thinking the 4-core parts might actually be faster - it's typical of many-core parts having to run at lower speeds; hopefully the 4-core parts can do 4.5GHz.
Posted by Percy1983 - Thu 02 Mar 2017 17:21
Looks good to me. Last time around the AMD chips were slightly better for multi-thread stuff and massively behind on gaming, hence my 3570K; this time around the gaming is close enough for me and the multi-thread stuff will fly away.

Zen is coming to a PC near me.
Posted by imadaily - Thu 02 Mar 2017 17:29
scaryjim
So saying it's better for gamers to buy Intel because Intel's highest clocked £300+ processor is faster than AMD's £329 processor with twice as many cores is a futile argument. Neither of those products address the needs of the vast majority of gamers.

Also true, but I think it's still currently* better for gamers to buy an Intel CPU.
I've not seen any benchmarks with an i5-6600K, but I'm going to guess that it performs similarly** to the R7 1700, which is £100 more.

*Things will hopefully change with the release of the R3 and R5.
**To the point where an average person won't notice the difference in an average game from around now.

Ignoring gaming now: it would be amazing to have R7 1700s in the labs at uni for running CAD/CFD/FEA simulations; I'm certain it'd knock a LOT of time off compared to the i5s and i7s currently in use!
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 17:34
Tunnah
Personally I don't see it as that big of a deal, the difference is minimal, and if it's something that can be patched in then why not release em ? It's not a product breaker

Because the problem is even 10% here or there makes a big difference - look at card launches, etc., where the same thing plagues AMD. Like I said, that SMT bug/optimisation issue cost them 10% in the Hardware.fr review - which is big. It's the difference between the R7 1800X matching a Core i7 4790K or Core i7 5820K and being within 10% of a Core i7 6900K, versus the latter being 20% faster and the R7 1800X being the same as a Core i7 3770K in performance.

So if people see AMD matching a high-clocked Haswell part, it's far more positive than a SB/IB CPU from years ago - you are starting to see people laughing that 6-year-old Intel CPUs are doing well against it.

AMD is just giving more chances to Intel in a growing segment.

The whole issue is that it's enough for, say, an Intel CPU to get 80FPS on a £1000 card and an AMD one to get 70FPS for it to look like a fail, especially since it will be an older Intel CPU getting that; realistically we have not seen a big change in single-core IPC over 5 years.

Remember, once you go over £200+ you are entering the enthusiast area, where people will be comparing percentages.

Also, I doubt not having proper Windows support is a small problem - it means no proper scheduling support, or even proper power control of the CPUs under Windows, and it's why some people mentioned performance was improved using a high-performance profile (might not be true).

You need to realise Intel CPUs have had years of support as they are gradual improvements - Ryzen is a new core, and AMD achieves nothing by launching it early and making it look worse than it should.

AMD always does this - the R9 290X, due to its crap cooler, lost performance and looked hot and noisy against Nvidia cards, and often the drivers are not quite there. The same with the RX480, which meant Nvidia probably got some extra sales. Intel and Nvidia might have some bugs during their launches but as a whole seem to just be better at doing smoother ones.

Toms Hardware said the same thing:

It's a bummer the Ryzen launch was so clearly rushed. We expected AMD to have a better explanation for its gaming performance, but all of the feedback we received from the company came very last-minute. It's hard to imagine these shortcomings weren't discovered previously and diagnosed more thoroughly. We're happy to put in the time and effort, though. Expect more information as it becomes available.

The problem is this enforces the whole Intel/Nvidia duality against them.
Posted by Tunnah - Thu 02 Mar 2017 17:41
CAT-THE-FIFTH
Because the problem is even 10% here or there makes a big difference - …

The problem is this enforces the whole Intel/Nvidia duality against them.

Yeah I get what you're saying and it's a valid grievance, but I genuinely don't believe it'll hurt em. They needed to get this product out, they needed to show they can pee all over Intel chips at half the price. These issues will be ironed out but honestly this was the best way. Instead of releasing a patched CPU in several months to make everyone happy, the vast majority of folks who don't care can get 1 now, and have a performance increase when it is sorted.

They need to have the products out for them to be patched anyway, so the problems can be found by a much wider audience than the QC department.

EDIT: Also the enthusiast market isn't as clear cut as that. Personally if I hadn't just bought a 2700K, I'd be getting one of these, simply because they're offering such a hefty performance for half the price of the Intel model. Hell, depending on the speed of the 4c models, if they can boost higher, I might still get one. It is the clock speed which is keeping me on the 2700K, if the other parts come in at same speed, it'll be performing better at IPC due to it being a newer chip, and I want a motherboard with more up to date features, so it'll be a no brainer for me.

The reason I didn't get a 6700K was because I was looking at £600 just to boost the CPU and get M.2. But if the 4c Ryzen is any good, it'll no doubt be around £200, the motherboards tend to be cheaper, and… OK, I'll still have to buy DDR4, no getting away from that. But I'll be looking at £400 vs £600 for better performance. And even if the 6700K is slightly better, hopefully the difference will be small enough not to make a real-world dent.
Posted by Tabbykatze - Thu 02 Mar 2017 17:50
Tunnah
They need to have the products out for them to be patched anyway, so the problems can be found by a much wider audience than the QC department.

This is why this launch happened. It wasn't rushed (to a degree); they needed to get it out in the wild so that these problems could be found. QA can only take you so far. I'm a Sophos Architect, and the number of products they release with a plethora of interesting bugs and hindrances makes me turn my head. I'm forever creating tickets from my installs and feeding information to the appropriate persons on what is wrong and where.

But that doesn't diminish the power and value of the product; technology is so unbelievably complex these days that even with exhaustive regression testing and QA there are vast areas that just can't be caught in time for release when you have shareholders and people with their own personal money riding on what you do. Sometimes you've just got to ship it and fix it when people find these issues, to find out where to focus.

Using Sophos again as an example: I was the first XG Architect in the UK, and v15 of the new Firewall appliance is cutely labelled “the child we don't talk about”. It was bloody awful to work with, but it was still powerful, and only by being out in the wild did the majority of the issues get ironed out within a financial year, giving us v16, which was very good. It still wasn't the best, but v17 is almost ready to drop, and only through thousands of installs and people using it will it become something that makes me say “the older SG, which has an established 10 years behind it, is no longer the appliance of choice”.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 17:52
Tunnah
Yeah I get what you're saying and it's a valid grievance, but I genuinely don't believe it'll hurt em. They needed to get this product out, they needed to show they can pee all over Intel chips at half the price. These issues will be ironed out but honestly this was the best way. Instead of releasing a patched CPU in several months to make everyone happy, the vast majority of folks who don't care can get 1 now, and have a performance increase when it is sorted.

They need to have the products out for them to be patched anyway, so the problems can be found by a much wider audience than the QC department.

That's the thing - people who want a change, or find a niche these CPUs will excel in, will buy them. But what about everyone else? The RX480 started strong, and then Nvidia launched the GTX1060 with better cooling, etc. before AMD could, and stole some of their thunder.

It's just annoying to know so many gaming benchmarks out there are probably a good 5% to 15% lower than they should be, just because someone at AMD forgot to test their CPUs in games with SMT off.

Now you have this response from AMD saying they need better optimisation by devs for their CPU in games:
https://www.pcper.com/news/Processors/AMD-responds-1080p-gaming-tests-Ryzen


What they don't say is why they released their CPUs in an unsupported state under Windows, and they ignore the fact that they knew very well SMT was leading to reductions in gaming performance.

Why couldn't they just be honest and say “please test games with SMT off, as some games might have performance oddities with SMT enabled due to currently unfinished support under Windows”?

I just cannot understand what the heck they were thinking??

Over the last 5 years, AMD has apparently not wanted to launch any of its CPUs or GPUs with the best possible performance.

Compare that to Nvidia and to a slightly lesser degree Intel.
Posted by gordon861 - Thu 02 Mar 2017 17:54
So it is looking (at the moment) that as expected the AMD chips will be close to the Intel chips but at a lower price. But is that lower price going to be enough?

At the moment I can buy an i5-7600K for £220 from Scan. The AMD ‘equal’ will be cheaper, but I doubt it'll be half the price - maybe 2/3 of the price, so about £150. Is that £70 enough for me to jump from Intel to AMD for gaming?

I don't think so, based on my existing i5-2500K, which I have had since launch and which is still going strong. From past experience I doubt the AMD chip will stay competitive for as many years as the Intel, so spending the bit extra now will probably end up costing me £20 a year over four years.

I will watch how things develop over the next few months before pulling the trigger on a new PC but I think it's still gonna be an Intel.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 17:56
Tabbykatze
This, is why this launch happened. It wasn't rushed (to a degree), they needed to get it out in the wild so that these problems can be found. QA can only take you so far. I'm a Sophos Architect and the amount of products they release with a plethora of interesting bugs and hindrances makes me turn my head. I'm forever creating tickets from my installs and feeding information to the appropriate persons on what is wrong and where.

But that doesn't diminish the power and value of the product, technology is so unbelievably complex these days that even with exhaustive regression testing and QA that there are vast areas that just can't be caught in time for release when you have shareholders and people with their own personal moneys riding on what you do. Sometimes you've just got to do it and fix it when people find these issues to find out where to focus on.

Using Sophos again as an example, I was the first XG Architect in the UK and v15 of the new Firewall appliance is cutely labelled “as the child we don't talk about”. It was bloody awful to work with but it was still powerful and only by it being out in the wild did the majority of the issues get ironed out within a financial year and we got v16 which was very good. Now it wasn't the best but v17 is almost ready to drop and only by 1,000s of installs and people using it will it become something that will make me go “the older SG which has an established 10 years behind it is no longer the appliance of choice”.

That is not the point - the SMT performance regressions in games were found within a few days of testing by reviewers. Surely AMD knew about this??

How long does it take AMD to run a few games with SMT on and off to just verify its working fine??

Seriously??



It's shoddy testing - it's not one or two obscure games. That is 8 well-known games there, including BF1 and W3, which are popular titles. This is not like the TLB bug, which was not so evident.

What type of testing were they doing in their labs??

Do AMD want to sell their products or find new ways for Intel to sell less Ryzen CPUs for them??

Edit!!

That only leads to the perception that AMD is for tweakers, and that everybody else should just get Intel/Nvidia as it's less of a “problem”.

We as enthusiasts might wait, but the problem is that AMD, by not launching their products in a polished way, is making them look worse, and hence they need to cut prices to sell their products.

Why do you think Nvidia has traditionally charged more for their cards and got away with it even if they are no better??

AMD screws up 9 out of 10 launches in some way, so their competition can breathe easier.

Intel is probably slightly less worried now, since AMD just threw them a bone.
Posted by Tabbykatze - Thu 02 Mar 2017 18:01
CAT-THE-FIFTH
That is not the point - the SMT performance regressions in games were found within a few days of testing by reviewers. Surely AMD knew about this??

How long does it take AMD to run a few games with SMT on and off to just verify its working fine??

Seriously??



It's shoddy testing - it's not one or two obscure games. That is 8 well-known games there, including BF1 and W3, which are popular titles.

This is not like the TLB bug, which was not so evident.

Don't get me wrong, I absolutely agree with you. But according to that Legit Reviews review of the 1800X, during the week of testing they were being fed new updates and pointers and things to do. Now, if AMD continues that trend, they can ramp it up quite quickly.

Now I don't know the relationship that AMD has with Microsoft regarding OS optimisation but it obviously looks like Intel/Nvidia have a better relationship in getting this stuff out. The other issue is that this is a completely new architecture and a behemoth as large as Windows needs massively exhaustive testing before they release anything in this area. On top of this, Intel's architecture hasn't changed much relatively in the past 5 years.

So you have a mature, supported architecture vs. an immature architecture that was only finalised at the start of Q1 2017. So I'm reserving judgment before crucifying AMD, because some of this is not AMD's responsibility.
Posted by Tunnah - Thu 02 Mar 2017 18:05
gordon861
So it is looking (at the moment) that as expected the AMD chips will be close to the Intel chips but at a lower price. But is that lower price going to be enough?

At the moment I can buy an i5-7600K for £220 from Scan. The AMD ‘equal’ will be cheaper, but I doubt it'll be half the price - maybe 2/3 of the price, so about £150. Is that £70 enough for me to jump from Intel to AMD for gaming?

I don't think so, based on my existing i5-2500k that I have had since they were launched and is still going strong. From past experience I doubt that the AMD chip will still be competitive for as many years as the Intel, so spend the bit extra now and it will probably end up costing me £20 a year for four years.

I will watch how things develop over the next few months before pulling the trigger on a new PC but I think it's still gonna be an Intel.

Ya, but there are a LOT of folks out there who are going to see a 4c/8t part costing considerably less than a 4c/4t part from Intel, and it'll be a no-brainer. Sure, you're not going to fork out to upgrade to a part that doesn't give you much more performance, but there's a HELL of a lot of people out there with Q6600s, or even i3s, that aren't cutting it for games.
Posted by Tunnah - Thu 02 Mar 2017 18:07
CAT-THE-FIFTH
That is not the point - the SMT performance regressions in games were found within a few days of testing by reviewers. Surely AMD knew about this??

How long does it take AMD to run a few games with SMT on and off to just verify its working fine??

Seriously??


It's shoddy testing - it's not one or two obscure games. That is 8 well-known games there, including BF1 and W3, which are popular titles. This is not like the TLB bug, which was not so evident.

What type of testing were they doing in their labs??

Do AMD want to sell their products or find new ways for Intel to sell less Ryzen CPUs for them??
They probably didn't admit it on purpose - you don't want reviewers going into it with an issue at the forefront of their minds.

Also they probably want to downplay it, make it out like sure they know about it, but it's no biggie and will be sorted out anyway.
Posted by flearider - Thu 02 Mar 2017 18:08
As has been made clear by so many reviews, this is not a gamer's CPU. Yes, it's just OK, but they're pushing it a different way, at those professionals out there who are hard struck by the downturn in the economy. Wanna get more done for less ££$$$? Choose a Ryzen. IMO it's a good play - they're hitting 2 segments at once.
So it will game just fine @ 4-4.2GHz under water with 3200-3600 DDR4. Now what I want to see is what the 1700X will do, because I'm not spending £100 more for very little OC.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 18:25
Tabbykatze
Don't get me wrong, I absolutely agree with you. But according to that legitreviews review of the 1800x, in the week of testing they were being fed new updates and points and things to do. Now, if AMD continues that trend then they can quite quickly ramp it up.

Now I don't know the relationship that AMD has with Microsoft regarding OS optimisation but it obviously looks like Intel/Nvidia have a better relationship in getting this stuff out. The other issue is that this is a completely new architecture and a behemoth as large as Windows needs massively exhaustive testing before they release anything in this area. On top of this, Intel's architecture hasn't changed much relatively in the past 5 years.

So you have a mature supported architecture vs. an immature architecture that was only finalised at the start of Q1 2017. So I'm reserving my spear judgment of crucifying AMD because some of this is nothing to do with AMD's responsibility.

My main gripe is that with SMT off, Ryzen is faster in gaming, and AMD has essentially made its CPU look terribad in many reviews, with a Core i7 3770K being close to it in some of them. This is doing damage to AMD in the gaming community, and first impressions count. This is a continuation of almost all their launches in the last 5 years - they really need to get on top of this, and have to be realistic and honest about things.

They can't afford these sorts of issues, and like I said, it's a simple fix - they should have tested it and told reviewers that SMT might have some issues in gaming.

Tunnah
They probably didn't admit it on purpose, you don't want reviewers going into it with an issue at the forefront of their mind

Also they probably want to downplay it, make it out like sure they know about it, but it's no biggie and will be sorted out anyway.

But it's not helping, since instead of people thinking AMD is aware of it and trying to get support out, it looks like some “bug” AMD is not aware of, discovered by reviewers.
Posted by Tabbykatze - Thu 02 Mar 2017 18:35
CAT-THE-FIFTH
My main gripe is that with SMT off,Ryzen is faster in gaming,and AMD has essentially made its CPU looking terribad in many reviews,with a Core i7 3770K being close to it. This is doing damage to AMD in the gaming community,and first impressions count. This is a continuation of almost all their launches in the last 5 years - they really need to get on top of this,and have to be realistic and honest about things.

They can't afford these sort of issues,and like I said its a simple fix - they should have tested it and told reviewers that SMT might have some issues in gaming.

True, but then you have to remember the i7 3770k is also close to a 7700k ;)

If we use the CPU benchmark mega list, showing only desktop CPUs and ordered by CPU Mark, there are 3rd-generation parts rubbing shoulders with 7th-generation ones. Now what does that read like to you?

AMD have an extremely good platform that's just been released, it does have kinks and SMT optimisation is not a “deal breaking” kink, far from it. Some of these things are quite odd, but issues like this aren't unexpected from something so new.

I want to see what the first 30 days give; I want to see a review in 30 days' time and see what changes. These reviews have almost made me push the button, but I'm going to sleep on it for now. I've been AMD for 10 years, I want to replace my 8350, and Ryzen is what I want.

Edit: I also don't believe you understand the gravity of how a processor architecture is designed. “Resolving SMT is a simple fix”? AMD have created their SMT; it is up to Microsoft to optimise how Windows uses it in its abstraction layer.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 18:46
Tabbykatze
True, but then you have to remember the i7 3770k is also close to a 7700k ;)

If we use the cpu benchmark mega only showing desktop cpus then ordered by CPU Mark, there are 3rd generations rubbing shoulders with 7th generations. Now what does that read like to you?

AMD have an extremely good platform that's just been released, it does have kinks and SMT optimisation is not a “deal breaking” kink, far from it. Some of these things are quite odd, but issues like this aren't unexpected from something so new.

I want to see what the first 30 days give, I want to see a review in 30 days time and I want to see what changes. These reviews have almost made me push the button but I'm going to sleep on it right now. i've been AMD for 10 years and I want to replace my 8350 and Ryzen is what I want.

Edit: I also don't believe you understand the gravity of how a processor architecture is designed. “Resolving SMT is a simple fix”? AMD have created their SMT; it is up to Microsoft to optimise how Windows uses it in its abstraction layer.

I think you misread what I meant by a simple fix - the simple fix is for AMD to have told reviewers to test games with both SMT enabled and disabled. There is NOTHING complex about putting that in their reviewers' guides.

The simple fix would be for AMD to be honest and say MS was having some delays in getting drivers and scheduler updates out. The simple fix would be them saying games needed updates to handle the AMD implementation of SMT.

They need to be honest about these problems - not hide them.

They know very well hiding problems has blown up in their face and they don't have the sway of Intel or Nvidia to bury them.

It shows you either AMD has not done proper internal testing or has and was trying to hide it to save face.

Saving face is irrelevant when your CPU is scoring up to 15% lower in games due to SMT issues.

If you look at it objectively why would you want to get a Ryzen 7 1700 or 1700X as a gamer if you just looked at many of the reviews??

We are enthusiasts, so we can kind of see it's plausible for AMD to get another 10%, maybe 15%, out of the CPU once devs start getting to grips with optimisation, or by simply switching off SMT.

Sadly,most reviews won't mention any of that.
Posted by watercooled - Thu 02 Mar 2017 18:51
Kanoe
One thing I was left asking myself after reading the review was did it live up to the hype and honestly I think it missed in a few areas.
- At least 10 Watts higher at idle than any of the Intel chips tested
Apart from the equivalent socket 2011 processors you mean?
Kanoe
- The PiFast result for single thread was down on what I was hoping for
Does anyone care?
Kanoe
- Warhammer showed better fps with SMT off (as we saw with AMD's CMT, getting anyone to support a particular uarch is not always easy so we can't just assume there will be a game patch to fix it)
This is one to watch I agree, but I don't think it's comparable to the issues AMD had with CMT support - SMT is fairly well-implemented in Windows now. Intel had problems pretty much the same as this a few years ago, so it could just be a case of Windows Update providing an update to the scheduler to properly recognise Ryzen's capabilities.
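For what it's worth, the scheduler fix amounts to the OS knowing which logical processors are SMT siblings of the same physical core. On Linux that topology is already exposed through sysfs, e.g. `/sys/devices/system/cpu/cpu0/topology/thread_siblings_list` - a rough Python sketch of reading it (the path and list format are standard Linux conventions, not anything Ryzen-specific):

```python
def parse_cpu_list(text):
    """Parse a sysfs CPU list such as '0,8' or '0-3,8-11' into a set of ints."""
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

def smt_siblings(cpu=0):
    """Read which logical CPUs share a physical core with `cpu` (Linux only)."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    with open(path) as f:
        return parse_cpu_list(f.read())

print(sorted(parse_cpu_list("0,8")))       # [0, 8]
print(sorted(parse_cpu_list("0-3,8-11")))  # [0, 1, 2, 3, 8, 9, 10, 11]
```

If the scheduler gets that mapping wrong (or ignores it), it can park two busy threads on siblings of one core while whole cores sit idle, which is exactly the class of bug a Windows Update could fix.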
Kanoe
- Memory latency was horrible (hopefully something that new BIOS versions can fix and I fully understand its early doors on that)
As a synthetic benchmark it's… pretty meaningless TBH. According to the Hexus benchmark they're beating Intel across the board in terms of bandwidth with 2 channels of memory, and they quite clearly stated the unoptimised nature of the benchmark. This also seems to be what people were referring to with, like you say, last-minute BIOS patches yesterday, so I wonder if that had anything to do with it?
Andy14
Surely it's not just following the herd to buy Intel - it's the logical thing to do.
As others have said that very much depends what you're doing.
Andy14
  • Established product
  • Fewer driver issues
  • Established hardware platforms
  • Know what you are going to get
Because Intel never stumbles on releases with product recalls, deteriorating chipsets, awful GPU drivers, etc? You don't ‘know what you are going to get’ with Intel any more than anyone else TBH - choosing them is no guarantee you won't have problems.
Andy14
  • And for the overwhelming majority of users (inc. gamers) they are faster.
Where's the logic in taking a punt on a brand new product that's probably going to have many teething problems, unless it's much faster or massively cheaper?
Well… they are massively cheaper? And much faster in many workloads too. As above, Intel have as many teething problems as anyone else, that's not sensible logic.
Posted by Tabbykatze - Thu 02 Mar 2017 18:54
CAT-THE-FIFTH
I think you misread what I said by a simple fix - the simple fix is for AMD to have told reviewers to test games with both SMT enabled and disabled. There is NOTHING complex about putting that in their reviewers guides.

The simple fix would be for AMD to be honest and say MS was having some delays in getting drivers and scheduler updates out.

They need to be honest about these problems - not hide them.

They know very well hiding problems has blown up in their face and they don't have the sway of Intel or Nvidia to bury them.

It shows you either AMD has not done proper internal testing or has and was trying to hide it to save face.

Saving face is irrelevant when your CPU is scoring up to 15% lower in games due to SMT issues.

Ah, I see what you're getting at.

But I do agree with what someone else said in this thread: why tell them upfront, when all you're going to do is cause undue panic? Or, the flip side of the coin: tell people, and people are “nice” in their reviews. Or just let it carry on as normal.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 19:02
Tabbykatze
Ah, I see what you're getting at.

But I do agree with what someone else said in this thread, why tell upfront when all you're going to do is cause undue panic. Or the flip side of the coin, tell people and people are “nice” in their reviews. Or just let it carry on as normal.

But the problem is, like hiding a dirty family secret, it's different when you get on top of it instead of somebody “finding it out”.

It looks like review sites have found a “bug” in Ryzen regarding SMT and gaming, and many sites are not aware of the SMT issues in games, so are showing lower-than-normal performance.

OTOH, if they actually see the wood for the trees and get ahead of the curve on it, they could manage it. Plus, so many people pre-ordered Ryzen for gaming, expecting it would be reasonably competitive even if it did not beat Intel, but in certain reviews (which seem to be oblivious to the issue) they tested with SMT on, meaning performance could regress to IB levels.

IB levels at the worst end are still perfectly fine, but the issue is it makes Ryzen sound more like a 5- to 6-year-old CPU than a new one, and people will subconsciously link the two.

Edit!!

The Hexus review is fairly solid, as they are seasoned enough to test whether SMT was having an issue, but lots of reviewers nowadays don't really bother checking these sorts of things.
Posted by Tabbykatze - Thu 02 Mar 2017 19:06
CAT-THE-FIFTH
But the problem is like hiding a dirty family secret its different when you get on top of it,instead of somebody “finding it out”.

Well, I think that's taking it a bit far but I somewhat agree.

The Hexus review is the best I've seen thus far. What I do find interesting is the fact that AMD falls behind in 1080p but locksteps at higher resolutions.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 19:20
Tabbykatze
Well, I think that's taking it a bit far but I somewhat agree.

The Hexus review is the best I've seen thus far. What I do find interesting is the fact that AMD falls behind in 1080p but locksteps at higher resolutions.

Yes, I do tend to use colourful metaphors at times! The reason I am so annoyed is that it is a solid CPU - AMD generally launches solid enough products, but with a tendency to be slightly undercooked at launch, and you would think that after so many launches where the same thing happened again and again, they would have learned by now.

I am really worried about Vega now - I really hope AMD launches the RX Vega in a polished manner, but I am just getting flashbacks to Fiji and Hawaii, where they made 99% of the effort and it was that final 1% which did them in. That's what I feel about this launch.
Posted by QuorTek - Thu 02 Mar 2017 19:29
It is a bargain that gives Intel a run for their money :)
Posted by imadaily - Thu 02 Mar 2017 19:37
@CAT-THE-FIFTH
Polishing products takes time and money - the more polished you want something, the harder it is to polish it further.
AMD are cash- and resource-limited; they can't afford to spend too long on it before release.

As for the SMT reducing performance I think that AMD telling me that the 8C/16T processor I just (Hypothetically) purchased will work best if I reduce it down to a 8C/8T processor would be a massive slap in the face.

As far as I know, enabling and disabling SMT is not a per-application, change-on-the-fly setting - it's a ‘reboot and change it in the BIOS’ setting.
I think it's much better for people to just get on with using it as an 8C/16T CPU, and then down the line, when Microsoft/AMD/game devs work things out, you get a nice extra boost.
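As a stopgap, you can get most of the SMT-off effect for a single game without a reboot by restricting its process to one logical CPU per physical core. A hedged Python sketch, assuming the common Linux enumeration where the SMT siblings of cores 0..N-1 appear as logical CPUs N..2N-1 (Windows usually interleaves siblings as 0/1, 2/3, … instead, and `os.sched_setaffinity` is Linux-only):

```python
import os

def physical_cpu_set(n_logical):
    """One logical CPU per physical core, assuming SMT siblings are
    enumerated at n_logical/2 .. n_logical-1 (typical Linux layout)."""
    return set(range(n_logical // 2))

def pin_to_physical_cores(pid):
    """Restrict process `pid` to physical cores only (Linux-only syscall)."""
    os.sched_setaffinity(pid, physical_cpu_set(os.cpu_count()))

# An 8C/16T Ryzen 7 would end up pinned to logical CPUs 0-7:
print(sorted(physical_cpu_set(16)))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The rest of the system still sees all 16 threads, so background tasks keep the SMT benefit while the pinned game behaves like it's on an 8C/8T part.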
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 20:11
imadaily
@CAT-THE-FIFTH
Polishing products takes time and money, the further you want something polished the harder it is to polish it more.
AMD are cash and resource limited, they can't afford to spend too long doing it before release.

As for the SMT reducing performance I think that AMD telling me that the 8C/16T processor I just (Hypothetically) purchased will work best if I reduce it down to a 8C/8T processor would be a massive slap in the face.

As far as I know, enabling and disabling SMT is not a per application can be changed on the fly setting it's a ‘reboot and change it in BIOS’ setting.
I think it's much better for people to just get on with using it as a 8C/16T CPU and then down the line when Microsoft/AMD/Game devs work things out then you get a nice extra boost.

It's the first impressions which are important, and so many major sites like Ars Technica are testing with SMT enabled, which reduces performance - they are literally saying buy a Core i7 7700K. If those scores were between 5% and 15% higher, it might change the tone of some reviews.

First impressions do count, and the issue is AMD wants to convert all the less techy types who would only buy Intel into buying AMD. They are unlikely to be on forums or reading comments, etc.

The problem is that with SMT on, the drops are enough to knock performance down by up to 15% in games, and you are still paying £320 to £500 for an 8-core CPU. The whole issue is that we on forums might look at this and think, yeah, it will get better.

OTOH, plenty of reviews don't mention this or test with SMT off, which makes performance look a bit meh, and it also means people might think the 4C/8T ones will be the same and just ignore Ryzen in its first iteration.

It's important to put it out there that Ryzen's SMT ability is not fully supported by games - look at the 8C/8T performance; it's quite reasonable. Games also generally tend to scale better with cores than with virtual threads.

Review sites need to be aware of this for the 4C/8T and 6C/12T parts before they are reviewed, if there are no fixes for the performance regressions in many games in the next few months.

But for me it really means I am not sure I am going to get Ryzen.

I have a Xeon E3 1230 V2/Core i7 3770 which I want to upgrade at some point. The 4C/8T Ryzen is meant to be between £175 to £200 and the 6C/12T one is £260.

The issue is that the 4C/8T one is now going to have to go core for core with a Core i5 7500 or 7600 and the latter has a clockspeed and IPC advantage.

AMD is a bit screwed in that area now - the Core i5 7600>>>Ryzen R5 1400X since the SMT on Ryzen does nothing for gaming.

It makes my plan of probably getting a 4C/8T Ryzen pointless now, since it would be a downgrade from my 4C/8T IB Core i7, given the SMT is close to useless in gaming. It will have to be the 6C one, but then I can get a Core i7 7700 for not much more, and since I am a mini-ITX fan, Intel is generally better served in that area.

At this rate I might as well see how Coffee Lake fares.
Posted by malfunction - Thu 02 Mar 2017 20:34
It looks like another scheduler type issue similar to bulldozer - windows / apps and games will need to know how to schedule / which core(s) to pick (affinity). Some theory here (if google translate is working):

http://www.hardware.fr/articles/956-22/retour-sous-systeme-memoire.html
Posted by Phage - Thu 02 Mar 2017 20:39
Hi Cat
I'm not so budget limited and rip a lot of discs to play when I'm on the road. I'm going to wait a month and see if things improve in the next 6 weeks, as that's when I'm expecting my tax refund.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 20:40
malfunction
It looks like another scheduler type issue similar to bulldozer - windows / apps and games will need to know how to schedule / which core(s) to pick (affinity). Some theory here (if google translate is working):

http://www.hardware.fr/articles/956-22/retour-sous-systeme-memoire.html

We'll need to see if the first set of Windows drivers/patches AMD said will drop in a few weeks makes a difference - I really hope that by the time the 4C/8T models drop we don't still have negative SMT scaling in games, otherwise the only way they will compete is to have the max 4C/8T SKU at £175, as the Core i5 7500 will be quicker in games. Thankfully for AMD, Intel blocked BCLK overclocking with Kaby Lake.
Posted by scaryjim - Thu 02 Mar 2017 20:40
Tunnah
… We've not quashed anything and I gotta say mate you have a real crappy attitude over this …

Hm, maybe that did come over as a bit angsty. OTOH

Tunnah
… I'm saying it because for a long while yet games are going to be limited to 4 cores being used …

We're already seeing a number of games benefit from more than 4 cores - or at least more than 4 threads. That number will continue to increase.

Tunnah
… quite a few games will benefit from extra speed. …

Hmmm, here's where your assertion gets tricky.

Firstly, there's the assumption that Intel *has* extra speed when you look at price equivalent chips. The vast majority of gamers won't buy Ryzen 7 or a Core i7 7700k. Look to the mainstream chips - Ryzen quads top out at $199, according to the so-far-very-accurate rumour mill, and that's for a 4C/8T chip with 3.9GHz turbo and XFR. At that price point Intel has multiplier-locked Core i5s with much lower clock speeds. Based on the comparison between the Ryzen 7 chips and the 7350k/7600k, they're close enough that a 5% - 10% clock speed advantage could actually make them faster in a lot of tests.

Secondly, there's the assumption that the CPU is a meaningful bottleneck. For benchmarking at 1080p with a GTX 1080 strapped in, sure, the CPU is going to make a measurable difference. But a) that isn't what most people will be running, and b) a measurable difference isn't necessarily a noticeable difference. The most powerful GPU used in my family is an RX 460, and we all play games - a faster CPU would do little to change our 1080p gaming experience.

So, we're looking at a situation where for most people their GPU is more likely to be a bottleneck than their CPU, and where at a given price point - unless Intel start hacking their list prices - it looks likely that AMD will provide roughly similar single threaded performance with either more cores or more threads. And that's without even considering overclocking - which AMD will be offering on all SKUs while Intel reserve it for only the most expensive chip in each range.

That's why I don't get why everyone is still touting the “Intel's the only choice for gamers” line. Everything that's been published today points to the sub £200 market being VERY competitive once Ryzen 3 & 5 launch.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 20:46
SJ, the problem is the 4C/8T SKUs are utterly screwed now - they need to get SMT sorted out in the next few months to work reliably with gaming, otherwise at £175 to £200 it's going to be literally 4C Ryzen against 4C KL, and Intel will win that one.

If they can't get SMT to work reliably in games in the next few months,I think they should not bother with the 4C/8T version and just launch 4C Ryzen CPUs(it should be easier to get more to qualify as a 4C/4T SKU) and the 6C/12T at £220 to £260.

At least the people buying a 6C CPU will probably also want to do stuff other than gaming and it will be Intel 4C vs 6C just like the good old Core i5 750 vs the Phenom II X6 1055T.


Phage
Hi Cat
I'm not so budget limited and rip a lot of discs to play when I'm on the road. I'm going to wait a month and see if things improve in the next 6 weeks, as that's when I'm expecting my tax refund.

That's the thing - for that kind of stuff, Ryzen 7 looks pretty good TBH. It's where Intel will find it much harder to compete with their quad cores.
Posted by scaryjim - Thu 02 Mar 2017 21:04
CAT-THE-FIFTH
SJ,the problem is the 4C/8T SKUs are utterly screwed now - they need to get SMT sorted out in the next few months to work reliably with gaming ….

The SMT thing needs more investigation, but there is at least one bit of good news for 4C Ryzen - have you read malfunction's link?

malfunction
It looks like another scheduler-type issue similar to Bulldozer - Windows / apps and games will need to know how to schedule / which core(s) to pick (affinity). Some theory here (if Google Translate is working):

http://www.hardware.fr/articles/956-22/retour-sous-systeme-memoire.html

According to that, one of the biggest issues for Ryzen in gaming (and some other workloads) is the extremely high latency and low bandwidth between the CCXes, which is exacerbated in moderately threaded situations by Windows 10 regularly moving threads between cores. If a thread gets moved and its data is now in the other CCX's L3, it'll end up with a cache miss and a huge latency penalty getting that data back in.

Assuming 4C Ryzen works by completely deactivating one CCX (which seems logical given the halving of L3 cache as well) that won't be a problem for it - there won't be another CCX for threads to get migrated to. So part of the problem may be mitigated inherently by the method of harvesting dies…!

EDIT: looking at the SMT scaling you posted here, it looks like Civ and GTA V are least affected, which I believe are the most CPU intensive games in that list? That would make sense if Windows 10 only moves threads in situations where cores are lightly loaded - put lots of load on the cores and no thread movement so no cache misses; lightly load the cores, more thread movement, more cache misses. That'd be easily fixable in driver or scheduler - simply tell the scheduler not to move active threads…!
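(For anyone who wants to experiment before any scheduler patch lands: pinning a process to one CCX's cores is trivial to script. A minimal sketch using Python's stdlib - `os.sched_setaffinity` is Linux-only, and the assumption that logical CPUs 0-3 all belong to the same CCX is mine, so check your topology first; Windows exposes the same idea via `SetProcessAffinityMask`.)

```python
import os

# Assumption: logical CPUs 0-3 sit on the same CCX. With SMT on the
# numbering may differ - verify with lscpu before relying on this.
ccx0 = {0, 1, 2, 3}

# Intersect with what's actually online so this also runs on small boxes.
available = os.sched_getaffinity(0)
os.sched_setaffinity(0, ccx0 & available)

# The scheduler can now only place this process's threads on those cores,
# so a migrated thread never lands with its data in the "wrong" L3.
print(sorted(os.sched_getaffinity(0)))
```

A game launcher script could apply the same mask to the game's PID, which is exactly the "tell the scheduler not to move active threads off-CCX" idea, just done per-process rather than in the kernel.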
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 21:23
scaryjim
The SMT thing needs more investigation, but there is at least one bit of good news for 4C Ryzen - have you read malfunction's link?



According to that, one of the biggest issues for Ryzen in gaming (and some other workloads) is the extremely high latency and low bandwidth between the CCXes, which is exacerbated in moderately threaded situations by Windows 10 regularly moving threads between cores. If a thread gets moved and its data is now in the other CCX's L3, it'll end up with a cache miss and a huge latency penalty getting that data back in.

Assuming 4C Ryzen works by completely deactivating one CCX (which seems logical given the halving of L3 cache as well) that won't be a problem for it - there won't be another CCX for threads to get migrated to. So part of the problem may be mitigated inherently by the method of harvesting dies…!

EDIT: looking at the SMT scaling you posted here, it looks like Civ and GTA V are least affected, which I believe are the most CPU intensive games in that list? That would make sense if Windows 10 only moves threads in situations where cores are lightly loaded - put lots of load on the cores and no thread movement so no cache misses; lightly load the cores, more thread movement, more cache misses. That'd be easily fixable in driver or scheduler - simply tell the scheduler not to move active threads…!

I hope that is the case TBH, which is why I am still perplexed they couldn't wait another month until the Windows patches dropped, as those will most likely have some improvements to scheduling.

Motherboards are also a big issue currently - not only are motherboard companies scrambling to get the BIOSes up to scratch, the OEMs apparently underestimated the number of CPUs AMD have made for sale too.

This has been needlessly rushed out.
Posted by scaryjim - Thu 02 Mar 2017 21:29
CAT-THE-FIFTH
… This has been needlessly rushed out.

I know you keep saying that, but the announcements and planning have been going on for months. I don't think it's been rushed as much as bungled - there was plenty of time to get everything lined up. Motherboards were announced - what, last October when Bristol Ridge launched to OEMs? So why the BIOSes weren't ready is beyond me. AMD must've known how many chips they'd be putting into the channel weeks ago … it's almost as if they planned the CPU/engineering side of the launch, then suddenly realised at the last minute that they'd forgotten the platform and software ecosystem to go with it… :O_o1:
Posted by imadaily - Thu 02 Mar 2017 21:32
scaryjim
….That would make sense if Windows 10 only moves threads in situations where cores are lightly loaded - put lots of load on the cores and no thread movement so no cache misses; lightly load the cores, more thread movement, more cache misses…


If it's just lightly loaded cores that get shifted, would there be some kind of benefit to artificially loading cores unused by the game just so that Windows can't shuffle things around?
It'd be a super bodge job but possibly worth a go.

The ideal would be to find something that loads the required number of unused cores while using as little of the other CPU resources and memory bandwidth as possible…
Could make for an interesting benchmark: running a couple of instances of Super Pi in the background in an attempt to increase the in-game performance!

I'm fairly sure there are reasons this wouldn't work, but it would be interesting!
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 21:49
scaryjim
I know you keep saying that, but the announcements and planning have been going on for months. I don't think it's been rushed as much as bungled - there was plenty of time to get everything lined up. Motherboards were announced - what, last October when Bristol Ridge launched to OEMs? So why the BIOSes weren't ready is beyond me. AMD must've known how many chips they'd be putting into the channel weeks ago … it's almost as if they planned the CPU/engineering side of the launch, then suddenly realised at the last minute that they'd forgotten the platform and software ecosystem to go with it… :O_o1:

What is another month when AMD already pushed it back three months? I understand AMD needs to get a return on sales, but pushing out the platform before it's ready is pointless.

With at least some scheduling patches, more mature motherboard BIOSes, and maybe even an extensive test to see if SMT impacted games, this would have been a better launch.

I mean, there are not even enough motherboards available yet. You are more likely to be able to buy a CPU than a motherboard to put it in.

AMD did this with the Fury X - launched it with poor QC and drivers which were not quite there yet. The RX 480 was a success but the poor cooler did lose them sales to Nvidia.

It's probably why I am so irritated - it's a continual thing we see with AMD.

Please, pretty please AMD, don't do this with Vega.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 21:52
A side note: some of the AMD benchmarkers on XS have looked at overclocking on Ryzen. They have concluded that the R7 1700 is a low-leakage part and the R7 1700X/1800X are high-leakage parts. This is confirmed by reviews which show the R7 1700 consuming less power at the same voltage.
Posted by kompukare - Thu 02 Mar 2017 21:54
CAT-THE-FIFTH
If they can't get SMT to work reliably in games in the next few months,I think they should not bother with the 4C/8T version and just launch 4C Ryzen CPUs(it should be easier to get more to qualify as a 4C/4T SKU) and the 6C/12T at £220 to £260.

HT still has issues in gaming according to the hardware.fr numbers:
Average SMT/HT on versus off in games is -1.9% for Intel (i7-6800K) versus -8.9% for AMD.
The converse of that is that the average in apps is a gain in both cases with +22.5% for Intel and +25.6% for AMD.
So a really impressive showing for AMD's first try at SMT vs HT, which Intel have been using for years (since 2008 in Nehalem, if we ignore the P4's HT from 2002).

Speaking of Hardware.fr, it's nice that they did a comparison at a constant 3.0GHz (pity they didn't include Skylake or Kaby Lake, though they might have had to force them all to run as quad cores for that to be meaningful).
http://www.hardware.fr/articles/956-6/piledriver-zen-broadwell-e-3-ghz.html


On average, for applications the uplift vs Piledriver is +131.9%, which is impressive, but Broadwell-E is +159.5%. The biggest outlier there seems to be archiving, where the quad-channel memory makes such a big difference.

Still, gamers are a picky bunch (who constantly seem to ignore that reviewers generally benchmark without any background applications), so it might make sense for AMD to release a 5-1450, i.e. a 6C/6T part with a 4.0GHz or higher turbo. The 5-1500 is rumoured to cost $230 (c. £220) and the 4C/4T parts (3-1200X and 3-1100) are cheaper than the 4C/8T parts by about $50, so such a 5-1450 part might be around £200.
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 22:09
So they downclocked the chips to 3GHz and left SMT enabled.

The problem is I have an IB Core i7, and this is the issue I am telling you about. The average IPC difference with SMT enabled is BW-E having 28% higher IPC. That probably places IB as no worse, or even better, in IPC than Ryzen for gaming.

I need to see comparisons core for core with SMT off against Haswell, etc. to see if Ryzen is worth it. I really don't see the point of regressing in IPC, or having the same IPC as IB in games, for me to change over, even with more cores.

I was really expecting AMD to get to Haswell-level IPC for gaming, but there's no point for me if they can't get there.

I might as well spend zero money and stay with what I have.
Posted by Firejack - Thu 02 Mar 2017 22:52
Tabbykatze
CAT-THE-FIFTH
But the problem is, like hiding a dirty family secret, it's different when you get on top of it yourself, instead of somebody “finding it out”.

Well, I think that's taking it a bit far but I somewhat agree.
Personally I think Cat has nailed the description of the situation perfectly.

Tabbykatze
The Hexus review is the best I've seen thus far. What I do find interesting is the fact that AMD falls behind in 1080p but locksteps at higher resolutions.
The reason Ryzen matches Intel at higher resolutions is that you are moving into a scenario where you are bottlenecked by the GPU, not the CPU. Any advantage Intel/AMD CPUs have over one another isn't going to show.
Posted by scaryjim - Thu 02 Mar 2017 22:55
CAT-THE-FIFTH
… I was really expecting AMD to get to Haswell-level IPC for gaming, but there's no point for me if they can't get there. …

Time will tell CAT - in Civ 6 Intel actually lose more performance with HT on than AMD do, so it's clearly not an inherent problem with the hardware implementation. AMD's SMT is more efficient than Intel's overall, and really shines in heavily loaded scenarios.

I do wonder, however, if the 6 core chips will suffer more from the cross-CCX latency since they'll only have 6 threads per CCX so it's going to be more likely that related threads will not be on the same CCX. And as I said, if the 4 core chips are made by simply disabling an entire CCX they *should*, hopefully, behave more predictably…
Posted by een4dja - Thu 02 Mar 2017 23:09
Thanks for the review; looking forward to AMD Naples and its 32 cores :) Gives me hope they will be competitive with Skylake-EP E5-2699 V5. If they can keep power under 200W and a sustained all-core clock of at least 2.2GHz (extra-wide vector instructions would be a bonus), the server market might be in for a shake-up too!
Posted by CAT-THE-FIFTH - Thu 02 Mar 2017 23:15
The problem, SJ, is that the clock is ticking. Apparently AMD is saying in its Reddit AMA (Lisa Su was answering questions there) that it's more an optimisation issue, with devs having more experience coding for Intel CPUs, and that is very worrying to hear. If it's a scheduler patch, you can expect MS to get updates out over the next few months; if it's what you say, a CCX config issue with Windows, then some SKUs will be fine; but if it's more at a basic level, and it needs devs to patch it for their engines and games on a case-by-case basis, it means loads of games might not get patched - and for those which are, how long will it take??

The clock is also ticking in another sense - probably at the end of the year, or maybe early next, Intel will have Coffee Lake out, and they will have fewer of these issues than AMD, and better core IPC, etc.

So at this point, I will look at what Intel has. I don't want to be an experimental test subject for AMD - I want decent performance in games.

I really hope that the R5 1600X with SMT disabled will be an upgrade in single-core and MT games for those with SB and IB Core i7s. If not, unless Ryzen 2 is out quickly, Intel is a safer choice for a gamer, and I say this through gritted teeth given their rubbish product segmentation.

Edit!!

The Xeon E3 1230 V2 is from 2012, so it's getting on for 5 years old.

I am not asking for something world-shattering: AMD consistently beating a 5-year-old Intel 4C/8T CPU with their own 4C/8T CPU, or even a 6C/6T Ryzen beating the single-core scores of my CPU in gaming.

Second Edit!!

Moreover, if AMD does not fix the SMT issues with games, how can I recommend it to someone if they ask me for advice on a gaming rig??

AMD's AMA was not what I wanted to hear - it's essentially “devs need to fix it via additional game code, so we gave them kit to look at”, which sounds very nice of AMD, but actually means no time-frame, since it will happen when it happens, and they think only a “few” games are affected. Hardware.fr showed it's more than a few games.

This is why I didn't recommend an FX8350 over a Core i5 2500/3570K when it was launched, since it was way too variable in performance.

This is the same problem here.

TBH, I think the only SKU which might be fine is the cheapest R3 4C SKU, which is targeting the Core i3 and has no SMT, and maybe the 4C/8T if what SJ is saying is true, but AMD is not filling me with confidence.

The 6C/12T and 8C/16T Ryzen CPUs cannot be used properly in those configurations for games - they are 6C and 8C for games.
Posted by watercooled - Fri 03 Mar 2017 00:21
I'm not sure where I read this (could have even been here), but another suggestion is that some CPU dispatchers e.g. in games aren't recognising Zen yet, and are falling back to a compatible code path, which may in some cases be e.g. a Phenom one based on the manufacturer ID and the fact Zen drops support for some of Bulldozer's instructions. If a code path treats Zen like Phenom i.e. no SMT, it wouldn't be surprising to see regressions in SMT-enabled performance. A fix for this could literally be as simple as modifying the CPU dispatcher to point at e.g. the Intel binaries. This applies to Windows too - I do wonder if the Windows kernel is treating Zen correctly with regard to SMT yet?
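(To illustrate what that dispatcher fix looks like: a toy sketch, not any real game's or compiler's code - the vendor strings are the real CPUID ones, everything else here is made up for illustration:)

```python
# Toy model of a CPU dispatcher. Gating the fast path on the vendor
# string is the failure mode described above: a Zen chip reports
# "AuthenticAMD", so it falls through to the lowest-common-denominator
# path even though it supports the same instructions as recent Intel
# parts. Gating on reported features instead fixes it for free.

def pick_path_vendor_gated(vendor, features):
    if vendor == "GenuineIntel" and "avx2" in features:
        return "avx2_path"
    return "generic_path"      # Zen lands here despite having AVX2

def pick_path_feature_gated(vendor, features):
    if "avx2" in features:     # vendor-agnostic: Zen gets the fast path
        return "avx2_path"
    return "generic_path"

zen = ("AuthenticAMD", {"sse4_2", "avx", "avx2", "fma"})

print(pick_path_vendor_gated(*zen))   # generic_path
print(pick_path_feature_gated(*zen))  # avx2_path
```

That's why "point the dispatcher at the Intel-optimised binaries" can be a one-line fix: the code paths already exist, they're just being selected by the wrong test.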

But like SJ says, this isn't an AMD-exclusive problem - Intel's SMT-enabled processors have had this issue in some programs since at least Nehalem - and much of the work around alleviating it has already been done - I don't see any reason this same scheduling can't be successfully applied to Zen with relatively minimal effort. This isn't Bulldozer with its completely different and somewhat awkward core layout, and we've already seen that AMD's SMT implementation seems impressively efficient where it can be put to use.

I don't know about anyone else, but TBH Ryzen meets or exceeds my realistic mid-ground expectations. Months ago I was expecting it, very optimistically (and based on their 40% IPC claims), to approach Haswell-E, but fall behind Skylake in lightly-threaded stuff, and probably not be terribly close in power consumption. Very recently my expectations increased in line with the published (and leaked) benchmarks - competing with Broadwell-E on average, trailing in some areas but significantly outperforming in others, and that's what AMD have delivered. I never realistically expected an 8-core, lower-clocked, workstation-type CPU to wipe the floor with a very highly-clocked, tiny die desktop CPU like the 7700k across the board in lightly-threaded stuff, so I'm not disappointed TBH. Even AMD's own benchmarks put 1T performance in-line with Broadwell-E, and that's pretty much where it is. Power consumption is IMHO impressive too; they're actually more efficient than Intel in many applications - I never really expected that either TBH!

Oh and on top of that, looking at the uArch floorplans a while ago I was a little uneasy about FPU/vector performance (and reasoning that it wasn't a massive deal if it lagged a bit here), but it seems I had little to worry about. Obviously the narrower (vs Skylake) FMA will reduce performance in a few areas, but I genuinely can't think of many desktop (or server, apart from HPC) uses that really matters to.

And I don't think I was alone in looking at the (presumed optimistic) 40% IPC uplift claims and thinking “ah but what about clock speed?” thinking AMD were deliberately avoiding that. Perhaps they were also concerned about yields at the time, but that's yet another thing they surpassed my pessimistic expectations on, with ~4GHz clocks and good efficiency to boot.

We know AMD could do with working on perception of their products on launch, but given this is a brand new architecture, on a new node (for them), with technologies they've never used before (e.g. SMT), completely new platform and chipsets, and so on… they've had a lot to contend with. Intel do what they can to avoid big risks like this, but AMD never really had a choice but to change, and Zen seems like a good base to build on going forward, much like I guess Nehalem has been for Intel.

At the end of the day, Ryzen is a HUGE improvement over the Bulldozer lineage and puts AMD back in the same ballpark as the best from Intel on most fronts - if that's not a flippin' good start for a new uArch family I don't know what is.

We're back to being concerned about teething problems and hair splitting over individual benchmarks, vs the situation of basically no competition we've been in for far too long. Yeah, it's a shame the cheaper 4C parts aren't out yet which could be a better choice for more lightly-threaded games/applications, the SMT thing should have been fixed before release day, and they should have made sure the memory benchmarks were correctly optimised, but these are all things that should be solvable in software - it's just a shame about these problems marring day 1 reviews.

@CAT: WRT to gaming recommendation with SMT - why not just turn it off for now if you feel the CPU would be a good recommendation without it?
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 00:38
watercooled
I'm not sure where I read this (could have even been here), but another suggestion is that some CPU dispatchers e.g. in games aren't recognising Zen yet, and are falling back to a compatible code path, which may in some cases be e.g. a Phenom one based on the manufacturer ID and the fact Zen drops support for some of Bulldozer's instructions. If a code path treats Zen like Phenom i.e. no SMT, it wouldn't be surprising to see regressions in SMT-enabled performance. A fix for this could literally be as simple as modifying the CPU dispatcher to point at e.g. the Intel binaries. This applies to Windows too - I do wonder if the Windows kernel is treating Zen correctly with regard to SMT yet?

But like SJ says, this isn't an AMD-exclusive problem - Intel's SMT-enabled processors have had this issue in some programs since at least Nehalem - and much of the work around alleviating it has already been done - I don't see any reason this same scheduling can't be successfully applied to Zen with relatively minimal effort. This isn't Bulldozer with its completely different and somewhat awkward core layout, and we've already seen that AMD's SMT implementation seems impressively efficient where it can be put to use.

I don't know about anyone else, but TBH Ryzen meets or exceeds my realistic mid-ground expectations. Months ago I was expecting it, very optimistically (and based on their 40% IPC claims), to approach Haswell-E, but fall behind Skylake in lightly-threaded stuff, and probably not be terribly close in power consumption. Very recently my expectations increased in line with the published (and leaked) benchmarks - competing with Broadwell-E on average, trailing in some areas but significantly outperforming in others, and that's what AMD have delivered. I never realistically expected an 8-core, lower-clocked, workstation-type CPU to wipe the floor with a very highly-clocked, tiny die desktop CPU like the 7700k across the board in lightly-threaded stuff, so I'm not disappointed TBH. Even AMD's own benchmarks put 1T performance in-line with Broadwell-E, and that's pretty much where it is. Power consumption is IMHO impressive too; they're actually more efficient than Intel in many applications - I never really expected that either TBH!

Oh and on top of that, looking at the uArch floorplans a while ago I was a little uneasy about FPU/vector performance (and reasoning that it wasn't a massive deal if it lagged a bit here), but it seems I had little to worry about. Obviously the narrower (vs Skylake) FMA will reduce performance in a few areas, but I genuinely can't think of many desktop (or server, apart from HPC) uses that really matters to.

And I don't think I was alone in looking at the (presumed optimistic) 40% IPC uplift claims and thinking “ah but what about clock speed?” thinking AMD were deliberately avoiding that. Perhaps they were also concerned about yields at the time, but that's yet another thing they surpassed my pessimistic expectations on, with ~4GHz clocks and good efficiency to boot.

We know AMD could do with working on perception of their products on launch, but given this is a brand new architecture, on a new node (for them), with technologies they've never used before (e.g. SMT), completely new platform and chipsets, and so on… they've had a lot to contend with. Intel do what they can to avoid big risks like this, but AMD never really had a choice but to change, and Zen seems like a good base to build on going forward, much like I guess Nehalem has been for Intel.

At the end of the day, Ryzen is a HUGE improvement over the Bulldozer lineage and puts AMD back in the same ballpark as the best from Intel on most fronts - if that's not a flippin' good start for a new uArch family I don't know what is.

We're back to being concerned about teething problems and hair splitting over individual benchmarks, vs the situation of basically no competition we've been in for far too long. Yeah, it's a shame the cheaper 4C parts aren't out yet which could be a better choice for more lightly-threaded games/applications, the SMT thing should have been fixed before release day, and they should have made sure the memory benchmarks were correctly optimised, but these are all things that should be solvable in software - it's just a shame about these problems marring day 1 reviews.

@CAT: WRT to gaming recommendation with SMT - why not just turn it off for now if you feel the CPU would be a good recommendation without it?

The problem is AMD is not saying this is a general Windows problem with SMT - they are saying the fixes need to be implemented by devs themselves, and that is worrying even if they are “simple”.

The problem is: are they going to be able to patch every recent game and every future game?? Will all engines get patched?? What's the time-frame? AMD was very vague on this - basically it's done when it's done and they gave kits to devs, and that is what I got from it.

It's so easy for Intel to quietly push companies not to bother, too.

It's not the odd game it's happening in - it's loads of games.

The problem is the 4C/8T SKU won't be better than my IB Core i7 if this is the case, which means it will be the £260 6C version I need to consider as an upgrade over IB 4C/8T. But I don't know if the ST IPC is there yet for gaming, and the 4C/8T Core i7 7700 is not much more.

Hardware.fr with SMT enabled gives BW-E a 28% IPC increase over Ryzen in gaming. I think SB to Skylake is less than that for gaming.

Add to this I need a mini-ITX motherboard and AMD has worse support in that area anyway.

It gets worse down the line - if Intel decides to plonk the 6C/12T Coffee Lake at current Core i7 7700K pricing, it's not going to be great if AMD hasn't got SMT working reliably in games.

It also goes down to a basic fact - most gamers want to game without worrying whether they need this or that patch for their CPU.

How many people will want a Ryzen CPU if they need to disable part of it to work properly in games?

They will just not bother and get an Intel CPU anyway.

Sadly, until benchmarks say otherwise, Ryzen for me is like the FX8350 against a Core i5 2500K. It has variable performance and only really gets close with MOAR cores.

For me, I honestly think if you are a gamer you might as well skip most of the line for the time being, until we see more action on the SMT front, whether MS pushes some updates which alleviate the problem, and if the 6C version without SMT can actually consistently match or beat a Core i5.

If you are NOT gaming, the R7 series is bloody great, but for gaming it's half-baked.

Maybe in a few months it will be different, but that will mean a few months closer to whatever new stuff Intel is launching.
Posted by watercooled - Fri 03 Mar 2017 01:01
CAT-THE-FIFTH
The problem is AMD is not saying this is a general Windows problem with SMT - they are saying the fixes need to be implemented by devs themselves, and that is worrying even if they are “simple”.

The problem is: are they going to be able to patch every recent game and every future game?? Will all engines get patched?? What's the time-frame? AMD was very vague on this - basically it's done when it's done and they gave kits to devs, and that is what I got from it.
It's not like they would have to actively patch every game, in the same way Intel don't have to actively patch every single game to avoid SMT performance regressions - once it's known about, it can be factored into the decisions made by the CPU dispatchers - it could even be easier than it currently is - Ryzen has at least feature parity with many recent Intel CPUs so in some cases it might literally just be a case of using an Intel-optimised binary rather than an AMD-specific one (but any game devs on Hexus might want to add some detail to that). It's really nothing unusual or specific to Zen that we're seeing - the weirdness with Bulldozer core parking didn't last long before it was made a non-issue, and that was a far more unlike-Intel uArch than AMD have now.

CAT-THE-FIFTH
The problem is the 4C/8T SKU won't be better than my IB Core i7 if this is the case. The issue is that this means it will be the £260 6C version which I need to consider as an upgrade over IB 4C/8T. The problem is I don't know if the ST IPC is there yet for gaming,and the 4C/8T Core i7 7700 is not much more.
TBH there probably aren't that many games you desperately need to upgrade for anyway - I don't find that many games are heavily CPU bottlenecked on even my 1055T.

CAT-THE-FIFTH
Hardware.fr with SMT enabled gives BW-E a 28% IPC increase over Ryzen in gaming. I think SB to Skylake is less than that for gaming.
That's definitely not the case for all games, maybe a couple of the worst ones, but given Ryzen considerably outperforms BW-E in many games I find that generalisation hard to believe.

CAT-THE-FIFTH
It also goes down to a basic fact - most gamers want to game without worrying whether they need this or that patch for their CPU.
I doubt they'd be manual - either Steam/Origin or Windows Update would most likely sort it silently for you.

CAT-THE-FIFTH
How many people will want a Ryzen CPU if they need to disable part of it to work properly in games?

They will just not bother and get an Intel CPU anyway.
So if a Ryzen CPU was a better choice than an Intel one, but only if you made a single (and probably temporary) change in the BIOS, then people would rather go Intel? Why not avoid i5s then, since you're just getting an i7 with forced-off SMT?

CAT-THE-FIFTH
Sadly until benchmarks say otherwise,Ryzen for me is like the FX8350 against a Core i5 2500K. It has variable performance and only really gets close with MOAR cores.
We'll have to disagree on that one, I don't think it's even remotely comparable - not least because Ryzen is more than competitive even in many single-threaded benchmarks and power efficiency. It also compares favourably with equivalent core-count Intel parts and has very efficient SMT.

CAT-THE-FIFTH
For me, I honestly think if you are a gamer you might as well skip most of the line for the time being, until we see more action on the SMT front and whether MS pushes some updates which alleviate the problem.

If you are NOT gaming, the R7 series is bloody great, but for gaming it's half-baked.
I'd be inclined to agree, and at the same time I'd say for a pure gaming rig, you're probably best off with something cheaper for your CPU.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 01:43
watercooled
It's not like they would have to actively patch every game, in the same way Intel don't have to actively patch every single game to avoid SMT performance regressions - once it's known about, it can be factored into the decisions made by the CPU dispatchers - it could even be easier than it currently is - Ryzen has at least feature parity with many recent Intel CPUs so in some cases it might literally just be a case of using an Intel-optimised binary rather than an AMD-specific one (but any game devs on Hexus might want to add some detail to that). It's really nothing unusual or specific to Zen that we're seeing - the weirdness with Bulldozer core parking didn't last long before it was made a non-issue, and that was a far more unlike-Intel uArch than AMD have now.


TBH there probably aren't that games you desperately need to upgrade for anyway - I don't find that many games are heavily CPU bottlenecked on even my 1055T.


That's definitely not the case for all games, maybe a couple of the worst ones, but given Ryzen considerably outperforms BW-E in many games I find that generalisation hard to believe.


I doubt they'd be manual - either Steam/Origin or Windows Update would most likely sort it silently for you.


So if a Ryzen CPU was a better choice than an Intel one but only if you made a single (and probably temporary) change in BIOS, then people would rather go Intel? Why not avoid i5's then since you're just getting an i7 with forced-off SMT?


We'll have to disagree on that one, I don't think it's even remotely comparable - not least because Ryzen is more than competitive even in many single-threaded benchmarks and power efficiency. It also compares favourably with equivalent core-count Intel parts and has very efficient SMT.


I'd be inclined to agree, and at the same time I'd say for a pure gaming rig, you're probably best off with something cheaper for your CPU.

Dude - multiple websites have shown regressions with SMT enabled.

Those are all MAJOR titles. It's not the first or last website to show it.

BF1 of all titles too?

Its not isolated. Other websites tested it too - performance regressions in loads of titles with SMT. Thats at least 12 major titles affected by SMT problems, and some of the regressions are not that small.

The AMD AMA was saying it needed to be patched by devs on a case-by-case basis - can you guarantee every game made in the last three years which has issues will be patched? You can't. AMD said “you can't estimate the timeframes on these” when asked how long devs will take. They were busy moaning that it was because more games are optimised for Intel.

Then somebody asked if a Windows patch would help - they said no. Then somebody asked - surely this is a big issue?? They deflected and said “its not many games”. FFS, Hardware.fr tested 8 MAJOR TITLES and 7 of them had big performance regressions, and other sites saw the same in other games.

These are just common review titles - what about other games??

I have a Core i7 3770/Xeon E3 1230 V2 and I hit CPU limits in certain games with that. I like AMD, but hope alone won't make games run faster.

I am sorry I can't just recommend a CPU like this to an average gamer with SMT being bugged like this - the 4C/8T models are going to fail versus a Core i5 if that is the case. I sincerely hope what SJ and others have postulated is true about the CCX config being a consideration.

It only needs Intel to drop the Core i7 7700K price a bit and suddenly it's 4C/8T against 6C/6T, since SMT is hit and miss.

What we need is some proper core for core IPC estimates without SMT in gaming.

The only Intel CPUs which will be soundly beaten are the Core i3s, since it will be 4C Ryzen against 2C/4T Intel Core i3 CPUs.

A locked Core i5 at just under £200 would probably be more consistent than the 4C/8T version if it has to fall back on 4C performance.

For example the Core i5 7500 is more or less the same speed as a stock Core i5 6600K.

AMD is also very lucky Intel locked down BCLK overclocking too, so if AMD can eke out a few hundred MHz it might help a bit.

But the SMT issues put a damper on the lower-end SKUs TBH.

It's the 4C/8T SKUs which really would have put a damper on Intel in the gaming market.

Edit!!

If an enthusiast who is generally supportive of AMD despite their “issues” feels like this, I hate to think what an average gamer will think. Most likely they will take the benchmarks and all the reviews saying Ryzen is a miss for gaming, and just get a Core i5 anyway.

It might be better in a few months but again that means a few months closer to whatever new stuff Intel is launching.

For non-gaming use the performance looks very nice for the price, but sadly Ryzen 7 is a bit of a split personality.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 02:34
On the flip side under Linux and for server usage Ryzen 7 shines:

http://www.phoronix.com/scan.php?page=article&item=ryzen-1800x-linux&num=1
https://www.servethehome.com/amd-ryzen-7-1700x-linux-benchmarks/
Posted by kalniel - Fri 03 Mar 2017 08:29
CAT-THE-FIFTH
On the flip side under Linux and for server usage Ryzen 7 shines:

https://www.servethehome.com/amd-ryzen-7-1700x-linux-benchmarks/

That NAMD result makes me very happy.
Posted by azrael- - Fri 03 Mar 2017 09:25
And I'm still waiting for *official* word on the ECC capabilities of Ryzen, preferably for the desktop version.

I thought that maybe AMD might have put up some specs, but instead they've conveniently updated their entire website, which is now heavy on media fluff and quite light on actual details. The information is probably there somewhere, but I cannot find it.
Posted by kompukare - Fri 03 Mar 2017 09:41
azrael-
And I'm still waiting for *official* word on the ECC capabilities of Ryzen, preferably for the desktop version.

I thought that maybe AMD might have put up some specs, but instead they've conveniently updated their entire website, which is now heavy on media fluff and quite light on actual details. The information is probably there somewhere, but I cannot find it.

It was covered in the AMA on Reddit
https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def5ayl/
ECC is not disabled. It works, but not validated for our consumer client platform.
So basically up to the mobo maker. Now most ASRock boards list ECC but whether they mean “you can install it but ECC function is not enabled”, or that they have full support is another question.
Posted by azrael- - Fri 03 Mar 2017 09:47
kompukare
It was covered in the AMA on Reddit
https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def5ayl/

So basically up to the mobo maker. Now most ASRock boards list ECC but whether they mean “you can install it but ECC function is not enabled”, or that they have full support is another question.
Thanks for the pointer. :)
Posted by Firejack - Fri 03 Mar 2017 10:32
watercooled
I'm not sure where I read this (could have even been here), but another suggestion is that some CPU dispatchers e.g. in games aren't recognising Zen yet, and are falling back to a compatible code path, which may in some cases be e.g. a Phenom one based on the manufacturer ID and the fact Zen drops support for some of Bulldozer's instructions. If a code path treats Zen like Phenom i.e. no SMT, it wouldn't be surprising to see regressions in SMT-enabled performance. A fix for this could literally be as simple as modifying the CPU dispatcher to point at e.g. the Intel binaries. This applies to Windows too - I do wonder if the Windows kernel is treating Zen correctly with regard to SMT yet?…
This was answered on Reddit.

Thanks for the question. In general, we've seen great performance from SMT in applications and benchmarks but there are some games that are using code optimized for our competitor… we are confident that we can work through these issues with the game developers who are actively engaging with our engineering teams.
https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def4wcj/
Posted by Platinum - Fri 03 Mar 2017 10:58
Random question: the chip has 24 PCI-E lanes, and appears to be two quad-core units looking at the de-lidding.
Does this mean that either the quad-core versions will only come with 12 PCI-E lanes, or there are PCI-E lanes not in use on the 8-core versions?
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 11:32
Firejack
This was answered on Reddit.

https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def4wcj/

It's not just some - some here have massively AMD-tinted glasses on this and have their heads in the sand.

Look at my previous posts - 12 games at least show regressions.

No amount of silly deflecting changes the fact that loads of games are affected and AMD is basically saying it will be fixed when it's fixed.

Anybody trying to deny the scale of the issue, or the tepid response by AMD, is basically trying to hide a major gaming issue with the CPU on purpose, to deceive people.

If it's not a general scheduling issue under Windows - how many games is AMD going to make sure get fixed??

Every major game in the last few years?? Every one going forward??

It's almost like they didn't even bother trying to engage with some devs before launch.

Edit!!

What if AMD cheats and makes sure there are fixes only for some popular benchmarking titles used in reviews??
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 12:23
Anyway, since Ryzen is not only about gaming, this review by The Stilt on the non-gaming abilities of Ryzen is probably one of the most in-depth released so far and covers ground many other reviews have not:

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/
Posted by kalniel - Fri 03 Mar 2017 12:46
There is a terrible statement from Lisa Su in there: “Ryzen is doing well… when applications are more GPU bound” ?!? That's basically like saying ‘so long as you don’t need to use the CPU, our CPU is great!'
Posted by scaryjim - Fri 03 Mar 2017 12:52
CAT - there's a much deeper issue here than just “AMD's SMT is borked in gaming”. The charts you've shown are self-contradictory in at least one game - hardware.fr show virtually no change in Civ 6 with SMT enabled, while PCGamer have an 8% drop off.

Plus aside from Hardware.fr no-one's tested whether Intel also have SMT performance reductions in-game. Everyone's just gone “Oh look, AMD has this problem” without asking if it's an AMD problem or a game engine problem. Hardware.fr's results indicate that it's a bit of both depending on the game. Normally you'd be jumping on the fact that no-one's pointed out that Intel still have issues with HT in some games despite having had more than 8 years to optimise….

And pretty much every test up there is using a GTX 1080 at 1080p to return 100fps+. I remember the days when you used to complain bitterly about that kind of testing because it didn't reflect the experience most gamers would get! Has anyone tested with a GTX 1060 or RX 480 at 1080p? Does the CPU make a difference there?

You don't buy an 1800X and a GTX 1080 to game at 1080p. If they still have issues with the R5 1400X and a GTX 1060/RX 480, then I'll get worried. For now, it's an interesting but ultimately meaningless anomaly.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 13:07
scaryjim
CAT - there's a much deeper issue here than just “AMD's SMT is borked in gaming”. The charts you've shown are self-contradictory in at least one game - hardware.fr show virtually no change in Civ 6 with SMT enabled, while PCGamer have an 8% drop off.

Plus aside from Hardware.fr no-one's tested whether Intel also have SMT performance reductions in-game. Everyone's just gone “Oh look, AMD has this problem” without asking if it's an AMD problem or a game engine problem. Hardware.fr's results indicate that it's a bit of both depending on the game. Normally you'd be jumping on the fact that no-one's pointed out that Intel still have issues with HT in some games despite having had more than 8 years to optimise….

And pretty much every test up there is using a GTX 1080 at 1080p to return 100fps+. I remember the days when you used to complain bitterly about that kind of testing because it didn't reflect the experience most gamers would get! Has anyone tested with a GTX 1060 or RX 480 at 1080p? Does the CPU make a difference there?

You don't buy an 1800X and a GTX 1080 to game at 1080p. If they still have issues with the R5 1400X and a GTX 1060/RX 480, then I'll get worried. For now, it's an interesting but ultimately meaningless anomaly.

Because you talk about Intel having regression in what, one game?? The other was within a margin of error, and trying to point at Civ6 is not helping your case. Some games don't show any regressions, but if Civ6 is showing that it depends on the load, those games might show it too, depending on what area is tested.

You can't hide the fact that at least 12 games have major SMT issues, and you and some here are being entirely disingenuous on the issue and trying to bury it. You are trying to deceive people - and if I am saying it's a big issue despite having fought AMD's corner for so long, it is one.

I am also on a Xeon E3 1230 V2/Core i7 3770 - why should I give a flying crap about a work-in-progress SMT implementation when Intel does not have the issue?

What happens if the SMT issue persists in the 4C/8T models?? How are you going to tell people to buy one over a Core i5 7500 when the latter will most likely be faster?

Even with a GTX1060 or RX480, what's the point of the 4C/8T models if SMT is not working in games?? It's Ryzen 4C vs KL 4C.

What about the 6C/12T model if Intel drops the price of the Core i7 7700K??

Plus you are trying, on purpose, to bury the comments from AMD saying it needs to be optimised on a game-by-game basis??

That pretty much means that for gaming, SMT on Ryzen is dead in the water until it works properly, just like HT in the first P4 models.

It's another AMD “work in progress”.

A lot of prospective purchasers won't care that AMD has not had SMT before - it's not their concern.

You and I might give AMD some leeway,but plenty won't and they will just continue to get Intel CPUs and get Nvidia graphics cards.

That is my experience outside forums - most people are locked into Intel/Nvidia since they consider them more reliable.

Trying to argue with me won't change that and won't change the sales figures we see for Intel and Nvidia who are making a mint out of gamers.

AMD needs to up its game.

Just launching good hardware is not enough - software support needs to be there from day one.

A gamer is not interested in “maybe” support since “maybe” is not important for them.

Look at Apple - how much money do they make from iPhones?? Apple makes more money from the iPhone than the entirety of the Android market, IIRC.

Android has the reputation of being buggier but cheaper, etc., and I have no problem using Android and have zero interest in an iPhone.

I am sorry SJ,but I cannot agree with you on this.

This is one buggy launch after another for AMD, and sadly amongst the public they just don't have the mindshare for people to forget these issues.

This launch still makes AMD a cheaper alternative to Intel with MOAR cores.

But this time MOAR cores(the average gamer will think SMT is MOAR cores too) might actually negatively affect performance.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 13:28
Another thing - I am not going to blindly recommend AMD just because it is AMD. Some of you have short memories - when it came to actual build threads, I rarely recommended the first Phenom or first Bulldozer CPUs, even if there were arguments that they were fine. Maybe as an alternative in some cases, but for the most part I steered clear unless the need suited their design.

The same goes for the ATI HD2000 series, which again I would very rarely recommend. Even with the FX8350 I tended to push the Core i5s instead.

Even recently I avoided recommending AM3+ due to the age of the platform.

This is no different - SMT has problems and the CPU will need to stand on its own merits with SMT disabled against Intel CPUs with SMT enabled for gaming.

ATM, price for price, the Core i7 7700K is still a better gaming chip - the R7 1700X and the R7 1800X cost more. The R7 1700 might be viable overclocked, as it is lower leakage, but the motherboards which can actually handle a proper overclock seem to be more expensive than the Intel equivalents, and when overclocked its power draw is more than a Core i7 7700K's (which is not shocking), meaning you need a better board, which adds cost.

The 6C/12T models will need to be judged with SMT off against the Core i5 7600K for gaming. If Intel drops the Core i7 7700K pricing then its 6C vs 4C/8T.

I am sorry, but you can't just ignore that SMT is having a major problem with games and that even AMD is not really sure how long it will take to fix.

As a result it's a non-feature for a gamer in its current state.

Anybody saying otherwise needs to look at it objectively and take off their rose tinted glasses.

You want AMD to do well - don't make excuses for their rubbish SCREW UPs.

Otherwise they will NEVER learn as a company.

Edit!!

This is also what gets my goat (or more likely my moose) - it's all the excuse-making by some, when AMD themselves have said they do not know when it will be fixed.

So instead of calling out AMD on this,people want to bury it.

We are all for supporting the underdog(me especially) but this is taking it to a whole new level.
Posted by scaryjim - Fri 03 Mar 2017 13:32
CAT-THE-FIFTH
Because you talk about Intel having regression in what one game??

Four - they have worse regression than AMD in Civ 6, but also minor regression (2.5%) in GTA V, and significant regression (> 5%) in Watch Dogs 2 and F1 2016. That's half of the tested games.

CAT-THE-FIFTH
… You can't hide the fact that 12 games at least have major SMT issues and you and some here are being entirely disingenuous on the issue and trying to bury it. …

Not trying to bury anything. I've accepted that they have an issue; I'm just pointing out that it's likely to be a lot less relevant in real-world scenarios than in benchmarking ones.

CAT-THE-FIFTH
… You are trying to deceive people …

OK, that I outright take exception to. I'm offering a different interpretation of the data you provided that highlights factors you're not taking into account. I'm no more trying to deceive people than you are.

CAT-THE-FIFTH
… Its another AMD “work in progress”. …

All technology is a work in progress. You repeatedly say yourself how AMD's products are never the best at launch but last longer and get better support. Why the sudden change in tune?

CAT-THE-FIFTH
… I am sorry SJ,but I cannot agree with you on this. …

No need to apologise, I have no issue with people disagreeing with me :)

Besides, I'm not convinced we really disagree that much - I think we both agree that Ryzen 7 looks like a poor buy for gamers, particularly those targeting 1080p. I'm just a lot more optimistic about a) AMD getting fixes out for the worst of the problems, and b) the Ryzen 5 quad cores being much less affected by the issues.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 13:45
scaryjim
snip

The point is 12 MAJOR TITLES are affected, maybe more - and we cannot say how many more. That is the issue.

That response is worrying - we don't know how long it will take.

That is just PR talking - 12 MAJOR titles is not small, and those are only the ones reviewers use.

I honestly hope what you said about the 4C models is true and that AMD PR have just gotten slightly confused about whether MS scheduler updates will help.

If not it could take a long time for SMT to work in games.

As a result we need to be conservative and judge Ryzen in games without SMT until we know it works more consistently.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 14:49
CAT-THE-FIFTH
Anyway,since Ryzen is not only about gaming this review by The Stilt on the none-gaming abilities of Ryzen is probably one of the most in-depth released so far and covers ground many other reviews have not:

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/

Anyway I will quote this again - people should read it. For non-gaming purposes Ryzen is going to be quite strong indeed, and its performance at 3GHz and performance/watt are impressive. It shows AMD has something which will do well not only for workstations but even for server usage.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 16:23
On a side-note SJ, I was having a discussion about this elsewhere and one or two people think AMD PR might have mis-communicated things a tad. They seem to hint that game-by-game optimisation is needed, when only specific games have shown this to be required in the past, and that general scheduler patches under Windows might suffice for most of them.

But I should know better than to believe AMD PR fully - “overclocker's dream”, anyone?? ;) They do have a habit of sometimes digging holes for themselves.
Posted by Platinum - Fri 03 Mar 2017 16:39
Looking forward to seeing how the Opterons perform - could be a good source of income, that. A 1U 128-thread beast could appeal to some.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 16:47
Platinum
Looking forward to seeing how the Opterons perform - could be a good source of income, that. A 1U 128-thread beast could appeal to some.

They are making a separate model for that it seems - apparently the desktop chips can run all cores at lower clockspeeds with ridiculous efficiency.

Downclocked 8C/16T samples can score 850 points in CB R15 at only 30W,ie,around the same score as a Core i7 4790K.

It seems optimal performance/watt is up to 3GHz, and from there efficiency is somewhat less.
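
As a rough sanity check of the efficiency claim, the arithmetic below uses the Cinebench figure quoted above; the stock 1800X score and TDP are my own ballpark assumptions for contrast, not figures from this thread.

```python
# Rough performance-per-watt comparison from the Cinebench R15 figures above.
# The stock 1800X numbers (~1600 points at a 95W TDP) are ballpark assumptions.
downclocked_ppw = 850 / 30    # downclocked 8C/16T sample: ~28.3 points/watt
stock_ppw = 1600 / 95         # assumed stock 1800X: ~16.8 points/watt
print(round(downclocked_ppw, 1), round(stock_ppw, 1))
```

Even with generous stock assumptions, the downclocked part comes out well ahead on points per watt, which is what makes it interesting for servers.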
Posted by Platinum - Fri 03 Mar 2017 17:33
CAT-THE-FIFTH
They are making a separate model for that it seems - apparently the desktop chips can run all cores at lower clockspeeds with ridiculous efficiency.

Downclocked 8C/16T samples can score 850 points in CB R15 at only 30W,ie,around the same score as a Core i7 4790K.

It seems optimal performance/watt is up to 3GHz, and from there efficiency is somewhat less.

Laptop monster APU incoming?
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 17:36
Platinum
Laptop monster APU incoming?

It's pretty mental having 8 cores at 30W!! Imagine a 4C/8T Ryzen APU with one CCX replaced by an IGP??

AMD might be able to have 4C/8T APUs in 15W to 30W TDPs!!
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 17:46
Just saw this mentioned on AT forums:

In regards to gaming, it's down to the motherboards - ASUS in particular, and MSI to some extent. It explains why reviewers such as Joker, Crit, UFDiciple, and TechDeals had far better gaming performance.

Golem.de in Germany had this to say in regards to their MSI motherboard.


https://translate.google.co.uk/translate?hl=en&sl=de&u=https://www.golem.de/news/ryzen-7-1800x-im-test-amd-ist-endlich-zurueck-1703-125996-4.html&prev=search

The MSI board was delivered with BIOS version 113; last Friday a new one appeared.

Version 117, which is still current, improved speed and stability. Where we could still count on sporadic bluescreens with the older UEFI, the board is now stable. Much more important, however, is the drastically higher performance in games and in real-world packing with 7-Zip. The release notes mention, among other things, a fixed problem with the memory clock and its timings, as well as the voltage.

Compared to the original BIOS, the new UEFI increases the frame rate across our game suite by between 4 and 26 percent - on average as much as 17 percent!

Why, AMD, why do you rush this at EVERY launch??

So to summarise:
1.) Launch the CPU one month before the Windows patches drop
2.) SMT reduces performance, but say nothing to save face, so most reviews show sub-par gaming performance. Reviewers who tested with SMT off got better performance. AMD should have just been honest and told reviewers their SMT implementation was not yet fully supported in games
3.) Windows power plans affect performance (probably due to lack of Windows support), but I assume AMD missed that
4.) Motherboards are buggy
5.) Hardly any motherboard stock
6.) In the Reddit AMA one PR person gave vague answers which basically said devs need to patch performance, when it might just be that Windows updates are needed in many cases
7.) They could have waited one month and negated many of these issues
Posted by kompukare - Fri 03 Mar 2017 18:17
CAT-THE-FIFTH
They could have waited one month and negated many of these issues
I also think that they had to launch at some point. Possibly they should only have launched well-binned 1800X Pro versions, said these are workstation-only parts, and that the rest of the desktop parts would come in a month's time.

That way the mobo makers and Microsoft would have had a chance to get things sorted, and only a few people would be trying to bench games.

As I said, gamers are very fussy and are easily convinced that they need 300+ FPS at 720p on a clean-install machine, whereas at realistic settings with background applications the octocore Ryzen might actually have better performance, especially for those who stream. The other irony is that Ryzen might age better than Kaby Lake, but as with Hawaii vs the 780 Ti, nobody cares.
Posted by watercooled - Fri 03 Mar 2017 18:19
CAT-THE-FIFTH
Dude - multiple websites have shown regressions with SMT enabled.
I was replying to your 28% IPC increase claim - I'm just not seeing that; many of the regressions are, let's be honest, negligible. As I said, if you think (I'm not suggesting one way or another here, it really depends what you're after) that a Ryzen CPU would be a good purchase if it simply didn't have SMT - just disable it temporarily? And WRT my point about SMT efficiency - it is indeed very efficient outside of gaming, often exceeding Intel's SMT efficiency, we can't simply ignore that or claim it's all bad.

Intel CPUs regress in performance with SMT on too in many games, usually not enough to care about outside of hair-splitting, but it's a fact nonetheless. I was aware of what AMD posted on Reddit but despite that I stand by what I said as I don't think that's the whole story.

CAT-THE-FIFTH
The AMD AMA was saying it needed to be patched by devs on a case by case basis - can you guaranteed every game made in the last three years which has issues will be patched? You can't.
Lots of the ‘issues’ are hair-splitting (not all, obviously), and given the more challenging Bulldozer issues were largely resolved by Windows scheduler updates, I fail to see why expecting something similar for Ryzen is unreasonable. Techspot also said that they subjectively experienced smoother gaming with Ryzen despite the numbers: http://www.techspot.com/review/1345-amd-ryzen-7-1800x-1700x/page7.html

It's not like future games will have to be ‘patched’ on a case-by-case basis at all, even if Microsoft does nothing - they can just treat Ryzen like they currently treat Intel's SMT. This is *exactly* the sort of problem we saw with SMT enabled on Intel CPUs a few years back, one which has ceased to be a big deal, even for the majority of older games.

You're irritated that AMD didn't manage to portray Ryzen in the best possible light by avoiding stories like this, I get it, and I agree with that part. But it's a brand new platform - just give it a chance!

CAT-THE-FIFTH
Then somebody asked if a windows patch would help - they said no.
That's not how I read it, and pay attention to who's posting what - that reply was made by a marketing guy, not an engineer. Regardless of what is being said on reddit, CPU scheduling is largely down to Windows.


azrael-
And I'm still waiting for *official* word on the ECC capabilities of Ryzen, preferably for the desktop version.
On Reddit, as I understand it they said it supports it, it works fine, but it's not validated in the same way it would be on their Opteron processors. You might still want to double-check the motherboards are happy with it though.

Platinum
Random question, so the chip has 24 PCI-E lanes, and appears to be two quad core units looking at the de lidding.
Does this mean that either the quad core versions will only come with 12 PCI-E lanes or there are PCI-E lanes not in use on the 8 core versions?
What you're seeing on the de-lidding is two solder squares - it's a single die, not an MCM.

Edit: just noticed your latest post CAT (won't get time to fully catch up on this thread as I'm off out in a mo): That looks promising. But I fully agree that it would make yet another AMD facepalm moment that they're a BIOS patch off a far better reception!

Edit2: Just realised that I've completely skipped over an important bit of the thread, sorry about that! Seems I might have been right not to fully rely on what AMD marketing were saying.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 18:30
The thing is, look at how the R7 1800X looks with SMT off now.

So this is the issue - with SMT on it looks more like a Core i7 3770K. With it off,more like a Core i7 4790K.

At Haswell level it looks like AMD has caught up to within a generation of Intel, which is what many of us expected.

Many of the worst reviews had it at Ivy Bridge level which is a Piledriver era Intel design.

Add to that the motherboards are regressing performance too,now you are starting to see why some reviews looked much better and others terribad.

Add to that that the chap on Reddit was the Technical Marketing guy, and he was blatantly saying it was not a Windows issue, which was not entirely true, as BD had improvements in games with Windows patches.

Combine all this together and AMD really has made Ryzen look worse for gaming than it really is.

It's death by a thousand cuts, most of which AMD might have quietly avoided or negated to some degree.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 19:25
First picture of an AM4 mini-ITX motherboard leaked:

https://www.techpowerup.com/231205/biostar-shows-off-first-mini-itx-socket-am4-motherboard

It uses the X370 chipset.
Posted by CAT-THE-FIFTH - Fri 03 Mar 2017 20:16
https://youtu.be/P1dhYDm7SLw?t=2543

Some comments from JayzTwoCents - looks like the launch was utterly rushed for GDC.
Posted by stevie lee - Fri 03 Mar 2017 21:14
CAT-THE-FIFTH
https://youtu.be/P1dhYDm7SLw?t=2543

Some comments from JayzTwoCents - looks like the launch was utterly rushed for GDC.

(I have very limited knowledge on this subject)
they also said that Windows 8 and above needs CPU drivers loaded.
what about Windows 7? does that need one?
has anyone tested Win 7 to see if the SMT thing affects the results there?
I clicked through a few tests in http://forums.hexus.net/pc-hardware-components/370921-ryzen-review-thread.html and they're all testing on Windows 10.

there's already links showing the Ryzens are stonkingly good on Linux

despite AMD dumping support for Windows 7 https://www.pcgamesn.com/amd/ryzen-windows-7-drivers they might actually not have the SMT problem there.

what I'm asking is: has anyone tested Ryzen on Win 7 to see if SMT is a problem there or not?
it could narrow down whether it's a driver problem on Windows 10 or the Ryzen chip itself at fault
Posted by CAT-THE-FIFTH - Sat 04 Mar 2017 21:07
People are starting to do their own tests now:

https://www.reddit.com/r/Amd/comments/5xgths/smt_configuration_error_in_windows_found_to_be/
https://www.reddit.com/r/Amd/comments/5x54ww/my_theory_on_why_ryzen_does_not_perform_in_games/defc6un/

The problem is that under Windows the physical cores and the logical cores are being treated the same by the Windows scheduler, which also does not seem to recognise the cache sizes properly.
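
To illustrate why that matters, here is a toy model (entirely hypothetical - the function names and numbering are made up for illustration, and this says nothing about how the real Windows scheduler is implemented). Logical CPUs are numbered so that pairs (0,1), (2,3), ... share one physical core. A placement policy that knows about SMT siblings spreads busy threads across physical cores first; one that treats every logical CPU as equivalent can pile two threads onto one core while other cores sit idle.

```python
# Toy illustration of SMT-aware vs naive thread placement.
# Logical CPUs (0,1), (2,3), ... are sibling pairs sharing a physical core.

def place_threads(n_threads: int, n_logical: int, smt_aware: bool) -> list[int]:
    """Return the logical CPU chosen for each thread."""
    if smt_aware:
        # Fill one logical CPU per physical core first (0, 2, 4, ...),
        # only then fall back to the SMT siblings (1, 3, 5, ...).
        order = list(range(0, n_logical, 2)) + list(range(1, n_logical, 2))
    else:
        # Naive: treat every logical CPU as equivalent.
        order = list(range(n_logical))
    return [order[i % n_logical] for i in range(n_threads)]

def physical_cores_used(placement: list[int]) -> int:
    return len({cpu // 2 for cpu in placement})

# Four busy threads on an 8C/16T part:
naive = place_threads(4, 16, smt_aware=False)  # CPUs 0,1,2,3 -> 2 cores busy
aware = place_threads(4, 16, smt_aware=True)   # CPUs 0,2,4,6 -> 4 cores busy
print(physical_cores_used(naive), physical_cores_used(aware))
```

With the naive policy two pairs of threads end up contending for shared core resources, which is exactly the kind of regression a scheduler update can remove without touching any game code.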

stevie lee;3777484
(I have very limited knowledge on this subject)
they also said that windows 8 and above needs CPU drivers loaded.
what about windows 7? does that need one?
anyone tested win 7 too see if the SMT thing affects the results there?
clicked through a few tests in that http://forums.hexus.net/pc-hardware-components/370921-ryzen-review-thread.html and they're testing on windows 10.

theres already links to Linux showing the ryzens are stonkingly good

despite AMD dumping support for windows 7 https://www.pcgamesn.com/amd/ryzen-windows-7-drivers they might actually not have the SMT problem.

what I'm asking is. has anyone tested ryzens on win 7 to see if SMT is a problem there or not?
it could narrow down whether its a driver problem for windows 10 or its the ryzen chip itself at fault

The Linux kernel has already had a whole host of Ryzen-specific updates, which means it is currently better supported under Linux than under Windows.

Combine that with the fact the motherboard companies only had three weeks to get the BIOSes out, and there is most likely performance being left on the table for gaming.

Another issue is that games are not yet fully optimised for Ryzen since it's new, and most games apparently (according to AMD) are better optimised out of the box for Intel.

The last issue might take more time to get over as a whole, but the first is more dependent on MS pushing updates out (apparently the first in a month), and the second on the motherboard companies simply needing more time.
Posted by DanceswithUnix - Sat 04 Mar 2017 21:28
I thought the advice for buying Intel for years was to get an i5 not an i7 if you are gaming, from what people are complaining about here has that changed and we expect an i7 to be the best now? Or was that purely that the i7 is such poor value for money you are better off putting the funds into the graphics card (which is certainly true for the budgets I play around with).
Posted by stevie lee - Sat 04 Mar 2017 21:41
there's still going to be millions of people who get Ryzen and stick with Win 7 for many reasons - what performance are they going to get?

they'll read about the SMT driver, and be forever waiting for it because it will never come.
but does Win 7 need the driver? or is it entirely BIOS?
if it's chipset drivers, will AMD/the mobo companies bother with Win 7 drivers?

Ryzen may be forever crippled on Win 7. We just need someone to test it on Win 7 to see.
Posted by CAT-THE-FIFTH - Sat 04 Mar 2017 21:44
DanceswithUnix
I thought the advice for buying Intel for years was to get an i5 not an i7 if you are gaming, from what people are complaining about here has that changed and we expect an i7 to be the best now? Or was that purely that the i7 is such poor value for money you are better off putting the funds into the graphics card (which is certainly true for the budgets I play around with).

It's been heading towards the Core i7 for a while on enthusiast forums, and I think you are not looking at this correctly.

This is not a £200 CPU. It starts at £320+, which is Core i7 money, and it's the type of money people buying £600 GTX1080s (or £350 to £450 GTX1070s) and the like will be looking at, not somebody buying a £200 RX480 or GTX1060.

Moreover, AMD several times compared Ryzen to the Core i7 6900K, so the gamers who would spend £300+ on a CPU expected it would be more or less a Core i7 6900K, which for gaming it isn't. AMD are the ones who overhyped the gaming abilities of the CPU, not Intel.

In the end, at £320 to £500, Ryzen 7 is fighting the Core i7 7700K and Core i7 6800K in its price range for high-end gaming, and if AMD knew it had general issues with SMT in gaming under Windows, due to lack of both Windows and game support, that's their problem. The fact that within a day or two they themselves were saying games had better Intel optimisations and the first Windows drivers would be out in 30 days indicates they knew about the issue, yet despite this they tried to hint it matched a Core i7 6900K in gaming, which it didn't.

The worst thing is that in non-gaming cases it actually did live up to what they said, ie, it is a Core i7 6900K competitor, so I honestly don't know what they were thinking?? They did the same with the Fury X, which was solid once its cooler issues were fixed, but positioned it as a GTX980TI beater, which it wasn't at launch, and that is why you did get a degree of backlash towards it.

Edit!!

This is by far the most expensive range of CPUs AMD has made since probably the Phenom or Athlon X2 days, so again it's competing with some very expensive high-end Intel CPUs.

stevie lee;3777802
theres still going to be millions of people who get ryzen, stick with win 7 for many reasons, what performance are they going to get?

they'll read about the SMT driver, and be forever waiting for it because it will never come.
but does win 7 need the driver? or is it entirely BIOS?
if its chipset drivers, will AMD/mobo company bother with win 7 drivers?

ryzen may be forever crippled on win 7. just need someone to test it on win 7 to see.

I think MS will be the ones who might need to be asked about that - it's interesting how AMD and Intel both magically decided not to support Windows 7! ;)
Posted by kalniel - Sat 04 Mar 2017 21:51
CAT-THE-FIFTH
This is not a £200 CPU. It starts at £320+ which is Core i7 money and its the type of money people buying £600 GTX1080s and the like will be looking at,not somebody buying a £200 RX480 or GTX1060.

Disagree there Cat - I'll buy a CPU to last over 5 years, during which time I expect to buy two or three GPUs. Consequently my budget for my next CPU is up to £400, while I'll try and keep GPUs to less than half that.
Posted by CAT-THE-FIFTH - Sat 04 Mar 2017 22:07
kalniel
Disagree there Cat - I'll buy a CPU to last over 5 years, during which time I expect to buy two or three GPUs. Consequently my budget for my next CPU is up to £400, while I'll try and keep GPUs to less than half that.

Well, you are not a common case, and even then you went with a Core i7 950 on an expensive socket 1366 platform, which was one of the fastest CPUs for gaming out there, with some of the best single-core and MT performance at the time. It was the third-highest socket 1366 chip until the Core i7 980X came along.

It would be an equivalent of a Core i7 6850K in todays HEDT line-up.

You didn't buy a Phenom II X4 or a Core2 quad which many people were suggesting to get at the time for a normal PC,or even a Core i5 750(depending on when you got your CPU).

Plus, did you buy your CPU just for gaming or for more than one purpose?? I will suggest one of my mates gets an R7 1700, since he only casually games but actually needs those extra threads under Linux for work-related stuff. I know people who have HEDT rigs and higher-end CPUs who don't really game, ie, will have a £100 card in one, but this is not the crowd I am talking about.

You need to realise AMD compared this directly to a Core i7 6900K for gaming - they were the ones hyping it. This is with all the motherboard BIOS issues, SMT issues, Windows issues, games optimisation issues, etc.

Yet look at the sigs of many people who do buy such CPUs for gaming - many are rocking £400 to £600 CPUs on there. I know people in real life who spend that much on CPUs and GPUs. Not all are running 4K screens - I know people who have 1080p screens running such hardware, since they want very high framerates for 120Hz screens, or VR headsets.

This is why saying it has “good enough performance for gaming” seems rather disingenuous when I hear it at times (you hear it being said on a few forums) - which is fine if you buy this and won't ever push it for gaming.

Unless you cherry-pick reviews, the Core i7 7700K is generally ahead, and even the Core i7 6800K, which should be getting a good beating, holds its own.

Imagine all those people who jumped on the Core i7 5820K when it dropped to well under £300 the year before last?? They seem to have done very well for themselves.

Edit!!

I also don't understand why people are getting annoyed with me for pointing out what independent reviews have shown and what reviewers have said??

If you have an issue with what I have said, you need to have a go at the reviewers showing that data, not me. You need to be annoyed at AMD for rushing it out - that is what people like JayzTwoCents have been saying, amongst other things.

When the FX8350 came out,it was reasonably strong in productivity stuff.

Yet,how many times did I recommend people on here to get a Core i5(or even a Core i7) over one - it is what it is.

Maybe once we get some OS patches out, better motherboard BIOSes, game patches, etc, the Ryzen 7 will decimate the Core i7 7700K and Core i7 6900K in gaming. Not denying it might happen.

But that is not the reality now and it is a weakness of the chip,which AMD needs to work on.

Edit!!

Even AMD has somewhat indicated they need to improve gaming performance FFS!!
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 00:22
Some good news for Stevie Lee: the SMT issue appears to be down to Windows 10. It looks like The Stilt has done even more testing:

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-8#post-38775732


I did some 3D testing and even though there is not nearly enough data to confirm it, I'd say the SMT regression is in fact a Windows 10 related issue.
In 3D testing I did recently on Windows 10, the title which illustrated the biggest SMT regression was Total War: Warhammer.

All of these were recorded at 3.5GHz, 2133MHz MEMCLK with R9 Nano:

Windows 10 - 1080 Ultra DX11:

8C/16T - 49.39fps (Min), 72.36fps (Avg)
8C/8T - 57.16fps (Min), 72.46fps (Avg)

Windows 7 - 1080 Ultra DX11:

8C/16T - 62.33fps (Min), 78.18fps (Avg)
8C/8T - 62.00fps (Min), 73.22fps (Avg)

At the moment this is just pure speculation as there were variables, which could not be isolated.
Windows 10 figures were recorded using PresentMon (OCAT), however with Windows 7 it was necessary to use Fraps.
Posted by kalniel - Sun 05 Mar 2017 09:34
CAT-THE-FIFTH
Well you are not a common case,and even then you went with a Core i7 950 on an expensive socket 1366 platform which was one of the fastest CPUs for gaming out there which had some of the best single core and MT performance at the time for gaming. It was third but the highest socket 1366 chip,until the Core i7 980X came along.

It would be an equivalent of a Core i7 6850K in todays HEDT line-up.

You didn't buy a Phenom II X4 or a Core2 quad which many people were suggesting to get at the time for a normal PC,or even a Core i5 750(depending on when you got your CPU).

Plus did you buy your CPU just for gaming or for more than one purpose?? I will suggest to one of my mates to get an R7 1700,since he only casually games,but actually needs those extra threads under Linux for work related stuff. I know people who have HEDT rigs and higher end CPUs who don't really game,ie,will have a £100 card in one,but this is not the crowd I am talking about.

All good points. There were several reasons I went for my chip - storage/PCI-E and memory bandwidth were among them as I was looking ahead to SSDs and the likes. And yes, I did also have some scientific and photo processing usage in mind. Spot on with the 6850K equivalent - I was actually considering a Xeon E5 1650 which is more or less the same thing.

This is why saying it has “good enough performance for gaming” seems rather disingenuous when I hear it at times(you hear it being said on a few forums),which is great if you buy this and won't ever push it for gaming.
Indeed, kind of like saying ‘our CPU is fine if you don’t need a CPU' :p
Posted by DanceswithUnix - Sun 05 Mar 2017 10:00
CAT-THE-FIFTH
Its been heading towards the Core i7 for a while on enthusiast forums and I think you are not looking at this correctly.

There is no “correctly” about it Cat, there are as many ways of looking at these things as there are users. For me, gaming has always been an important secondary use of the PC, but at the time I bought my 8350 it was for work. After a change of job I don't tend to use the home PC for compiling and simulating as much, so gaming has become a bigger role for it, but I regularly get laugh-out-loud moments when I see how the chip I paid £125 for all those years ago is doing in modern benchmarks. It still lags the 3570K that I could have paid a lot more money for by about the same percentage; from memory I think it was 40% at the time for gaming, but the graph above shows 30%, so perhaps it has closed the gap a bit.
Posted by scaryjim - Sun 05 Mar 2017 12:07
CAT-THE-FIFTH
Some good news for Stevie Lee,the SMT issue is down to Windows 10. …

Fits with what we know about the cache performance and Win 10's preference for moving threads around, and also AMD's statement here that the Win 10 scheduler is overly loading the virtual threads. Interesting that the Win 7 scheduler doesn't make the same mistake though…!

kalniel
… Indeed, kind of like saying ‘our CPU is fine if you don’t need a CPU' :p

Isn't it more like “We're plenty fast enough to keep up with your GPU, even if we're not absolutely as fast as the opposition”? Case in point, many games that were CPU limited on Vishera are GPU limited on Zen with the same GPU/settings. You still need a CPU, you just don't need the absolute fastest CPU…
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 12:10
DanceswithUnix
There is no “correctly” about it Cat, there are as many ways of looking at these things as there are users. For me gaming has always been an important secondary use of the PC, but at the time I bought my 8350 it was for work. After a change of job, I don't tend to use the home PC for compiling and simulating as much so gaming has become a bigger role for it, but I regularly get laugh out loud moments when I see how the chip I paid £125 for all those years ago is doing in modern benchmarks. It still lags the 3570K that I could have paid a lot more money for by about the same percentage, from memory I think it was 40% at the time for gaming but the graph above shows 30% so perhaps it has closed the gap a bit.

Dude, stop trying to twist things - you know very well I am talking about people with a bias towards gaming who spend this kind of money, and you talked specifically about gaming; some of you are just trying to shift my argument to say that not everyone games intensively.

Then if that is the case you can get a £65 Pentium G4560 which does perfectly well in a number of games with a £150 card. You know I made a thread about it:
http://forums.hexus.net/pc-hardware-components/368437-intel-pentium-g4560-new-budget-champion.html

You made the comment that you thought people would only buy a Core i5, and I said no, that's not the case - the more enthusiast end of the market has been moving towards Core i7 CPUs for years, and some of you are just trying to bury the gaming results.

Some of you need to stop drinking the kool-aid when you know I have done builds like this:
http://forums.hexus.net/pc-hardware-components/217569-quick-build-check.html

That is a build I did in 2011 for a mate who primarily wanted a cheapo rig to do some bioinformatics work, and he does coding for his projects in his lab. He does game but he is casual, hence why I suggested he get a £100 Phenom II X6 1045T over a £150 Core i5 750 for throughput reasons.

He is the mate who I suggested should get an R7 1700 due to his needs - he is not going to worry so much if Ryzen is not all that for gaming yet,but if he was only going to game,I know he would probably not spend more than £150 to £200 on a CPU at most.

But from the point of view of a person who is a gamer, does image editing, runs the odd VM and does some video encoding, Ryzen 7 is a disappointment for me - well, actually Ryzen itself more than the SKU.

You might not have read all the reviews,but I have and in certain games I play IPC is BELOW IB level or no better.

If I buy even a Ryzen 5 and it's in the same state as now, I would actually REGRESS in performance in some games I play where I am CPU limited, despite people trying their best to hide that.

I am not some muppet that has an inability to see what bottlenecks I do have in my system.

I have to be objective about this, and even on the OcUK forums, for all the fanfare of people buying Ryzen, many are also realistic about the performance issues and this whole rushed launch.

It's like 9/10 of AMD launches - but, but, if you wait longer things will get better.

Yet how many of the review sites that benchmarked the FX8150 and FX8350 re-benchmarked after the Windows updates, etc, to show performance 6 and 12 months after launch??


kalniel
All good points. There were several reasons I went for my chip - storage/PCI-E and memory bandwidth were among them as I was looking ahead to SSDs and the likes. And yes, I did also have some scientific and photo processing usage in mind. Spot on with the 6850K equivalent - I was actually considering a Xeon E5 1650 which is more or less the same thing.

Indeed, kind of like saying ‘our CPU is fine if you don’t need a CPU' :p

I thought I was going slightly potty, kalniel, but you get what I am saying. I am already on an IB CPU, and some of the benchmarks are not really an improvement (or are a regression) over a CPU which I have had for years and which launched in 2012.

Is it too much for me to expect AMD to convincingly beat a 2012 CPU or even a 2013 Haswell CPU in single core performance in a number of games I play??

I am not expecting the earth from AMD, but how far is even a person who likes AMD and wants to support them meant to temper their expectations??

Now we all need to live in hope to see if the first set of Windows patches and more mature BIOSes will turn around the results in the games I am looking at.
Posted by kalniel - Sun 05 Mar 2017 12:17
scaryjim
Isn't it more like “We're plenty fast enough to keep up with your GPU, even if we're not absolutely as fast as the opposition”? Case in point, many games that were CPU limited on Vishera are GPU limited on Zen with the same GPU/settings. You still need a CPU, you just don't need the absolute fastest CPU…
Isn't that in danger of driving people to hold on to their old Intel chips or replace them with cheaper low-end Intel ones?
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 12:22
scaryjim
Fits with what we know about the cache performance and Win 10's preference for moving threads around, and also AMD's statement here that the Win 10 scheduler is overly loading the virtual threads. Interesting that the Win 7 scheduler doesn't make the same mistake though…!

It's very worrying that unpatched Windows 7 is fine and Windows 10 isn't??

I do wonder if a fresh install of Windows 10 will do the same?? ;)

scaryjim
Isn't it more like “We're plenty fast enough to keep up with your GPU, even if we're not absolutely as fast as the opposition”? Case in point, many games that were CPU limited on Vishera are GPU limited on Zen with the same GPU/settings. You still need a CPU, you just don't need the absolute fastest CPU…

Dude, you need to consider that some of us have better CPUs and are still hitting CPU limitations. For me, getting a secondhand Core i7 6700 now would actually improve performance in the games I play the most. Despite that, I had tempered my own expectations about the performance jump in what I will play for the foreseeable future, just as a hedge to support AMD and because games will tend to thread better (I was looking at the R5 1600X and a mini-ITX motherboard).

The issue is that upgrading to Ryzen in its current state won't do that - so basically what's the point?? Yes, games will thread better, and yes, AMD will get better game optimisations in new titles, but between then and now I still want to run things on my computer.

If it's not going to improve on what I run now, then what's the point of upgrading? I might as well wait another 12 to 24 months until games are more threaded and I have got bored of the older-engine games I am playing now, and then look at what's available.

ATM we are just waiting in hope that fixes to Windows and the BIOSes will improve performance.

None of this would have been needed if AMD had actually waited a month and launched Ryzen with these in place. I have said so many times that AMD needs to get on top of its launches, and repeatedly it does not - it's always something not quite right.

I linked to the JayzTwoCents video for a reason - listen to what he has to say about it; places like TH touched on the same things.
Posted by HalloweenJack - Sun 05 Mar 2017 12:52
I did some 3D testing and even though there is not nearly enough data to confirm it, I'd say the SMT regression is in fact a Windows 10 related issue.
In 3D testing I did recently on Windows 10, the title which illustrated the biggest SMT regression was Total War: Warhammer.

All of these were recorded at 3.5GHz, 2133MHz MEMCLK with R9 Nano:

Windows 10 - 1080 Ultra DX11:

8C/16T - 49.39fps (Min), 72.36fps (Avg)
8C/8T - 57.16fps (Min), 72.46fps (Avg)

Windows 7 - 1080 Ultra DX11:

8C/16T - 62.33fps (Min), 78.18fps (Avg)
8C/8T - 62.00fps (Min), 73.22fps (Avg)

taken from anandtech
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 12:56
HalloweenJack
taken from anandtech

Already posted here and in the review thread!! :p
Posted by stevie lee - Sun 05 Mar 2017 13:45
CAT-THE-FIFTH
Some good news for Stevie Lee,the SMT issue is down to Windows 10. It looks like The Stilt has done even more testing:

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-8#post-38775732

thanks CAT :thumbsup:

did have another thought though about the disabling of SMT.
with it disabled you get say 15% more performance.
that doesn't mean that when the patch is finished and rolled out you'll only get 15% more with it turned on - you might get 30%, for instance. it has been mentioned that the AMD chip is more efficient with memory and stuff, so the SMT bug may be holding it back quite a lot. :undecided

(% numbers just examples)

CAT-THE-FIFTH
Its very worrying unpatched Windows 7 is fine and Windows 10 isn't??.

not for the people who don't want windows 10 it isn't :p
if what I said above comes true though, then they'll have reason to worry.

just need to wait and see..



as for the rushed launch and waiting another month - will the US tax deadline of April 15th have anything to do with it?
maybe AMD wanted to shift some physical stock to keep shareholders happy by attempting to post a gain for once. the chips were ready; software/drivers can wait, especially if they use everyone who buys one as a surrogate QA tester.
it may have been an even worse situation if they had held back another month or two for ‘driver optimising’, AMD then saying ‘this is the best we can do, now launch’, and them still having problems.
they just need to work on their PR communication a lot, as you say; then maybe these problems won't turn into even bigger problems where no-one even considers AMD anymore because of the sheer perceived incompetence in doing anything, even though quite a lot of it is patched quickly after launch.

that's my thoughts on this whole matter. :mrgreen:
Posted by HalloweenJack - Sun 05 Mar 2017 13:46
CAT-THE-FIFTH
Already posted here and in the review thread!! :p

this is the review thread :P

https://www.youtube.com/watch?v=SE4sxXva9Eg
Posted by Xlucine - Sun 05 Mar 2017 14:09
CAT-THE-FIFTH
Already posted here and in the review thread!! :p

If someone made the new AMD chitchat thread this would all be so much simpler… :innocent:
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 14:12
HalloweenJack
this is the review thread :P

https://www.youtube.com/watch?v=SE4sxXva9Eg

Ok,the other one!! :p

Xlucine
If someone made the new AMD chitchat thread this would all be so much simpler… :innocent:

Hint taken!!:p
Posted by DanceswithUnix - Sun 05 Mar 2017 14:18
stevie lee;3778030
as for the rushed launch and waiting another month. will the US tax deadline of April 15th have anything to do with it?
maybe AMD wanted to shift some physical stock to keep shareholders happy by attempting to post a gain for once.

Hmm, there was a rumour of 1M chips made for the launch. If true, at an average of $400 to make the maths easy, that would be $0.4B in stock that could either be shifted or sit there attracting interest payments. I can see an attraction in getting them sold, end of year or not!
Posted by watercooled - Sun 05 Mar 2017 14:41
(WRT results mentioned by HalloweenJack): Those sorts of results obviously imply this should be resolvable with a Windows scheduler patch, going back to what I said in the first place - the suggestion that this would require individual game patches never made any sense. As far as game optimisations go, this is no different to any other CPU release where developers can deploy architecture-specific optimisations, e.g. use a CPU dispatcher to select a binary depending on what CPU family the code is running on.
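A CPU dispatcher of that sort can be sketched roughly like this - a toy example, where the feature probe is a stand-in (real dispatchers read CPUID feature flags; the function names here are made up for illustration):

```python
def sum_squares_scalar(values):
    # Portable fallback path.
    return sum(v * v for v in values)

def sum_squares_vector(values):
    # Stand-in for a SIMD-optimised build of the same kernel (e.g. AVX).
    total = 0
    for v in values:
        total += v * v
    return total

def detect_features():
    # Hypothetical probe; a real dispatcher reads CPUID feature bits
    # (e.g. via the cpuid instruction, or GCC's __builtin_cpu_supports).
    return {"avx"}

# Dispatch happens once at startup, not on every call.
sum_squares = sum_squares_vector if "avx" in detect_features() else sum_squares_scalar

print(sum_squares([1, 2, 3]))  # 14 from either path
```

The point is that both paths compute the same answer - the dispatcher only changes which implementation runs, which is why no per-game patching is needed for this kind of optimisation.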

It's perplexing that AMD never got this sorted before day-1 reviews, which as CAT says is exactly the sort of thing that they stumble on time and time again, and it's something you can be very sure the engineers were well aware of.

Just a thought, maybe it's one reason behind the decision to launch the lower core-count parts later in the year, which at least on some sites will encourage a revisit of the 8-cores. Having a product out there puts it in the hands of developers without them having to worry about NDAs, and it allows time for things to settle before launching other products, including the APUs.

WRT the inter-CCX communication - this is yet another scheduler responsibility, and in some ways similar to how NUMA is treated - the scheduler has to be aware of latency between cores (in the case of NUMA, because of different sockets with their own memory controllers) in order to make the best decision about where to place threads. Just because the bandwidth is lower or latency is higher for what will be very occasional cross-CCX cache accesses (especially given the L3 is a victim cache), doesn't imply a problem - that's what prefetchers are for. The L3 is not used for prefetching on Zen, the L2 is, and remember the L2 is very large - twice the size of Intel's. Swings and roundabouts. What won't help is if Win10 is not aware of this and scatterguns threads around the CPU completely at random, but this is true to some extent with most CPUs - local access will nearly always be considerably faster than accessing cache from neighbouring cores (like Intel's L3 cache slices, connected with the ring bus).
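The NUMA analogy can be made concrete with a toy cost model - to be clear, the latency figures below are invented purely for illustration, not measured Zen numbers:

```python
# Toy model of thread placement on a two-CCX part. A placement-aware
# scheduler prefers core pairs with the lower communication cost.
LOCAL_L3_COST = 1.0    # communicating threads share one CCX's L3
CROSS_CCX_COST = 3.0   # cache line has to cross the fabric between CCXs

CCX0 = {0, 1, 2, 3}    # illustrative grouping of 8 cores into two CCXs
CCX1 = {4, 5, 6, 7}

def comm_cost(core_a, core_b):
    # Cost is low only when both cores sit in the same CCX.
    same_ccx = (core_a in CCX0) == (core_b in CCX0)
    return LOCAL_L3_COST if same_ccx else CROSS_CCX_COST

naive = comm_cost(0, 4)  # scheduler scattered the pair across CCXs
aware = comm_cost(0, 1)  # scheduler kept the pair together
print(naive, aware)  # 3.0 1.0
```

A scheduler that knows the topology simply minimises that cost when placing communicating threads, exactly as a NUMA-aware scheduler does across sockets.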
Posted by HalloweenJack - Sun 05 Mar 2017 15:41
I do wonder if the Win7 AMD scheduler patch actually helps Zen, as results show Win7 is the better OS at this time
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 15:51
Time to buy SJ a big virtual beer?? :p

Tests with 4C on two CCX or one CCX indicate the single CCX has generally better performance:

http://www.pcgameshardware.de/Ryzen-7-1800X-CPU-265804/Tests/Test-Review-1222033/

Yep, and it seems the 4C versions, which appear to use only one CCX, might be fine.

Phew!!
Posted by Corky34 - Sun 05 Mar 2017 16:17
watercooled
….the suggestion that this would require individual game patches never made any sense, and as far as game optimisations go, this is no different to any other CPU release where developers can deploy architecture-specific optimisations e.g. use a CPU dispatcher to select a binary depending on what CPU family the code is running on.

Doesn't DX12 allow developers to do their own optimisation and stuff, though? I've not bothered looking into what each part of that involves, but could that explain AMD's comment about game developers needing to patch their games - DX12 games need patching while DX11 games need an OS patch? :undecided
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 16:24
Corky34
Doesn't DX12 allow developers to do their own optimisation and stuff though, I've not bothered looking into what each part of that involves but could that explain AMD's comment about game developers needing to patch their games, so DX12 games need patching while DX11 need an OS patch. :undecided

That's the thing - if they had actually made that distinction it would have helped, but somehow it is not surprising.
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 16:55
How to install Ryzen under Windows 7:

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-11


The Stilt, post: 38776813, member: 366210
Copy dism.exe, boot.wim & install.wim (sources directory) files from the Windows 7 ISO image to your hard-drive.
Download the USB driver for Ryzen: https://1drv.ms/u/s!Ag6oE4SOsCmDhGIQJdHdaXC-_w-C
Extract the package to the same directory as DISM is located.
Enter the folders containing the individual driver files and check that there is no “Unblock” button visible. If there is, you need to manually toggle it for each and every file.

- “DISM /mount-wim /wimfile:boot.wim /index:2 /mountdir:x:\xxx” (x:\xxx = a temporary path of your choice, make sure to have ~20GB of space available for install.wim).
- “DISM /image:x:\xxx /add-driver /driver:Ryzen_USB_W764\ /recurse /forceunsigned”
- “DISM /unmount-wim /mountdir:x:\xxx /commit”

Windows 7 install.wim files contain four different OS variants, regardless of the officially stated edition of the media you have (Home, Professional, Ultimate).
The index order within the install.wim is always the same, regardless of the edition: Index 1 = Home Basic, Index 2 = Home Premium, Index 3 = Professional, Index 4 = Ultimate.

So if your media is for the Professional edition, you need to make the changes to Index 3. If it is an Ultimate media, then make the changes to Index 4, etc.

- “DISM /mount-wim /wimfile:install.wim /index:x /mountdir:x:\xxx” (x:\xxx = a temporary path of your choice, make sure to have ~20GB of space available for install.wim).
- “DISM /image:x:\xxx /add-driver /driver:Ryzen_USB_W764\ /recurse /forceunsigned”
- “DISM /unmount-wim /mountdir:x:\xxx /commit”

After you have added the drivers to both of the WIMs, you can install Windows 7 from a USB drive using a USB keyboard and mouse.
Make sure that you don't use USB ports provided by a 3rd party manufacturer (other than ASMedia), as there are still no drivers for those in the media.
After the installation, install the Relive chipset driver pack for Windows 7 (available at AMD.com).
Posted by watercooled - Sun 05 Mar 2017 17:50
Corky34
Doesn't DX12 allow developers to do their own optimisation and stuff though, I've not bothered looking into what each part of that involves but could that explain AMD's comment about game developers needing to patch their games, so DX12 games need patching while DX11 need an OS patch. :undecided

It depends on what sort of optimisation you're referring to. Things like task scheduling are down to the OS kernel, along with e.g. time slicing, prioritisation and interrupts. Even with DX12, games don't get bare-metal access to the computer hardware with no OS involvement, and that can't happen simply because many processes besides the game need to run alongside it. That's the sort of thing that used to happen on consoles like the 360, where IIRC the dashboard was more akin to a BIOS, and every game included and booted its own kernel, and hardware management was, in that case, the responsibility of the game developers.

I suspect that AMD comment was a bit of a misunderstanding on one side or another. Game developers, regardless of DX11 or 12, are still the ones responsible for compiling their code and, just for example, including vendor-specific codepaths to extract better performance*, choosing compiler flags, what instruction sets they allow the compiler to use, etc.

*A couple of random examples - conventional x86 vs AES-NI for cryptographic processing, different instruction latency for divides vs multiplies where in some cases they can be used interchangeably. And just because an instruction is available, doesn't mean it's the best option e.g. the y-cruncher developer found that using AVX on Bulldozer led to a performance regression for that code vs the SSE3 codepath. That's the sort of thing I think of when code optimisation is brought up.

On the subject of this discussion, the SMT issues are something that seems more to do with the OS scheduler than the games themselves. But given Ryzen is a new uArch, it's to be expected that developers will have the opportunity to learn the idiosyncrasies of the microarchitecture and get more out of it in future games (or patches if it's worth it).
Posted by scaryjim - Sun 05 Mar 2017 18:24
watercooled
It depends on what sort of optimisation you're referring to. Things like task scheduling are down to the OS kernel, along with e.g. time slicing, prioritisation and interrupts. ….

I wonder if one quick and simple optimisation (if this is even possible in Win 10) might be for the game exe to set its own processor affinity - so when it detects a Ryzen with more than 8 logical cores it sets an affinity mask that would only use one CCX … that should avoid the cache misses that might occur when Win 10 moves threads around.

Huh, in fact, I wonder if any reviewers have considered trying that to see if it has any effect on game performance… should be straightforward to start a game, run a benchmark, manually adjust the affinity, then rerun the benchmark…
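For anyone wanting to script that experiment rather than fiddle in Task Manager: on Windows it would be a SetProcessAffinityMask call (or `start /affinity FF game.exe` from a command prompt to restrict to logical CPUs 0-7); a rough Linux-only sketch of the same idea, using `os.sched_setaffinity`, might look like this:

```python
import os

# Pin this process to the first half of its allowed CPUs - on an
# 8C/16T Ryzen that approximates keeping it on one CCX.
# os.sched_setaffinity is Linux-only; on Windows the equivalent is
# SetProcessAffinityMask (via ctypes) or `start /affinity` from cmd.
allowed = sorted(os.sched_getaffinity(0))
first_half = set(allowed[: max(1, len(allowed) // 2)])
os.sched_setaffinity(0, first_half)

# Confirm the kernel accepted the mask before running the benchmark.
print(os.sched_getaffinity(0) == first_half)  # True on Linux
```

Run the benchmark once unpinned and once with the mask applied, and any difference should show how much the cross-CCX thread migration is costing.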
Posted by CAT-THE-FIFTH - Sun 05 Mar 2017 18:28
scaryjim
I wonder if one quick and simple optimisation (if this is even possible in Win 10) might be for the game exe to set its own processor priority - so when it detects a Ryzen with more than 8 logical cores it sets a processor affinity that would only use one CCX … that should avoid the cache misses that might occur when Win 10 moves threads around.

Huh, in fact, I wonder if any reviewers have considered trying that to see if it has any effect on game performance… should be straightforward to start a game, run a benchmark, manually adjust the affinity, then rerun the benchmark…

They already tested it - check my post earlier in the thread. 4+0>>>>>>>2+2.
Posted by scaryjim - Sun 05 Mar 2017 20:17
CAT-THE-FIFTH
They already tested it - check my post earlier in the thread. 4+0>>>>>>>2+2.

Ah, just found it - I missed it in the turn of a page ;)

CAT-THE-FIFTH
Time to buy SJ,a big virtual beer?? :p …

I'll take a small real beer instead, if you're offering :mrgreen:

EDIT: having finally read that review now (had to stop for a takeaway ;) ) I'm a bit disappointed that they didn't test the 4+0 with SMT on/off, would've been an interesting test to see if SMT is still an issue even without the cross-L3 latency. Nonetheless, interesting to see one of the suspected issues tested and demonstrated so easily - if AMD can introduce a CPU driver or MS can release a new Windows scheduler that knows not to move threads off a CCX, that should go a long way towards mitigating some of the gaming performance issues (seriously, that's what, a 10% performance boost just because you're not going across the CCX for L3 cache access…?)
Posted by CAT-THE-FIFTH - Mon 06 Mar 2017 06:26
scaryjim
Ah, just found it - I missed it in the turn of a page ;)



I'll take a small real beer instead, if you're offering :mrgreen:

EDIT: having finally read that review now (had to stop for a takeaway ;) ) I'm a bit disappointed that they didn't test the 4+0 with SMT on/off, would've been an interesting test to see if SMT is still an issue even without the cross-L3 latency. Nonetheless, interesting to see one of the suspected issues tested and demonstrated so easily - if AMD can introduce a CPU driver or MS can release a new Windows scheduler that knows not to move threads off a CCX, that should go a long way towards mitigating some of the gaming performance issues (seriously, that's what, a 10% performance boost just because you're not going across the CCX for L3 cache access…?)

It looks like Hardware.fr has done some more testing with SMT off.




But here is the big kicker!!



They tested in performance mode and found not all the regressions were actually only down to SMT! ;)
Posted by kalniel - Mon 06 Mar 2017 08:38
Core parking was found to slow down Intel chips back in the day as well - surprised it still causes problems.
Posted by scaryjim - Mon 06 Mar 2017 09:42
CAT-THE-FIFTH
… They tested in performance mode and found not all the regressions were actually only down to SMT! ;)

Ugh, too many variables!

So, BF1, Project Cars and Civ 6 all hit the caches hard - they're the titles with the biggest regression when you split the cores 2+2. Gaming's always been quite cache sensitive anyway, so that's not a huge surprise.

Warhammer and Watch Dogs 2 are the only games where SMT causes a significant performance degradation in balanced power mode. All other games show similar or greater performance in Performance mode with SMT on as in balanced mode with SMT off. Of course, we don't have figures for performance mode with SMT off, which might show that SMT is also contributing to the regression…

Yep, too many variables. It looks like there are all sorts of optimisations that could pick up 1% here and there, and those 1% can start to add up quickly…

kalniel
Core parking was found to slow down intel chips back in the day as well, surprised it still causes problems.

I'm surprised it seems to make up so much of the difference when switching to Performance mode, given AMD's insistence that the problem with Balanced mode is that it slows down the clock and voltage changes - in fact even more so, since they initially blamed core parking then “clarified” the response…
Posted by ik9000 - Mon 06 Mar 2017 13:50
A bit late to the thread, sorry if this is covered already but am I missing something or does even the high-end Zen only have 16 PCIe3 Lanes? Seems a bit small for a new architecture/chipset… Hopefully I've got this wrong.
Posted by CAT-THE-FIFTH - Mon 06 Mar 2017 14:07
ik9000
A bit late to the thread, sorry if this is covered already but am I missing something or does even the high-end Zen only have 16 PCIe3 Lanes? Seems a bit small for a new architecture/chipset… Hopefully I've got this wrong.

I don't think it is that much of a biggie since you need to buy a £600 Core i7 6850K to get any more on the Intel side.

Also, Ryzen is an SoC, so the PCI-E controller is in the CPU itself, which actually might be an advantage.

scaryjim
Ugh, too many variables!

So, BF1, Project Cars and Civ 6 all hit the caches hard - they're the titles with the biggest regression when you split the cores 2+2. Gaming's always been quite cache sensitive anyway, so that's not a huge surprise.

Warhammer and Watch Dogs 2 are the only games were SMT causes a significant performance degradation in balanced power mode. All other games show similar or greater performance in Performance mode with SMT on as in balanced mode with SMT off. Of course, we don't have figures for performance mode with SMT off, which might show that SMT is also contributing to the regression…

Yep, too many variables. It looks like there are all sorts of optimisations that could pick up 1% here and there, and those 1% can start to add up quickly…



I'm surprised it seems to make up so much of the difference when switching to Performance mode, given AMD's insistence that the problem with Balanced mode is that it slows down the clock and voltage changes - in fact even more so, since they initially blamed core parking then “clarified” the response…

I told you it was more like death by a 1000 cuts.
Posted by ik9000 - Mon 06 Mar 2017 16:12
CAT-THE-FIFTH
I don't think it is that much of a biggie since you need to buy a £600 Core i7 6850K to get any more on the Intel side.

No, that is a biggy. A high end CPU ought to offer more. You get 28 Lanes with the i7-5820K at £380.

They're saying they only want to target systems that need limited bandwidth: a multicore CPU and DDR4, but only one GPU at full speed, and no allowance for PCIe cards for NVMe drives, adding USB 3.1 or the like.
Posted by kalniel - Mon 06 Mar 2017 16:29
ik9000
A bit late to the thread, sorry if this is covered already but am I missing something or does even the high-end Zen only have 16 PCIe3 Lanes? Seems a bit small for a new architecture/chipset… Hopefully I've got this wrong.

It has 24 lanes, cf 16 for Intel's 7700K.
Posted by CAT-THE-FIFTH - Mon 06 Mar 2017 17:29
ik9000
No, that is a biggy. A high end CPU ought to offer more. You get 28 Lanes with the i7-5820K at £380.

They're saying they only want to target systems who need limited bandwidth. A multicore CPU, DDR4 but only 1GPU at full speed and no allowance for PCIe cards for NVMe drives, adding USB3.1 or the like.

It isn't really, since most multi-GPU systems on forums are using consumer socket CPUs like the Core i7 7700K, and 28 lanes on a Core i7 6800K won't give you 16X/16X dual cards anyway. Ryzen has 24.

Hence, most multi-card Intel systems will be running cards at 8X/8X, meaning you still have plenty of lanes left for other stuff with AM4.

Then add the fact that anything over two cards scales poorly anyway, and more and more games don't really do well with multiple cards, since they are developed for consoles first.
Posted by scaryjim - Mon 06 Mar 2017 17:56
ik9000
… A multicore CPU, DDR4 but only 1GPU at full speed and no allowance for PCIe cards for NVMe drives, adding USB3.1 or the like.

The SoC supports an x4 NVMe drive and USB3.1 Gen 2 natively on top of the x16 cluster for GPUs. The AM4 chipset supports at least one additional USB 3.1 Gen 2 and 2 USB 3.1 Gen 1 ports, plus various configs of SATA and SATA Express ports, and a cluster of PCIe 2.0 endpoints for peripherals. And when you're not using a chipset the Gen 3 x4 cluster that normally connects to the chipset is available for hanging other peripherals off.

In other words, it's got plenty of allowance for NVMe drives, USB 3.1 ports etc.

AM4 is a mainstream consumer/prosumer platform intended to compete in the same market as s115X. For people who must have more PCIe lanes there will be a workstation/server platform under the Opteron branding…
Posted by CAT-THE-FIFTH - Mon 06 Mar 2017 18:14
scaryjim
The SoC supports an x4 NVMe drive and USB3.1 Gen 2 natively on top of the x16 cluster for GPUs. The AM4 chipset supports at least one additional USB 3.1 Gen 2 and 2 USB 3.1 Gen 1 ports, plus various configs of SATA and SATA Express ports, and a cluster of PCIe 2.0 endpoints for peripherals. And when you're not using a chipset the Gen 3 x4 cluster that normally connects to the chipset is available for hanging other peripherals off.

In other words, it's got plenty of allowance for NVMe drives, USB 3.1 ports etc.

AM4 is a mainstream consumer/prosumer platform intended to compete in the same market as s115X. For people who must have more PCIe lanes there will be a workstation/server platform under the Opteron branding…

More importantly, Intel and its stupid product segmentation help AMD - you need to spend £600 on a Core i7 6850K to get above 28 lanes, and unless you really want to push the overclock on an R7 1800X, the B350 motherboards are far cheaper than an X99 one and AFAIK support XFire.
Posted by scaryjim - Mon 06 Mar 2017 18:19
CAT-THE-FIFTH
… the B350 motherboards are far cheaper than an X99 one and AFAIK support XFire.

Officially only the X370 and X300 support 2 x8 PCIe for GPUs (see the specs tab on http://www.amd.com/en-us/products/chipsets/am4), however the 970 chipset for AM3 didn't officially support Crossfire either but boards often claimed to ;)

I suspect some mobo manufacturers will provide a second physical x16 PCIe slot running at x4 (potentially only PCIe 2.0 x4 at that) and claim to be crossfire compatible…

I find it interesting that X300 supports 2 x8 slots though - that must mean AMD are anticipating it being used in mATX boards as well as mITX (since mITX doesn't have room for 2 slots!). And since we now have confirmation that X300/A300 can use the x4 that usually connects to the chipset, that actually makes a lot of sense…

EDIT: just realised I made a mistake in my last post - the SoC only supports USB 3.1 Gen 1, not Gen 2. Still, 4 USB 3 hanging directly off the CPU isn't bad, is it ;)
Posted by CAT-THE-FIFTH - Mon 06 Mar 2017 18:30
scaryjim
Officially only the X370 and X300 support 2 x8 PCIe for GPUs (see the specs tab on http://www.amd.com/en-us/products/chipsets/am4), however the 970 chipset for AM3 didn't officially support Crossfire either but boards often claimed to ;)

I suspect some mobo manufacturers will provide a second physical x16 PCIe slot running at x4 (potentially only PCIe 2.0 x4 at that) and claim to be crossfire compatible…

I find it interesting that X300 supports 2 x8 slots though - that must mean AMD are anticipating it being used in mATX boards as well as mITX (since mITX doesn't have room for 2 slots!). And since we now have confirmation that X300/A300 can use the x4 that usually connects to the chipset, that actually makes a lot of sense…

EDIT: just realised I made a mistake in my last post - the SoC only supports USB 3.1 Gen 1, not Gen 2. Still, 4 USB 3 hanging directly off the CPU isn't bad, is it ;)

OK, unofficially - but if you need 8 cores, an R7 1700 for £320 plus a £100 B350 board is barely more than the price of a Core i7 6800K itself.

Also,apparently people are getting decent overclocks on the R7 1700 using the STOCK COOLER.

It seems the Wraith Spire is not too bad at all.
Posted by scaryjim - Mon 06 Mar 2017 18:40
CAT-THE-FIFTH
… if you need 8 cores,an R7 1700 for £320 plus a £100 B350 is barely the price of a Core i7 6800K itself.

Also,apparently people are getting decent overclocks on the R7 1700 using the STOCK COOLER. ….

Yeah, as usual the lower power variants are the interesting ones (remember the Phenom II X3 720e and X4 910e? ;) ). Really looking forward to the quad core processors now… :D
Posted by ByteMyAscii - Mon 06 Mar 2017 18:51
Intel have had multiple generations to improve upon it.
Is there any information around on how the early implementations of Hyper-Threading impacted performance?
I highly doubt that was all perfect either.
Posted by CAT-THE-FIFTH - Mon 06 Mar 2017 18:51
scaryjim
Yeah, as usual the lower power variants are the interesting ones (remember the Phenom II X3 720e and X4 910e? ;) ). Really looking forward to the quad core processors now… :D

Well, I don't know still - I saw some testing on a forum where somebody tested FO4 under Windows 7 using a save shared amongst the forum members, and performance seems a bit meh - and the person tried the CCX tricks too! :( It seems in line with the Sweclockers review results, which placed it at SB level. By extension that means the same for Skyrim and possibly some other games too.

It's not bad even then, but since I am on an IB Core i7 already, I really don't know.

I think I will wait and see how Zen+ stacks up.
Posted by ik9000 - Mon 06 Mar 2017 19:08
scaryjim
The SoC supports an x4 NVMe drive and USB3.1 Gen 2 natively on top of the x16 cluster for GPUs. The AM4 chipset supports at least one additional USB 3.1 Gen 2 and 2 USB 3.1 Gen 1 ports, plus various configs of SATA and SATA Express ports, and a cluster of PCIe 2.0 endpoints for peripherals. And when you're not using a chipset the Gen 3 x4 cluster that normally connects to the chipset is available for hanging other peripherals off.

In other words, it's got plenty of allowance for NVMe drives, USB 3.1 ports etc.

According to the AMD website, even on the enthusiast X370 AM4 chipset, running NVMe at x4 leaves you with only 4x SATA III ports. Not enough for my current system.

But presumably it's supposed to be OK because there are SATA Express lanes to give 2x2 = 4 extra SATA III ports, at the loss of 2 PCIe 3.0 x1.

There are 2x USB 3.1, 10x USB 3.0 and 6x USB 2.0. That's masses! Is USB 3.0 faster or slower than SATA III? Can these be adapted? Are there any SATA II ports (BDR and HDD don't need SATA III)? Presumably not, as it doesn't say there are. Presumably these USB 3.0 channels are for external ports?

If only they had a few spare PCIe lanes. You can have 2 PCIe 3.0 lanes in lieu of the SATA Express, but only if you don't want those extra HDDs, optical drives, card readers etc. Then there's 1 PCIe 2.0 x8 and 2 PCIe 2.0 x1, but those are only available if you don't use NVMe. So just the PCIe 2.0 x8 left. Bit of a waste putting a storage controller on that just to get more SATA ports. So what about the soundcard and recording interface? Where would those go?

Best I can see is using NVMe, taking the 2 PCIe 3.0 x1 for the soundcard and interface (so no extra SATA III on board), and then using a card in the PCIe 2.0 x8 slot to give extra SATA III ports. That assumes the GPUs don't block access to any of them!

I wouldn't call the default 1x NVMe and 4 SATA III ports plenty.
Posted by Xlucine - Mon 06 Mar 2017 19:29
Look for a SATA port multiplier - it turns a SATA III port into several SATA I/II ports, with plenty of bandwidth for HDDs. Yours is a pretty weird use case though; it'll be handled better by the Opteron stuff.
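For a rough idea of why a multiplier is fine for spinning disks, here's a back-of-envelope sketch. Both figures are ballpark assumptions, not measurements: ~600 MB/s usable on a SATA III link and ~180 MB/s peak sequential per 7200rpm HDD.

```python
# Rough numbers behind the port-multiplier suggestion: several HDDs can
# share one SATA III link before it saturates. Both constants below are
# ballpark assumptions, not measured values.

SATA3_USABLE_MB_S = 600   # ~usable bandwidth of a SATA III port
HDD_PEAK_MB_S = 180       # ~peak sequential rate of a 7200rpm HDD

def drives_before_saturation(link=SATA3_USABLE_MB_S, drive=HDD_PEAK_MB_S):
    """How many HDDs can stream flat-out before the shared link fills up."""
    return link // drive

print(drives_before_saturation())  # drives at full tilt; real mixed
                                   # workloads rarely peak on every drive at once
```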
Posted by kalniel - Mon 06 Mar 2017 19:36
ik9000
According to the AMD website even on the enthusiast X370 AM4 chipset to run NVMe at x4 you then only get 4xSATA III ports remaining. Not enough for my current system.

But presumably it's supposed to be ok cos there's SATA express lanes to give 2x2=4 extra sata III lanes, but at the loss of 2 PCI3x1.

There are 2xUSB3.1, 10xUSB 3.0 and 6xUSB2. That's masses! Is USB3 faster or slower than SATA3? Can these be adapted? Are there any SATA2 (BDR and HDD don't need SATA3)? Presume not as it doesn't say there is. Presumably these USB3 channels are for external ports?

If only they had a few spare PCIe lanes. You can have 2No PCIe3 lanes in lieu of the sata express, but only if you don't want those extra HDD or optical drives, card readers etc. Then there's 1No PCEI2 x8 and 2No PCEI2x1. But those are only available if you don't use NVMe. So just a PCE2x8 left. Bit of a waste putting a storage controller onto that just to get more SATA ports. So what about the soundcard and recording interface? Those would go….?

Best I can see is using NVMe, taking the 2 PCI3x1 for soundcards, interface, (so no extra SATAIII on board) and then using a card in the PCIe2x8 port to give extra SATAIII ports. That assumes the GPUs don't block access to any of them!

I wouldn't call the default 1xNVME and 4 SATA III ports plenty.
Look at Intel boards and you'll find very similar limitations. What most manufacturers do is add an ASMedia chip and hang more ports off that. Take the board I'm looking at - the Asrock X370 Taichi - 8 SATA III ports from the chipset, 2 SATA III from the ASMedia chip, and one Ultra M.2 socket, all running at the same time (there are additional sockets which would require pinching PCIe bandwidth).

There's loads of lanes - 8 more than Intel's gaming chips make do with.
Posted by watercooled - Mon 06 Mar 2017 22:19
ik9000
No, that is a biggy. A high end CPU ought to offer more. You get 28 Lanes with the i7-5820K at £380.

They're saying they only want to target systems who need limited bandwidth. A multicore CPU, DDR4 but only 1GPU at full speed and no allowance for PCIe cards for NVMe drives, adding USB3.1 or the like.
As an aside to what's already been covered with regards to total IO bandwidth, just talking about x16 for GPUs - people make it out to be far more of a big deal than it really is. x8 is demonstrably plenty to keep a modern high-end GPU fed: https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_PCI_Express_Scaling/24.html

16 lanes of PCIe 3.0 is a *LOT* of bandwidth for desktop use. On the server side, as AMD are using multi-die MCM parts, they're able to make use of the IO on all of the dies. E.g. with Naples they can offer 64 lanes per CPU, vs 40 from Intel's Xeon.
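The bandwidth claim above is easy to sanity-check: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b line coding, so usable per-direction throughput falls out of simple arithmetic (the `pcie3_gb_per_s` helper name is just illustrative):

```python
# Back-of-envelope check on PCIe 3.0 link bandwidth, per direction.
# PCIe 3.0: 8 GT/s per lane, 128b/130b line coding.

def pcie3_gb_per_s(lanes):
    """Approximate usable GB/s, one direction, for a PCIe 3.0 link."""
    gt_per_s = 8.0          # transfers per second per lane (8 GT/s)
    encoding = 128 / 130    # 128b/130b coding efficiency
    return lanes * gt_per_s * encoding / 8   # 8 bits per byte

print(f"x8:  {pcie3_gb_per_s(8):.2f} GB/s")   # ~7.88 GB/s
print(f"x16: {pcie3_gb_per_s(16):.2f} GB/s")  # ~15.75 GB/s
```

So even an x8 link leaves nearly 8 GB/s each way for a single GPU, which matches the TechPowerUp scaling results linked above.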
Posted by scaryjim - Mon 06 Mar 2017 23:03
ik9000
According to the AMD website even on the enthusiast X370 AM4 chipset to run NVMe at x4 you then only get 4xSATA III ports remaining. Not enough for my current system. …

Have you actually checked the specs of any motherboards? The setup is hugely flexible, and there's lots of boards out there with lots of different setups. What do you actually *need*?

ASUS' Crosshair VI Hero and Prime X370-Pro (see this Hexus article) both offer 2 PCIe 3 slots for graphics and 4 PCIe 2 slots for peripherals, on top of 8 SATA and NVMe x4….
Posted by watercooled - Tue 07 Mar 2017 00:26
Just watched this video over on Youtube: https://www.youtube.com/watch?v=ylvdSnEbL50

IMO a very level-headed look at Ryzen, particularly gaming performance, and it highlights some of the faulty reasoning behind using very low resolutions as 'forward looking' benchmarks. TBH it's something I've sort-of noticed in recent benchmarks but not really had it register - Piledriver has begun closing the gap on, or outperforming, the 2500K it was pitted against in gaming benchmarks at its release, and again some places were playing down 'real-world' benchmarks (i.e. benchmarking games at settings you'll actually use), claiming that the performance difference would become worse as games became 'more demanding' in the future, or as more powerful GPUs became available. That simply hasn't happened. It seems it's not quite that easy to extrapolate future gaming performance.

Edit: Just to be clear I'm not ignoring e.g. SMT scaling in games, or *actual* gaming performance. Just saying, be careful how much you read into results.
Posted by kalniel - Tue 07 Mar 2017 11:15
watercooled
Just watched this video over on Youtube: https://www.youtube.com/watch?v=ylvdSnEbL50

IMO a very level-headed look at Ryzen, particularly gaming performance, and highlights some of the faulty reasoning for using very low resolutions as ‘forward looking’ benchmarks.

They're not used as forward looking benchmarks for the most part, just CPU performance comparisons which is valid. There are too many variables such as drivers and game profile changes that you can't predict to use them as ‘forward looking’.
Posted by watercooled - Tue 07 Mar 2017 22:45
Oh I don't disagree, but some people definitely do interpret low-resolution results as forward looking, even if reviews don't explicitly claim that (but I can think of a couple of sites using more than a dash of implication).

Using them to extrapolate general CPU performance is obviously not sensible either, as Ryzen's current gaming performance relative to Intel certainly does not correlate with performance in other areas. This difference is the reason people found the results strange, and why so many immediately looked into what was causing it.

IMO, testing at 1080p (i.e. not just 1440p+) is reasonable, and direct performance comparisons are valid, but much below that is not really relevant, as few people are likely to spend that amount on a CPU to play games at 720p. And as mentioned in the video, looking at some older games and picking fault with e.g. 250 vs 300fps is a bit daft really.

On another subject which seems to be getting lost in the commotion - AMD have really impressed me with Ryzen's power efficiency - and it's not like Ryzen is really low-clocked either, especially given its core count. Ryzen trades blows with both Broadwell-E and Kaby Lake depending on the application e.g. even though Ryzen's peak power is higher than Kaby Lake's, it is substantially faster at some tasks and can finish and return to idle faster, and idle power sits somewhere between the two platforms, but usually closer to Kaby Lake depending on the motherboard used.

I suppose it's not now surprising given AMD seemed to be sandbagging a bit on performance (perhaps unintentionally at first), but think back to those slides - AMD were, aside from avoiding clock speeds at the time and leading to negative speculation over final clocks, claiming “At = Energy Per Cycle”. They're comparing to Excavator so we don't have an ideal reference, but they seem to have easily achieved (or exceeded) that goal going on some ballpark comparisons to the Carrizo Athlons. But that's not to take away from what was achieved with Excavator either - for an iterative microarchitecture over Bulldozer it's fantastically more efficient whilst still being stuck on 28nm. It's a real shame that work didn't find its way to the desktop market in a significant way, but in a way I guess it laid a lot of groundwork for Ryzen to build on, WRT on-die power management for instance.

Two more major areas where AMD has struggled to compete in the past - single-thread, clock-for-clock performance, and per-thread performance (in multithreaded applications) - are now competitive. This isn't just an 8 core CPU competing with a 4 core only in multithreaded apps, and significantly behind when the core count is similar; it's properly competitive across a range of applications, even beating the 10 core Intel CPU in some areas. This sort of thing just wasn't what we saw with Piledriver. I'll still say Piledriver made for a solid CPU on release for many scenarios, but there were compromises you had to factor in, chiefly what I've mentioned here.
Posted by CAT-THE-FIFTH - Tue 07 Mar 2017 23:09
watercooled
Oh I don't disagree, but some people definitely do interpret low-resolution results as forward looking, even if reviews don't explicitly claim that (but I can think of a couple of sites using more than a dash of implication).

Using them to extrapolate general CPU performance is obviously not sensible either, as Ryzen's current gaming performance relative to Intel certainly does not correlate with performance in other areas. This difference is the reason people found the results strange, and why so many immediately looked into what was causing it.

IMO, testing at 1080p (i.e. not just 1440p+) is reasonable, and direct performance comparisons are valid, but much below that is not really relevant as few people are likely spend that amount on a CPU to play games at 720p. And as mentioned in the video, looking at some older gaming and picking fault with e.g. 250 vs 300fps is a bit daft really.

On another subject which seems to be getting lost in the commotion - AMD have really impressed me with Ryzen's power efficiency - and it's not like Ryzen is really low-clocked either, especially given its core count. Ryzen trades blows with both Broadwell-E and Kaby Lake depending on the application e.g. even though Ryzen's peak power is higher than Kaby Lake's, it is substantially faster at some tasks and can finish and return to idle faster, and idle power sits somewhere between the two platforms, but usually closer to Kaby Lake depending on the motherboard used.

I suppose it's not now surprising given AMD seemed to be sandbagging a bit on performance (perhaps unintentionally at first), but think back to these slides - AMD were, aside from avoiding clock speeds at the time and leading to negative speculation over final clocks, claiming “At = Energy Per Cycle”. They're comparing to Excavator so we don't have an ideal reference, but they seem to have easily achieved (or exceeded) that goal going on some ballpark comparisons to the Carrizo Athlons. But that's not to take away from what was achieved with Excavator either - for an iterative microarchitecture over Bulldozer it's fantastically more efficient whilst still being stuck on 28nm. It's a real shame that work didn't find it to the desktop market in a significant way, but in a way I guess it laid a lot of groundwork for Ryzen to build on WRT on-die power management for instance.

Two more major places AMD has struggled to compete in the past - single-thread, clock-for-clock, and per-thread (in multithreaded applications) performance are all competitive. This isn't just an 8 core CPU competing with a 4 core only in multithreaded apps, and significantly behind when the core count is similar, it's properly competitive across a range of applications, even beating the 10 core Intel CPU in some areas. This sort of thing just wasn't what we saw with Piledriver. I'll still say Piledriver made for a solid CPU on release for many scenarios, but there were compromises you had to factor in, chiefly what I've mentioned here.

I agree with many of these points, but I wish some of the tech tubers would also realise that just because a title is old, it does not mean even an SB CPU can hit 60FPS in it. I'll give you an example - Planetside 2. With an IB Xeon E3 1230 V2, at qHD with a GTX1080, I can just about hit 60FPS on max settings with shadows turned down, just walking around in an area with not much happening.

Now, during some of the big battles with 200ish people I have hit like 25FPS to 30FPS at times, and the game does not thread well on PC. Some of it is server-side, but it also just slams the CPU hard. Planetside 2 came out 5 years ago, and it has gotten better as time progressed (it uses more cores than when it first released), but it's the kind of game, like WoW, which won't suddenly be scaling to 8 cores anytime soon. They managed to port it to the PS4 to use six threads, and it still seems to have issues, but there is no word on whether they will bother anytime soon for the PC port, especially since the studio used to be owned by Sony but is now independent. There are a few other titles like FO4 which are the same, but most reviewers will never test Ryzen in these scenarios, which were amongst the worst for AMD CPUs like the FX8350.

The problem is, since so many of them are just looking at newer titles, they forget AMD tended to do reasonably well with Piledriver in newer titles.



Older engines tended to hit AMD the worst in games.

People need to just think a little here - if SB can hit 60FPS with a good enough card in all games, then is there any point in getting even a Ryzen CPU, even if it has more cores?

People would not change from an SB CPU to an 8-core Ryzen or a 4-core Kaby Lake CPU only for gaming if they were doing fine - they are upgrading because they lack performance.

Plus, if they are fine and just want MOAR cores, then it would make more sense to wait for Ryzen 2, which will improve in all areas. Piledriver and the Phenom II were decent improvements over their predecessors.
Posted by watercooled - Wed 08 Mar 2017 01:04
CAT-THE-FIFTH
I agree with many of these points,but I wish some of tech tubers also realise just because a title is old,it does not means even a SB CPU can hit 60FPS in some titles. I give you an example - Planetside2. With an IB Xeon E3 1230 V2,at qHD with a GTX1080 I can just about hit 60FPS on max settings with shadows turned down just walking around in an area with not much happening.
Yeah I specifically gave the 250-300fps thing as an example of the type of ‘old’ (or TBH, even some new) games I was referring to. Of course, there are still some older games (and, again, some newer ones) that don't perform well. I never noticed the PS2 example, but bear in mind I've not had time to read every post of every Ryzen thread so I'm not intentionally ignoring posts - I'm mostly skimming and aiming to reply to at least posts directed at me. I can't think of any examples that bad that I play myself though - even my 1055T struggles on surprisingly few games. Games like this seem to be in a minority now though, and won't impact many people, but if it's something you're aware of for a particular game you play, then like I've been saying - look for real benchmarks (if they exist) and be very careful when extrapolating from other results.

CAT-THE-FIFTH
People need to just think a little here - if SB can hit 60FPS with a good enough card in all games,then is there any point to get even a Ryzen CPU even if it has more cores?
Does SB achieve that? Of course, it depends on the games, the performance of those games, and what else the system is used for. And as we're seeing, an increasing number of games are more heavily threaded, so it's not like more cores is totally orthogonal to gaming performance. Even with the SMT thing, several reviewers have noted advantages such as subjectively less stuttering, tighter frame times, and much lower core loading, demonstrably allowing for e.g. streaming in the background with far less gaming performance degradation. Games like BF1 are capable of pegging all 8 threads of a 7700K at close to 100% already.

CAT-THE-FIFTH
People would not change from a SB CPU to a 8C core Ryzen or a 4C core Kaby Lake CPU only for gaming if they were doing fine?? They are upgrading since they lack performance.

Plus if they are fine and want MOAR cores,then it would make more sense to get Ryzen 2 which will improve in all areas. Piledriver and the Phenom II were decent improvements over their predecessors.
But what if they aren't fine? You make it sound like more cores is a bad thing - depending on what you're doing it can be by far the best option and a substantial upgrade over any existing 4C system. Ryzen 2 is still an unknown amount of time off in the future - there will always be something better if you wait long enough, and it's no reason to put off upgrading if it's already worthwhile.

I think some people are looking for *THE BEST* CPU. There simply isn't any one CPU that fulfils that criterion.

It's like the mobile core count debate - you have tons of people still claiming beyond x amount of cores is silly and useless for a mobile phone. It really isn't that simple though. While higher performance cores are indispensable in some areas, the difference in performance between many high-performance cores is negligible in use and overshadowed by software, and many cores are put to use in more areas than you'd imagine. As an example, I've noticed that with ART, whilst installing many Store apps on Android, what is presumably the AoT compilation simultaneously maxes 4 A72 cores. Many popular apps are well-threaded to good effect too.
Posted by CAT-THE-FIFTH - Wed 08 Mar 2017 01:54
watercooled
Yeah I specifically gave the 250-300fps thing as an example of the type of ‘old’ (or TBH, even some new) games I was referring to. Of course, there are still some older games (and, again, some newer ones) that don't perform well. I never noticed the PS2 example, but bear in mind I've not had time to read every post of every Ryzen thread so I'm not intentionally ignoring posts - I'm mostly skimming and aiming to reply to at least posts directed at me. I can't think of any examples that bad that I play myself though - even my 1055T struggles on surprisingly few games. Games like this seem to be in a minority now though, and won't impact many people, but if it's something you're aware of for a particular game you play, then like I've been saying - look for real benchmarks (if they exist) and be very careful when extrapolating from other results.


Does SB achieve that? Of course, it depends on the games, the performance of those games, and what else the system is used for. And as we're seeing, an increasing number of games are more heavily threaded so it's not like more cores is totally orthogonal to gaming performance. Even with the SMT thing, several reviewers have noted advantages such as subjectively less stuttering, tight frame times, and much lower core loading demonstrably allowing for e.g. streaming in the background with far less gaming performance degradation. Games like BF1 are capable of pegging all 8 threads of a 7700k at close to 100% already.

It's more the case, as a person who has been on socket 1155 since it launched, that SB/IB are starting to hit limitations more and more in some titles and their crap engines. I recently started using a faster card, and what I originally thought were GPU limitations were actually CPU limitations in PS2, Skyrim and FO4, especially in minimums. It's why some of the people saying SB/IB level in such games is OK (since Ryzen can be that level in older ones) is slightly annoying, since it's obviously not enough now and SKL/KL have actually decent minimums. People might mock the tiny percentage improvements in IPC, but after a few generations it has built up, as has the fact that for a person who does not overclock that much, stock clockspeeds have risen and so has memory bandwidth. So it can actually mean over 30% to 40% improvements, maybe even more if you run higher-speed RAM. Now if you hit 20FPS minimums, that is over 30FPS, or if you have 40FPS minimums, that is close to 60FPS constant.
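The compounding arithmetic in that paragraph is easy to sanity-check. A quick sketch - the individual IPC/clock/memory figures below are illustrative assumptions picked to land around the quoted range, not measured values:

```python
# Sanity check of the cumulative-uplift argument above: several small
# per-generation gains compound multiplicatively. The per-factor gains
# are illustrative assumptions, not measurements.
def compound(*gains):
    """Combine fractional per-factor gains, e.g. 0.15 == +15%."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total - 1.0

# e.g. ~15% accumulated IPC, ~20% stock clocks, ~10% memory bandwidth
uplift = compound(0.15, 0.20, 0.10)   # ~52% combined
print(f"combined uplift: {uplift:.0%}")
print(f"20 fps minimum -> {20 * (1 + uplift):.0f} fps")
print(f"40 fps minimum -> {40 * (1 + uplift):.0f} fps")
```

Three modest-sounding gains multiply out to roughly half again, which is how a 20fps minimum creeps over 30fps and a 40fps minimum approaches 60fps.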

I mean, I really hope that maybe it's the beta nature of the BIOSes gimping performance (memory comes to mind), as mentioned in that article I linked to. Part of the issue is also that some of the RTS games we play at LANs are more of the same, so whereas chasing single-core performance might ultimately be fruitless, I still would have expected an upgrade over what I have. It's why I am less enthused about it.

It might do better, but the jury is out until we get more numbers in a month.

The newest games I have played are ROTTR, Deus Ex:MD and Doom. ROTTR I finished a year ago since it's short, Doom ran fine on my previous card since it's well optimised, and I got bored of Deus Ex:MD, which was like Crysis 4 in that it could push the card. The Witcher 3 was also fine.

All of those newish games ran fine on an oldish CPU (so will probably run very well on even a 4C/8T Ryzen CPU, I suspect), but were more GPU-heavy outside Doom.

I only found titles on oldish engines pushed my CPU - it's because half the CPU is doing nothing.

watercooled
But what if they aren't fine? You make it sound like more cores is a bad thing - depending on what you're doing it can be by far the best option and a substantial upgrade over any existing 4C system. Ryzen 2 is still an unknown amount of time off in the future - there will always be something better if you wait long enough, and it's no reason to put off upgrading if it's already worthwhile.

I think some people are looking for *THE BEST* CPU. There simply isn't any one CPU that fulfils that criterion.

It's like the mobile core count debate - you have tons of people still claiming beyond x amount of cores is silly and useless for a mobile phone. It really isn't that simple though. While higher performance cores are indispensable in some areas, the difference in performance between many high-performance cores is negligible in use and overshadowed by software, and many cores are put to use in more areas than you'd imagine. As an example, I've noticed that with ART, whilst installing many Store apps on Android, what is presumably the AoT compilation simultaneously maxes 4 A72 cores. Many popular apps are well-threaded to good effect too.

Well, it's more the case that I think it's great AMD sells 8 reasonably fast cores for £320, as was your Phenom II X6 in its day too. But I think what has been missed in the whole Ryzen launch is that plenty of workloads still suit fewer cores - and I think some of the accusations that reviewers are trying to make Ryzen look worse are unfair, since if what they are testing comes out that way, what can they do??

Some of these random accusations don't consider that some review sites have actually been quite fair to AMD in the past. Sweclockers was probably the only website to test AMD and Intel CPUs under Mantle in BF4 MP, with 64 players on largish maps, to show how well it could perform over DX11, for example. They did say Ryzen had lower gaming performance than Intel, but actually showed how in some titles it was a bit meh, while in newer titles it could get closer to a Core i7 4790K or Core i7 6900K, yet could also fall below Core i7 3770K performance. Some titles showed the Core i7 7700K storming ahead and in others it wasn't.

OFC you are going to get reviewers of varying competency and bias, but seriously, I really think this social-media-spread “conspiracy” that reviewers are against AMD is just nuts - it's a way to quietly attack the media for reviews which are less than 100% positive, when AMD itself has said they need to improve performance, and we remember what happened in the past. Seriously, they are different designs, and just like an FX8350 or a Phenom II, Ryzen has different strengths and weaknesses.
Posted by mapesdhs - Wed 08 Mar 2017 02:44
CAT-THE-FIFTH
I don't think it is that much of a biggie since you need to buy a £600 Core i7 6850K to get any more on the Intel side.

Or just get a used X79 setup. ;D The last 3930K I bought only cost me £82, with a P9X79-E WS to go with it for under £200. Earlier, I bagged a used 4960X and P9X79-E WS for just over £400, hard to beat value-wise. And remember: even the 4c 4820K has 40 PCIe lanes, and it can oc like crazy given the high socket TDP (higher thermal limit per core). I won a cheap 4820K, and an ASUS R4E to go with it for £113 BIN.

NB: I'm kinda half kidding here. The CPUs are dirt cheap now, but of course the problem is finding a good mbd. For whatever reason, people seem to be valuing used X79 mbds a lot more now than was the case a year or two ago, and there are fewer around (refurb brokers were dumping loads of good boards 2 years ago). In May 2015 I built my brother an X79 setup with a used ASUS P9X79 Deluxe which only cost £75 off ebay; yesterday the same model on ebay went for £160. Oh, and I just upgraded his GPU with a used 980 for £178. :)


CAT-THE-FIFTH
It's not bad even then, but since I am on an IB Core i7 already, I really don't know.

Just curious, which one? The 3770K? Did you delid it? I've seen some amazing oc improvements in YT vids by those who replaced the TIM (one guy had a 35C load temp drop with a better TIM, after which it behaved more like SB temps-wise).



watercooled
Does SB achieve that? …

As a good guide, a 5GHz 2700K gives the same threaded performance as a stock 6700K (scores 880 for CB R15). I've not yet sorted the 1-thread scores for CB 11.5 and R15 on my system, but for CB 10 it scores 8294; this compares to stock scores for other CPUs of:

7700K: 9469
6700K: 8852
7600K: 8748
6700: 8400
6900K: 7717
1700X: 7649

The difference here vs. the 6700K probably explains the minor knock back in gaming performance compared to using the same GPU with a 6700K, even though the threaded CB R15 score is identical. Of course the 6700K can be oc'd as well, but that's a whole other ball game. :)
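Normalising those single-thread CB 10 scores against the stock 6700K makes the gap explicit; a throwaway sketch using only the numbers quoted above:

```python
# Single-thread Cinebench 10 scores quoted above, normalised to the
# stock 6700K to show the relative standing of each chip.
scores = {
    "7700K": 9469,
    "6700K": 8852,
    "7600K": 8748,
    "6700": 8400,
    "2700K @ 5GHz": 8294,
    "6900K": 7717,
    "1700X": 7649,
}

baseline = scores["6700K"]
gaps = {chip: s / baseline - 1 for chip, s in scores.items()}

for chip, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{chip:>13}: {gap:+.1%} vs stock 6700K")
```

The 5GHz 2700K lands roughly 6% behind the stock 6700K in single-thread terms, which lines up with the "minor knock back" described above.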

Re gaming, a year ago I built a system for a friend using auctioned SB/P67 parts (ASUS M4E, 5GHz 2700K, almost the same as my rig); the huge 200 UKP cost saving vs. a new 6700K allowed him to afford a new 980 Ti instead of a 980, which obviously meant better gaming performance overall. As mentioned above, there is a hit compared to using the same GPU on a newer platform, but it's pretty small. I don't think I can post URLs, but the Firestrike submission number is 8231433; a 2700K @ 5GHz gives a FS Physics score of 12842, i.e. 40.77fps vs. 57.34fps for a 1700X. Of course the 1700X has a nice power consumption advantage, though when using top-end GPUs I don't think people really care (NB: SB has such good temps that just a simple TRUE and one NF-P12 is enough to run at 5.0 with good temps and low noise). For comparison, a 3930K @ 4.8 gives a Physics score of 17430, i.e. 55.34fps, just a little behind the 1700X (FS submission no. 3988087).
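As an aside, the score/fps pairs just quoted imply the Fire Strike Physics score is a fixed multiple of the physics-test frame rate, roughly score ≈ 315 × fps. That constant is my inference from these results, not an official Futuremark figure, but a quick sketch shows it reproduces both numbers:

```python
# Convert a 3DMark Fire Strike Physics score back to the physics-test
# frame rate, assuming score ~= 315 * fps. The 315 constant is inferred
# from the score/fps pairs quoted above, not taken from documentation.
PHYSICS_SCALE = 315.0

def physics_fps(score):
    """Approximate physics-test fps for a given Fire Strike Physics score."""
    return score / PHYSICS_SCALE

results = {
    "2700K @ 5GHz": 12842,    # quoted as 40.77 fps
    "3930K @ 4.8GHz": 17430,  # quoted as 55.34 fps
}

for chip, score in results.items():
    print(f"{chip}: {physics_fps(score):.2f} fps")
```

Both recovered frame rates match the quoted figures to within rounding, which suggests the scaling assumption holds across these submissions.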

This more than anything else is what complicates matters for some existing SB/SB-E owners: SB/E was incredibly easy to oc, i.e. many are sitting on what are still very potent systems even though the base chipset lacks things like M.2, etc. If one could boot from NVMe on older chipsets, I suspect quite a few users wouldn't upgrade at all. Even then, with 40 PCIe lanes to play with on X79 no matter which CPU is fitted (including the 4c 4820K), one can fit an SM951/SM961 and at least use that as a data drive to hold game data, or as a cache drive for video editing, etc. For me, the whole 28-lane restriction on later CPUs was a huge step backwards, especially for those who cared more about multi-GPU compute than lots of CPU cores (RT3D under After Effects is a good example).

Ian.

PS. For giggles, have a look at Firestrike submission no. 4099529. I hold most of the 3DMark records for the P55 platform using this system. :D Handles multiple 980s a lot better than one might expect, but then P55 was a nice low latency design. 980 SLI and 7970 CF on an oc'd i5 760 also worked quite nicely. I'm looking forward to testing these with a 1080 Ti. :D
Posted by Bagnaj97 - Wed 08 Mar 2017 10:27
CAT-THE-FIFTH
The newest games I have played are ROTTR, Deus Ex:MD and Doom. ROTTR I finished a year ago since it's short, Doom ran fine on my previous card since it's well optimised, and I got bored of Deus Ex:MD, which was like Crysis 4 in that it could push the card. The Witcher 3 was also fine.

All of those newish games ran fine on an oldish CPU (so will probably run very well on even a 4C/8T Ryzen CPU, I suspect), but were more GPU-heavy outside Doom.

I only found titles on oldish engines pushed my CPU - it's because half the CPU is doing nothing.

It took me longer than I care to admit to realise that ROTTR wasn't Rise Of The Triad. I was very confused that you were struggling to run such an old title!
Posted by CAT-THE-FIFTH - Wed 08 Mar 2017 11:23
I have a Xeon E3 1230 V2/Core i7 3770, which is locked and stays between 3.5GHz and 3.7GHz, so if you had a 5GHz Core i7 2700K you would be doing far better. I also have a mini-ITX system. One thing SKL/KL do better is the memory controller - going from 2133MHz to 3000MHz RAM can yield up to 20% improvements in performance. In fact there are one or two hints Ryzen might be that way too - it's why I hope in a month or two BIOSes might improve. If that could yield a similar improvement in older games, it might change my perception of things.
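For what it's worth, the raw bandwidth jump behind that 2133MHz → 3000MHz figure is easy to put a number on. A back-of-envelope sketch, assuming a standard dual-channel setup with a 64-bit bus per channel (actual in-game gains depend heavily on the workload, hence "up to 20%"):

```python
# Theoretical peak bandwidth for DDR memory:
# transfers per second * 8 bytes per 64-bit transfer * channel count.
def ddr_bandwidth_gbs(mt_per_s, channels=2):
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return mt_per_s * 1e6 * 8 * channels / 1e9

slow = ddr_bandwidth_gbs(2133)  # ~34 GB/s dual-channel
fast = ddr_bandwidth_gbs(3000)  # 48 GB/s dual-channel

print(f"DDR4-2133: {slow:.1f} GB/s")
print(f"DDR4-3000: {fast:.1f} GB/s")
print(f"headroom:  {fast / slow - 1:.0%} more raw bandwidth")
```

Roughly 41% more raw bandwidth on tap, of which memory-sensitive titles apparently convert up to half into frame rate.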

Bagnaj97
It took me longer than I care to admit to realise that ROTTR wasn't Rise Of The Triad. I was very confused that you were struggling to run such an old title!

LOL, even though ROTTR can push the CPU more in some areas, like all of those games an IB Core i7 seemed less of a limiting factor than the graphics card, and as you know Doom seems very well optimised. By extension a 4C/8T Ryzen would probably be fine too (once SMT, etc. is sorted).

It's these blasted titles running on older engines where I see the CPU being pushed.
Posted by CAT-THE-FIFTH - Wed 08 Mar 2017 11:37
Ryzen 2 early next year it seems:

https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fwww.pcgameshardware.de%2FAMD-Zen-Codename-261795%2FNews%2FZen-2-Pinnacle-Ridge-Release-Termin-1222222%2F&edit-text=&act=url

I think that is where it will be at TBH.
Posted by CAT-THE-FIFTH - Wed 08 Mar 2017 17:07
Another day another potential issue which might have hampered performance at launch:

http://hwbot.org/newsflash/4335_ryzen_platform_affected_by_rtc_bias_w88.110_not_allowed_on_select_benchmarks

Edit!!

der8auer
Just ban Gigabyte boards for using old AGESA and make mainboard tab mandatory for AM4 submissions. Not sure about MSI or ASRock but it's not possible to do this on ASUS.

I seriously think with any AMD launch you need +3 months added to the launch to actually get an accurate picture!! :p
Posted by watercooled - Wed 08 Mar 2017 19:08
CAT-THE-FIFTH
OFC you are going to get reviewers of varying competency and bias, but seriously, I really think this social-media-spread “conspiracy” that reviewers are against AMD is just nuts - it's a way to quietly attack the media for reviews which are less than 100% positive, when AMD itself has said they need to improve performance, and we remember what happened in the past. Seriously, they are different designs, and just like an FX8350 or a Phenom II, Ryzen has different strengths and weaknesses.

I'm not meaning to imply some sort of conspiracy or journalistic integrity issue; it's more of a rant about how results are interpreted, and in some cases presented. Some reviews do have very suggestive wording, e.g. AMD did not ‘tell reviewers not to use 1080p benchmarks’ as a couple of sites are implying and ranting about; as I understand it, they suggested at least running some higher-resolution benchmarks for comparison. I still think it was a poor move on AMD's part as, of course, this sort of attention was almost guaranteed regardless of their intention. And at the end of the day, it's a guide - it's not binding, and for anyone who doesn't know, reviewer guides are the norm for products like CPUs, GPUs, etc.

The strengths and weaknesses thing is true of any CPU, so I think it's a little unfair to compare it to an 8350 (assuming you meant it that way). It's not nearly the same sort of compromised release as Bulldozer. Like I say there is no single ‘best CPU’ - you have to base judgement on the priorities for a given system. Ryzen can stand alone as a solid processor regardless of price comparisons.

It's like some people are angry and disappointed that Ryzen isn't the fastest CPU across the board, and all for $200. It looks like Zen is a very balanced design and a solid first step on a new series of cores, besting many optimistic predictions and even AMD's own public goals.

I agree with you about Zen2 - Zen had to be finalised at some point to meet deadlines and AMD have IIRC openly stated that some planned features didn't make it into gen1, likely due to time and transistor budget constraints. Having said that, I think they prioritised well with the features they concentrated on. I expect a couple of things may be wider AVX and maybe the clock speed of the on-die CCX (it seems to have the same width as Intel's ring bus but runs at DDR clocks rather than core clocks like Intel's implementation). TBH I can't think of any other obvious areas.
Posted by CAT-THE-FIFTH - Wed 08 Mar 2017 20:39
watercooled
I'm not meaning to imply some sort of conspiracy or journalistic integrity issue; it's more of a rant about how results are interpreted, and in some cases presented. Some reviews do have very suggestive wording, e.g. AMD did not ‘tell reviewers not to use 1080p benchmarks’ as a couple of sites are implying and ranting about; as I understand it, they suggested at least running some higher-resolution benchmarks for comparison. I still think it was a poor move on AMD's part as, of course, this sort of attention was almost guaranteed regardless of their intention. And at the end of the day, it's a guide - it's not binding, and for anyone who doesn't know, reviewer guides are the norm for products like CPUs, GPUs, etc.

The strengths and weaknesses thing is true of any CPU, so I think it's a little unfair to compare it to an 8350 (assuming you meant it that way). It's not nearly the same sort of compromised release as Bulldozer. Like I say there is no single ‘best CPU’ - you have to base judgement on the priorities for a given system. Ryzen can stand alone as a solid processor regardless of price comparisons.

It's like some people are angry and disappointed that Ryzen isn't the fastest CPU across the board, and all for $200. It looks like Zen is a very balanced design and a solid first step on a new series of cores, besting many optimistic predictions and even AMD's own public goals.

I agree with you about Zen2 - Zen had to be finalised at some point to meet deadlines and AMD have IIRC openly stated that some planned features didn't make it into gen1, likely due to time and transistor budget constraints. Having said that, I think they prioritised well with the features they concentrated on. I expect a couple of things may be wider AVX and maybe the clock speed of the on-die CCX (it seems to have the same width as Intel's ring bus but runs at DDR clocks rather than core clocks like Intel's implementation). TBH I can't think of any other obvious areas.

Yes, but even the FX8350 and Phenom II CPUs had strengths and weaknesses - even if AMD got to the same level of IPC as Skylake, there would be situations where AMD would trash Intel and other areas where it would lose. If you listen to what David Kanter had to say about the design in this talk, it was quite interesting:
https://www.pcper.com/reviews/Processors/Dissecting-AMD-Zen-Architecture-Interview-David-Kanter

But AMD has done themselves zero favours with this launch - even on the OcUK forums there are reports of some Asus motherboard models bricking themselves, and if you look at my previous post on this page, another set of issues has cropped up.

Now I have noticed people trying to blame the motherboard companies exclusively, even though there is noise they only had a few weeks to get final BIOSes out.

This is another part of the problem - if reviewers ended up getting the worst of the boards AMD sent out, they will most likely have encountered problems, as opposed to those who had better boards.

You see people framing the more negative reviews, which have worse results, as being against AMD - but the issue again is that the launch was sloppy.

I have said this for years - AMD KEEPS doing launches like this. Can you remember the last time a brand new AMD desktop CPU or GPU launch didn't have an issue that people like us had to explain to somebody was fixable?? R9 290X, anybody?? Now see how Hawaii is probably one of the most solid GPUs designed in the last few years.

The problem is it really undersells their products big time, and it's why even when AMD has graphics cards which are the equal of or better than Nvidia's, they still sell fewer cards. They are seen as more “problematic” on average than competitors, and it means the average Joe or Jane goes for Intel or Nvidia. Intel and Nvidia are also far better at hiding problems.

Zen isn't another Athlon 64 - it's more like a Phenom II of sorts (but with better power consumption), so maybe more like a Phenom II-plus launch. I also keep hearing some of the stuff on other forums about the death of Intel, and I think it is premature. It's why AMD getting Ryzen 2 out in a year is good news. In the past they tended to drag out the time between releases, which could make them fall back a bit, and it means Coffee Lake will most likely be fighting Zen 2, not Zen.
Posted by watercooled - Wed 08 Mar 2017 21:46
CAT-THE-FIFTH
Yes, but even the FX8350 and Phenom II CPUs had strengths and weaknesses - even if AMD got to the same level of IPC as Skylake, there would be situations where AMD would trash Intel and other areas where it would lose.
Oh I see what you mean - I suspected I might have been misunderstanding that bit.

CAT-THE-FIFTH
Now I have noticed people trying to blame the motherboard companies exclusively, even though there is noise they only had a few weeks to get final BIOSes out.
That's probably just fanboys being fanboys. The way I see it, there are a number of problems that could and should have been ironed out by AMD prior to release.

CAT-THE-FIFTH
Zen isn't another Athlon 64 - it's more like a Phenom II of sorts (but with better power consumption), so maybe more like a Phenom II-plus launch.
Yeah that's one way of looking at it. Zen seems more competitive than Phenom II was at launch though.
Posted by CAT-THE-FIFTH - Wed 08 Mar 2017 22:11
watercooled
Yeah that's one way of looking at it. Zen seems more competitive than Phenom II was at launch though.

Well, you can't say it's an Athlon 64 though - the Athlon 64 beat Intel in almost any workload, including gaming, with all its teething problems, and Intel had nothing competitive outside an emergency-edition Pentium 4; even then the Athlon X2 came along and dished out more punishment. It's not the original Athlon either, since, again, that dominated almost every Intel CPU at the time - although Intel did have the P3, they had more issues getting the clockspeed up:

http://www.tomshardware.co.uk/intel-admits-problems-with-pentium-iii-1,review-230.html

The Athlon and Athlon 64 had chipset issues, and that was partially down to Intel apparently putting pressure on companies to favour them for motherboards, so when you take that into consideration it's even more amazing what they did. The Athlon XP could also compete on all levels and beat Intel even in gaming and productivity, but things were a bit closer at the time, as the P4 could get close towards the end, especially when AMD couldn't push the clockspeeds high enough.

OTOH, the XP-M was one heck of a chip if you could get one.

OK, maybe not a Phenom II Plus, but it's probably more between a Phenom II and an Athlon XP.
Posted by mapesdhs - Thu 09 Mar 2017 11:56
CAT-THE-FIFTH
I have a Xeon E3 1230 V2/Core i7 3770, which is locked and stays between 3.5GHz and 3.7GHz, so if you had a 5GHz Core i7 2700K you would be doing far better. …

Ah I see! Yes indeed, an oc'd 2700K is quite a step up, despite IB's IPC advantage.

Note that I specifically focus on the 2700K. The 2600K is of course also a good CPU for oc'ing, but its potential does vary - hard to be sure what any particular unit can handle. By contrast, every 2700K I've obtained (seven so far) can run at 5GHz with ease, and at a sensible voltage. I'm sure one could push them further (5.2, maybe 5.3?), but I've never tried; 5 is plenty and sits well with the eyeballs on a CPU-Z. :D Here's my system (been using it for about 5 years now):

http://www.overclock.net/lists/display/view/id/2415471

I plan on upgrading to 4K soon though, for which I'll switch it around with my other system, fit a 1080 Ti and likely fit an SM961 to hold game data (then the 2700K will drive the HDTV, for which a 980 is plenty):

http://www.overclock.net/lists/display/view/id/6211115



CAT-THE-FIFTH
… One thing SKL/KL do better is the memory controller - going from 2133MHz to 3000MHz RAM can yield up to 20% improvements in performance. …

Very much depends on the task, but yes some definitely benefit, especially in the prosumer space. After Effects benefits from faster RAM (I observed a 10% slowdown in RT3D renders when changing RAM from 2133 to 1866), and of course lots of it. Friend of mine is working on a project atm which easily gobbles 40GB, hence why I reckon he'd be better off waiting for Naples (the 3930K system I built for him already has 64GB).

A lot of games though, based on site reviews, are less affected by RAM speed, often even less so by latency. It amuses me when a RAM kit review says one brand “beats” another in a particular game when the difference isn't even a fraction of the margin of error.



CAT-THE-FIFTH
In fact there are one or two hints Ryzen might be that way too - it's why I hope in a month or two BIOSes might improve. If that could yield a similar improvement in older games, it might change my perception of things.

Must admit, reading all the forum comments, etc., I'm thinking of leaving my 1700X build for just a little while, perhaps until the end of March, let the BIOS updates roll through somewhat. I'll have enough to do benching a 1080 Ti anyway.



CAT-THE-FIFTH
It's these blasted titles running on older engines where I see the CPU being pushed.

That was something which always annoyed me about the way Crysis was lauded so much as a benchmark: nobody bothered to ask whether the way it hammered GPUs was perhaps because the engine just wasn't that well written, no matter how good the visuals looked (and I do love the visuals). The speedups obtained with Crysis Warhead suggested this was indeed the case to some extent. Eventually though, my wanting to play at higher custom detail/res pushed the load onto the GPU, so for me CPU power was less relevant. I was pushing the VRAM limit of two 580 3GB SLI due to the custom mod settings I use, high res shadows, very long draw distances, etc. A single 980 worked better, but still only manages 45fps (reeeally high detail! :D I hate popup in games).

Ian.
Posted by scaryjim - Thu 09 Mar 2017 12:11
watercooled
… The way I see it, there are a number of problems that could and should have been ironed out by AMD prior to release. ….

Whilst there were almost certainly issues that AMD knew about before launch, I think it's worth remembering that some of these issues are things that AMD themselves couldn't necessarily fix.

For instance, they can't rewrite the Windows scheduler themselves. Nor can they write fixes for BIOS/UEFI bugs or inconsistencies. After all, mobo manufacturers have had AM4 specifications for at least 6 months - we saw OEM AM4 systems at retail last year, and Ryzen ES have been doing the rounds for months. You have to question why those board manufacturers needed to release multiple BIOS updates in the week prior to release…

AMD don't have a huge amount of market share or financial muscle to throw around, sadly. They're a relatively small fish in a relatively big pond, so they're more likely to be dictated to by other companies than vice versa. I'm sure they'll have had hard deadlines for the launch, set for business reasons that we're not privy to (investors are a tricky breed to keep happy ;) ). So it's quite possible that they've been working on these issues for months but have been sandbagged by other companies not giving them high priority.

It could also be the main reason they're holding the Ryzen 5 & 3 launches back - if they knew the Windows 10 scheduler makes Zen's single-threaded performance look bad it would make sense to launch the chips that are meant to be used for monstering multi-threaded tasks first and hold back the lower core-counts for once the scheduler gets fixed…
Posted by mapesdhs - Thu 09 Mar 2017 12:30
CAT-THE-FIFTH
Well, you can't say it's an Athlon 64 though - the Athlon 64 beat Intel in almost any workload, including gaming, with all its teething problems, and Intel had nothing competitive outside an emergency-edition Pentium 4; even then the Athlon X2 came along and dished out more punishment. …

I found RAM speed was a terrible bottleneck for P4s. I had a Dell 650 with two P4 XEONs, but its DDR266 RAM really held it back, it was stomped by an Athlon64 using DDR400. For 3D tasks, as the res/detail went up, the AMD machine could be as much as six times faster:

http://www.sgidepot.co.uk/misc/mysystemsummary.txt

Back then, the only thing I missed about moving off the Dell 650 was losing the hw U320 SCSI RAID for the C-drive, but of course later SSDs came along to save the day.



scaryjim
… if they knew the Windows 10 scheduler makes Zen's single-threaded performance look bad it would make sense to launch the chips that are meant to be used for monstering multi-threaded tasks first and hold back the lower core-counts for once the scheduler gets fixed…

That's a very good point, sounds distinctly plausible. Surprising that tech site commentaries don't mention this.

Ian.
Posted by scaryjim - Thu 09 Mar 2017 12:54
mapesdhs
… That's a very good point, sounds distinctly plausible. Surprising that tech site commentaries don't mention this. …

Well, it's wild speculation on my part, so it'd be a brave journalist who made it a selling point of an article ;)

OTOH if you look at AMD's launch videos, they talk a lot about prosumer and creative workloads, then say “Oh, but we know you're interested in gaming too”. The focus was definitely on those heavily threaded workloads where Ryzen 7 shines. I think we'll learn more from the timing of the other releases, in particular the quad-core parts that will make up the lower Ryzen 5 and Ryzen 3 CPUs - they're not going to monster multi-threaded tasks anywhere near as well, so AMD really needs to get the most out of their single-threaded performance. I suspect that means we'll be waiting a while longer for those parts, while scheduler fixes and drivers get released and tested properly…
Posted by kalniel - Thu 09 Mar 2017 13:21
I think it's more that the professional market is worth a lot more in terms of revenue - they know they don't have long to steal market share from Intel so hit with the best return as soon as possible.
Posted by CAT-THE-FIFTH - Thu 09 Mar 2017 13:39
mapesdhs
I found RAM speed was a terrible bottleneck for P4s. I had a Dell 650 with two P4 XEONs, but its DDR266 RAM really held it back, it was stomped by an Athlon64 using DDR400. For 3D tasks, as the res/detail went up, the AMD machine could be as much as six times faster:

http://www.sgidepot.co.uk/misc/mysystemsummary.txt

Back then, the only thing I missed about moving off the Dell 650 was losing the hw U320 SCSI RAID for the C-drive, but of course later SSDs came along to save the day.

Well, I had an XP 2800 (Barton core) and a P4 2.53GHz (Northwood B) at the time, and I think the Northwood B was about the closest Intel got to actually reaching parity with the Athlon line… until the Athlon 64 came and blew the back doors off the P4!! :)

The XP 2800 was more of a commiseration purchase though - I was after an XP-M, and only OcUK sold them at the time; stock was like gold dust.


mapesdhs
Ah I see! Yes indeed, an oc'd 2700K is quite a step up, despite IB's IPC advantage.

Note that I specifically focus on the 2700K. The 2600K is of course also a good CPU for oc'ing, but its potential does vary - hard to be sure what any particular unit can handle. By contrast, every 2700K I've obtained (seven so far) can run at 5GHz with ease, and at a sensible voltage. I'm sure one could push them further (5.2, maybe 5.3?), but I've never tried; 5 is plenty and sits well with the eyeballs on a CPU-Z. :D Here's my system (been using it for about 5 years now):

http://www.overclock.net/lists/display/view/id/2415471

I plan on upgrading to 4K soon though, for which I'll switch it around with my other system, fit a 1080 Ti and likely fit an SM961 to hold game data (then the 2700K will drive the HDTV, for which a 980 is plenty):

http://www.overclock.net/lists/display/view/id/6211115



I also use an SFF rig, which is partly why I didn't go the K series route, and the Xeon E3 series were well under £200, which was a decent saving. Regarding the RAM, certain titles could actually see a decent uplift of up to 20% from running it at 3000MHz instead of 2133MHz with SKL/KBL, which was surprising, and indicates Intel is probably starting to hit memory bandwidth limitations in some ways too.

Also, Crysis was awesome - even without mods, and with mods it looks awesome even now. It was the last time I actually got excited enough about a game to justify building a new rig for it (also it was for work-related stuff too).
Posted by watercooled - Thu 09 Mar 2017 17:56
scaryjim
Whilst there were almost certainly issues that AMD knew about before launch, I think it's worth remembering that some of these issues are things that AMD themselves couldn't necessarily fix.
That is true. To an extent you'd hope AMD would have been pushing the likes of MS and their board partners to get this sort of issue resolved before release, but a combination of timing and the points you mention are plausible reasons things haven't quite happened that way.

scaryjim
It could also be the main reason they're holding the Ryzen 5 & 3 launches back - if they knew the Windows 10 scheduler makes Zen's single-threaded performance look bad it would make sense to launch the chips that are meant to be used for monstering multi-threaded tasks first and hold back the lower core-counts for once the scheduler gets fixed…
Yeah that's exactly what I've been thinking - get *something* out the door and put up with some launch-day negativity in order to get this sort of issue ironed out, both for the other CPUs and what are probably more high-volume (on the consumer side) APUs. TBH it's a potentially smart way of forcing re-testing of Ryzen with newer software etc.
Posted by DanceswithUnix - Fri 10 Mar 2017 08:54
watercooled
IMO a very level-headed look at Ryzen, particularly gaming performance, and highlights some of the faulty reasoning for using very low resolutions as ‘forward looking’ benchmarks. TBH it's something I've sort-of noticed in recent benchmarks but not really had it register - Piledriver has begun closing the gap or outperforming the 2500k it was pitted against in gaming benchmarks on its release, and again some places were playing down ‘real-world’ benchmarks (i.e. benchmarking games at settings you'll actually use), claiming that the performance difference would become worse as games became ‘more demanding’ in the future, or as more powerful GPUs became available. That simply hasn't happened. It seems it's not quite that easy to extrapolate future gaming performance.

That was an interesting video, it does amuse me when I see the 8350 doing well in modern benchmarks (I bought mine for simulating payment systems, which I no longer do but it hasn't let me down in my meagre gaming needs enough for replacement so far).

In the past I have benchmarked games at different resolutions and found some strange anomalies in the results. It seems to me that games and drivers are tuned for popular resolutions, which makes a lot of sense. Game writers will target 1080p as monitors are cheap and plentiful, and people at that resolution probably can't afford the latest graphics card and could do with all the help they can get. Popularity in benchmarks then makes the graphics card driver writers target the specific cases, as a humanly undetectable 5% frame rate difference can be the difference between winning and losing against competing cards, which translates to sales.

Driver writers optimising for popular resolutions helps players at those resolutions, I don't want that effort going into the likes of 640x480, I think that is what annoys me most when seeing low res benchmarks.
Posted by ik9000 - Fri 10 Mar 2017 10:02
DanceswithUnix
Driver writers optimising for popular resolutions helps players at those resolutions, I don't want that effort going into the likes of 640x480, I think that is what annoys me most when seeing low res benchmarks.

but how else will they prepare to release it on mobile in 5 years time? ;)
Posted by CAT-THE-FIFTH - Fri 10 Mar 2017 10:34
ik9000
but how else will they prepare to release it on mobile in 5 years time? ;)

I was looking at some of the old reviews of the Athlon, Athlon XP, Athlon 64 and Athlon X2, and there were plenty of silly low resolution and normal resolution tests - it didn't matter what resolution, 9/10 they won, and it was fine back then and nobody was moaning! ;)

We need to be careful about getting too much into reading social media, everybody - soon 1080P won't be enough, as we are now hearing rumblings that it is too low even though it is the most common resolution. At this rate everybody will be testing at 5K by next week! :p
Posted by DanceswithUnix - Fri 10 Mar 2017 11:10
ik9000
but how else will they prepare to release it on mobile in 5 years time? ;)

I thought even budget phones were 1080p these days, I shudder to think what they will be doing in 5 years.

CAT-THE-FIFTH
I was looking at some of the old reviews of the Athlon, Athlon XP, Athlon 64 and Athlon X2, and there were plenty of silly low resolution and normal resolution tests - it didn't matter what resolution, 9/10 they won, and it was fine back then and nobody was moaning! ;)

We need to be careful about getting too much into reading social media, everybody - soon 1080P won't be enough, as we are now hearing rumblings that it is too low even though it is the most common resolution. At this rate everybody will be testing at 5K by next week! :p

At least back in the fixed function graphics card days they were for quite a while bottlenecked on simple pixel shader performance, so if you took the graphs for how a game scaled over different resolutions and turned them into megapixels, e.g. 640x480 -> 0.3, 1024x768 -> 0.79, then you got a lovely straight slope on the graph where it was GPU limited. Surprised I never saw that in reviews at the time.

So I think back then it kind of looked valid or at least was of some interest. These days shader programs are far too complex for such simplicities.
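For what it's worth, the megapixel conversion is trivial to script (a quick Python sketch, purely illustrative):

```python
# Convert common test resolutions to megapixels. Under a pure
# fill-rate/pixel-shader limit, frame time scales roughly with
# pixel count, so fps plotted against megapixels gives that
# straight slope when a game is GPU limited.
resolutions = [(640, 480), (1024, 768), (1280, 1024), (1920, 1080)]

for w, h in resolutions:
    print(f"{w}x{h} -> {w * h / 1e6:.2f} MP")
```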
Posted by CAT-THE-FIFTH - Fri 10 Mar 2017 11:31
DanceswithUnix
I thought even budget phones were 1080p these days, I shudder to think what they will be doing in 5 years.



At least back in the fixed function graphics card days they were for quite a while bottlenecked on simple pixel shader performance, so if you took the graphs for how a game scaled over different resolutions and turned them into megapixels, e.g. 640x480 -> 0.3, 1024x768 -> 0.79, then you got a lovely straight slope on the graph where it was GPU limited. Surprised I never saw that in reviews at the time.

So I think back then it kind of looked valid or at least was of some interest. These days shader programs are far too complex for such simplicities.

But the issue is, even AMD really does not want fully GPU limited benchmarks either - some games will show bottlenecks even at qHD (the ones based on older engines), but a lot will probably be mostly GPU limited, and it runs the risk that if you use, say, an RX470 at 1080P instead of a GTX1080, a gamer will see that their old Core i5 4570 is hitting similar framerates (say within a few percent) to the new Ryzen 2 2800X in 7/10 or 8/10 titles at 1080P, which means they will think AMD sucks since its latest £500 behemoth can't even convincingly beat their £500 PC.
Posted by scaryjim - Fri 10 Mar 2017 12:04
DanceswithUnix
I thought even budget phones were 1080p these days…

I think there are one or two out there that make do with 720p, but they're getting fewer and further between…!


I was going to write a lengthier comment about the difference in software ecosystems between now and 2002, but tbh while the point I was going to make (there are more games now and their performance relative to each other is a lot closer) is definitely valid, there are enough exceptions that someone's bound to try to argue me down.

So I'll leave it at: I think 1080p results with a top-end GPU today are probably less relevant than 1024x768 with a top end GPU back in 2003, simply because back then game engines were catching up to GPU technology rapidly, whereas today they're much more closely aligned to start with. And I'd question the relevance of both of them to the majority of gamers anyway…

EDIT for crosspost:

CAT-THE-FIFTH
… they will think AMD sucks since their latest £500 behemoth can't even convincingly beat their £500 PC.

Is that really a useful argument though, CAT? Would someone who last time spent £500 on their entire PC really be reading a review of a £500 CPU thinking “I wonder if this is a sensible upgrade to my PC?”. If they're looking at a £500 CPU that would suggest that either their budget or their priorities have changed. If it's the budget, then they'll probably be looking at a new GPU and monitor anyway. If it's their priorities, then presumably they'll be looking to improve their performance in other workloads without reducing their gaming capability, in which case AMD not beating their i5 won't be an issue.

There are lots of interesting things you can look at in a review. However, many of them have only questionable relevance. To me, it makes no sense to test a £500 CPU paired with a £600 GPU as if it were attached to a mainstream < £250 monitor.

Similarly, it will make no sense to me when the < £300 R5 chips get tested with a GTX 1080 or 1080 Ti. I'd hope we'll see those reviewed in a sensible mid-range build with a 1060 or a 480. But only time will tell…
Posted by CAT-THE-FIFTH - Fri 10 Mar 2017 12:41
scaryjim

Is that really a useful argument though, CAT? Would someone who last time spent £500 on their entire PC really be reading a review of a £500 CPU thinking “I wonder if this is a sensible upgrade to my PC?”. If they're looking at a £500 CPU that would suggest that either their budget or their priorities have changed. If it's the budget, then they'll probably be looking at a new GPU and monitor anyway. If it's their priorities, then presumably they'll be looking to improve their performance in other workloads without reducing their gaming capability, in which case AMD not beating their i5 won't be an issue.

There are lots of interesting things you can look at in a review. However, many of them have only questionable relevance. To me, it makes no sense to test a £500 CPU paired with a £600 GPU as if it were attached to a mainstream < £250 monitor.

Similarly, it will make no sense to me when the < £300 R5 chips get tested with a GTX 1080 or 1080 Ti. I'd hope we'll see those reviewed in a sensible mid-range build with a 1060 or a 480. But only time will tell…

You mean like all the people buying Nvidia cards, since the top end model beats AMD, even though AMD does better at cheaper price-points??

It does not change the fact - YOU are pushing for 100% GPU limited testing and don't like the fact that I said it runs the risk of someone seeing their older CPU still doing well against a new AMD chip.

You really need to stop reading into all this stuff on social media - even with the cheap chips, you run the risk of someone with that same CPU I mentioned looking at a 6C/12T R5 at £230, seeing even less of an improvement and still thinking AMD sucks.

Plus, you really need to stop pulling numbers out of the air.

My CPU cost £175 - you will soon see a GTX1080 can be pushed, and there are also enthusiasts targeting high-framerate 1080P gaming on 120Hz/144Hz monitors; even on forums now you see them with very powerful cards.

I know you are trying to push the argument that Ryzen=Intel in gaming, and as even kalniel pointed out, you run the risk of making slightly older Intel CPUs look perfectly good against what AMD has out now.

Plus, as I mentioned before, where were the forums moaning that the Athlon, Athlon XP and Athlon 64 were being tested at rubbish low resolutions and winning over the Pentium 4? I knew nobody running games at 640x480 on an Athlon XP or a Northwood P4 2.53GHz - I bought both FFS.

Edit!!

Plus, as you will also see, even a GTX1080 can be pushed at qHD in some games at max settings.

There are people who target 120Hz/144Hz gaming - and you are not going to get those framerates in a number of titles at qHD, even though my monitor only cost £250ish.

For that you would need a 1080P panel, and I have seen on forums that people have monster set-ups since they want high framerates.

I personally don't worry so much about dips, but I have a much bigger tolerance for such things than many people since I care less. But people do care, especially for certain online games, but also single player ones.

Second Edit!!

Plus there's this too - GTX1080 level performance will probably drop to sub £400 over the next year or so anyway, or even less. GTX980 cards started at well over £400 and eventually were as cheap as £250ish.

So you could even see that level of performance for under £300 if Vega is very good, given another six to nine months.

Plenty of people will wait to run more intensive titles anyway - so it's not beyond the realms of possibility for someone at 1080P to have GTX1080 level performance, and they might be running older, graphically intense titles which they could not run before.

Even for many years I only had a 1680x1050 monitor, and I had cards designed for 1080P resolution for a reasonable amount of that time.

Even before that I used to run games at lower than average resolution with a better than average card.

Why?

I wanted to max settings whilst having consistent graphics performance.

I have done that for years.
Posted by scaryjim - Fri 10 Mar 2017 13:37
CAT-THE-FIFTH
… YOU are pushing for 100% GPU limited testing and don't like the fact that I said it runs the risk of someone seeing their older CPU still doing well against a new AMD chip. …

Neither of those statements are true.

I've never said they shouldn't test at 1080p. I've said that 1080p gaming with a GTX 1080 isn't reflective of what the vast majority of gamers will experience. And I've only said that because you have been continually making a big issue out of the 1080p gaming tests like they're the only thing that matters (Seriously, review how many posts you've made about those tests vs posts about every other test). For real world relevance, tests at 1440p are far more likely to reflect the experience most gamers will receive. The 1080p tests in those circumstances are more-or-less synthetic benchmarks - they have the same relevance as PiFast and AIDA64 in that they give an indication of overall and comparative performance, but they're not real-world scenarios.

And I think someone who looks at a Ryzen 7 review and decides that because it's no faster in games it's no better than their 3-gen old i5 is probably right for their purposes. Read what I actually said. If all they care about is games and they're using a computer that cost £500 in total, a £500 Ryzen 7 makes no sense for them.

I don't actually want reviewers to test a £500 CPU with a £200 GPU - that makes no sense. But similarly I don't think testing at 1080p makes that much sense if they want to review something that's representative of what a consumer might buy. GTX 1080 reviews repeatedly say “this GPU is overkill for 1080p, if you're going to play at 1080p you may as well buy something cheaper”. But suddenly for CPU reviews everyone's testing a GTX 1080 at 1080p? *shrug* seems … inconsistent…
Posted by ik9000 - Fri 10 Mar 2017 14:05
scaryjim
Neither of those statements are true.

I've never said they shouldn't test at 1080p. I've said that 1080p gaming with a GTX 1080 isn't reflective of what the vast majority of gamers will experience. And I've only said that because you have been continually making a big issue out of the 1080p gaming tests like they're the only thing that matters (Seriously, review how many posts you've made about those tests vs posts about every other test). For real world relevance, tests at 1440p are far more likely to reflect the experience most gamers will receive. The 1080p tests in those circumstances are more-or-less synthetic benchmarks - they have the same relevance as PiFast and AIDA64 in that they give an indication of overall and comparative performance, but they're not real-world scenarios.

And I think someone who looks at a Ryzen 7 review and decides that because it's no faster in games it's no better than their 3-gen old i5 is probably right for their purposes. Read what I actually said. If all they care about is games and they're using a computer that cost £500 in total, a £500 Ryzen 7 makes no sense for them.

I don't actually want reviewers to test a £500 CPU with a £200 GPU - that makes no sense. But similarly I don't think testing at 1080p makes that much sense if they want to review something that's representative of what a consumer might buy. GTX 1080 reviews repeatedly say “this GPU is overkill for 1080p, if you're going to play at 1080p you may as well buy something cheaper”. But suddenly for CPU reviews everyone's testing a GTX 1080 at 1080p? *shrug* seems … inconsistent…

some of us aren't made of money… You buy the good card, keep the existing monitor. After a while you've saved enough for the new monitor. That while = years in my case. So when I upgrade my GPU it probably will be capable of more than 1080p, but necessity will keep me there for the time being.
Posted by scaryjim - Fri 10 Mar 2017 14:42
ik9000
… when i upgrade my GPU it probably will be capable of more than 1080p, but necessity will keep me there for the time being.

If you don't have the money to upgrade to a > 1080p (or > 60Hz) monitor how likely are you to be reading reviews for £500 CPUs with a view to buying them? How likely are you to be looking at a CPU upgrade targeted at gaming at all?

If you're upgrading a GPU now but keeping your old monitor, how likely is it that you've got a monitor with a refresh higher than 75Hz? You're most likely on 60Hz, so you literally won't be able to see the difference between 90fps and 110fps.

This is my point - most people will have a cheaper GPU or a higher resolution monitor. Of those who do have a GTX 1080 and a 1080p monitor, many will only have a 60Hz monitor anyway. The only people to whom the 1080p results are real world relevant are the ones who have both a GTX 1080 and a high refresh 1080p monitor. Those people do exist, but they're an extreme minority.
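The refresh-rate point is easy to sanity-check in a couple of lines (hypothetical `visible_fps` helper, just to illustrate):

```python
# A fixed-refresh panel (no adaptive sync) can show at most
# refresh_hz distinct frames per second, so any rendered fps
# above that is invisible to the viewer.
def visible_fps(rendered_fps, refresh_hz):
    return min(rendered_fps, refresh_hz)

print(visible_fps(90, 60))    # 60 - capped
print(visible_fps(110, 60))   # 60 - identical to 90fps on this panel
print(visible_fps(110, 144))  # 110 - a 144Hz panel shows the difference
```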
Posted by CAT-THE-FIFTH - Fri 10 Mar 2017 14:53
scaryjim
Neither of those statements are true.

I've never said they shouldn't test at 1080p. I've said that 1080p gaming with a GTX 1080 isn't reflective of what the vast majority of gamers will experience. And I've only said that because you have been continually making a big issue out of the 1080p gaming tests like they're the only thing that matters (Seriously, review how many posts you've made about those tests vs posts about every other test). For real world relevance, tests at 1440p are far more likely to reflect the experience most gamers will receive. The 1080p tests in those circumstances are more-or-less synthetic benchmarks - they have the same relevance as PiFast and AIDA64 in that they give an indication of overall and comparative performance, but they're not real-world scenarios.

And I think someone who looks at a Ryzen 7 review and decides that because it's no faster in games it's no better than their 3-gen old i5 is probably right for their purposes. Read what I actually said. If all they care about is games and they're using a computer that cost £500 in total, a £500 Ryzen 7 makes no sense for them.

I don't actually want reviewers to test a £500 CPU with a £200 GPU - that makes no sense. But similarly I don't think testing at 1080p makes that much sense if they want to review something that's representative of what a consumer might buy. GTX 1080 reviews repeatedly say “this GPU is overkill for 1080p, if you're going to play at 1080p you may as well buy something cheaper”. But suddenly for CPU reviews everyone's testing a GTX 1080 at 1080p? *shrug* seems … inconsistent…

YOU'RE the one pushing for them to test slower cards at those resolutions, which supports the whole notion of making all the tests GPU limited.

I am trying to explain to you why it's still important to do such testing with a faster card - plus please don't say it's something new.

Look at the Hexus review of the FX8350:

http://hexus.net/tech/reviews/cpu/46985-amd-fx-8350/?page=5

The GTX680 and HD7970 were the fastest cards at the time - there was no Geforce Titan out yet. Hexus used a GTX680 and tested at 1080P.

That is a £400 card. You could get qHD monitors back then too.

They tested at 1080P - it's been happening for years. I don't see why all of a sudden it's invalid just for this launch.

Even at 1080P with a GTX680, some games were GPU limited even then, and the same applies to 1080P testing with a GTX1080 today.

The only reason why there is this social media moaning is because AMD loses - if AMD suddenly catches up to Intel at 1080P using a GTX1080 with Windows patches and game patches, then not a single person on social media would be saying anything.

Plus Hexus didn't even use a Titan X at 1080P in their Ryzen 7 tests - they actually could have made AMD look worse if they wanted.

There's no point then testing any of the CPUs for gaming, and the problem is that the few games which show AMD doing worse at those resolutions, even with slower cards, will be the ones which favour Intel because they favour single core performance over multi-threading, so in the end it will actually be AMD that does worse out of it instead of Intel.

I found I was CPU limited mostly in titles which push high single core performance, not games which are reasonably well multi-threaded. Making the tests GPU limited will make AMD look worse - not Intel, because it will be those games with crappy engines which will hammer AMD the worst.

It's the same games where SKL/KBL show the biggest improvement in actual real world performance over even the older Intel CPUs.

You still need to understand people are also doing 120Hz/144Hz gaming at 1080P - look on forums and you'll see people with high end 1080P 120Hz/144Hz gaming monitors and expensive GPUs.

In the end you need to realise people will have faster than necessary cards at 1080P, and it's still important to test - you need to consider that a GTX1080 is not the fastest card out there. It's the Titan X and GTX1080Ti cards, and then when you include SLI/CrossFire setups it's even worse.

It's probably three to five tiers down the line. Even GTX1070 OC models are not always massively slower than a stock GTX1080 in a number of scenarios.

Now apparently Nvidia is refreshing their entire range of GTX1060, GTX1070 and GTX1080 cards with faster RAM, and if AMD prices Vega well, no doubt prices will drop.

For many years I overspecced my cards for my resolution, which typically was not even the native resolution of my monitor. Plenty of times I would not even run those 1080P cards at 1680x1050, I would run them at 1440x900 so I could whack up settings and be less GPU limited.

You also need to consider that testing in games is done for reproducibility, not always in the worst areas, especially with so many more games randomising various things, like NPC spawns, etc.

Plus that level of performance will start to get cheaper and cheaper anyway.

I am sorry but we will need to agree to disagree with this - we have diametric viewpoints on this and we are not going to change the way we think about this matter by arguing anymore.
Posted by ik9000 - Fri 10 Mar 2017 15:27
scaryjim
If you don't have the money to upgrade to a > 1080p (or > 60Hz) monitor how likely are you to be reading reviews for £500 CPUs with a view to buying them? How likely are you to be looking at a CPU upgrade targeted at gaming at all?

If you're upgrading a GPU now but keeping your old monitor, how likely is it that you've got a monitor with a refresh higher than 75Hz? You're most likely on 60Hz, so you literally won't be able to see the difference between 90fps and 110fps.

This is my point - most people will have a cheaper GPU or a higher resolution monitor. Of those who do have a GTX 1080 and a 1080p monitor, many will only have a 60Hz monitor anyway. The only people to whom the 1080p results are real world relevant are the ones who have both a GTX 1080 and a high refresh 1080p monitor. Those people do exist, but they're an extreme minority.

Say I have a pot of money I've built up over a few years to spend. I go for the parts that will make the most difference. I've wanted to upgrade the CPU for ages, but none for my MOBO are now available. Plus I'm hitting the limits of only having 2 SATA III ports. So it's 1) a new CPU and MOBO, 2) a new GPU, then with money left (or after saving), and only then, a monitor. Somewhere in that I'll work out if it's worth getting an NVMe drive, and where it fits into the scheme of things. Anyway, my current two monitors are fine for now. Though I can't wait to get something without PWM.

ps have I missed something? Who mentioned anything about more than 60Hz? My monitor can go higher, up to 85Hz I think, but meh, and its native is 60Hz IIRC.
Posted by Ozaron - Fri 10 Mar 2017 15:42
Hey, I know your debate (not argument; AFAIK neither of you has been crude about this… yet) is more about how testing should be done than about the results themselves, but I thought you might find this relevant.



100% is parity between 1800X and 6900K. Colour is GTX 1080Ti, grey is GTX 1070. In case you want it, here's the link.

I have a feeling that the topic you're debating over has no real “answer” to it, some people will want a result that feels tangible to real users at the cost of having consistency across all games due to GPU limitation, and some people will want as much GPU power available in the test to see how the CPU responds as the only variable. The fact that results in the real world, and results obtained via scientific method, are different - kind of nullifies the value of either of them, IMO. If a GTX 1080 at 1080p is the compromise / midway point then sure, 1080p IS still a popular resolution if we go by Steam stats (I guess nowhere else knows anything?) and the GTX 1080 isn't unpopular as a GPU, but the combination is unlikely to constitute more than 1% of Steam's userbase, so, not that great of a realistic setup if you ask me. Not that I have a better alternative.
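(For anyone curious, the normalisation behind that kind of chart is just a percentage ratio - the `relative_percent` helper and the fps numbers below are made up purely to illustrate it:)

```python
# Express one CPU's framerate as a percentage of another's;
# 100 means parity, below 100 the first chip trails.
def relative_percent(fps_a, fps_b):
    return round(100 * fps_a / fps_b, 1)

print(relative_percent(95, 100))  # 95.0 -> 5% behind
print(relative_percent(60, 60))   # 100.0 -> parity
```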

Excuse me, carry on.
Posted by CAT-THE-FIFTH - Fri 10 Mar 2017 15:47
My prediction is that once Ryzen gets some Windows and game patches, and better use of high speed memory with better motherboards, it will look better at 1080P, and all of a sudden all the moaning on social media about how the media is trying to screw poor AMD over will vanish, especially if AMD beats Intel - and nobody will care if reviewers use quad GTX1080Ti cards.

Like I said, in the Athlon, Athlon XP and Athlon 64 days, despite the Wintel alliance being at its peak, AMD having issues with motherboards, not enough companies making AMD chipsets, etc., AMD still trashed Intel in almost everything and nobody cared.

Now everyone is so scared AMD won't do well enough with Ryzen, and that it will lead to another 5 years of Intel screwing over enthusiasts, that they want to make sure every test is tilted towards AMD. AMD chose to launch Ryzen in its current state, not anyone else, so instead of social media trying to force reviewers to sing to their tune, maybe they need to ask why AMD launched it the way it is??

Plus it makes me wonder about this concerted effort on social media to attack reviewers, or even that rumour about Intel trying to warp reviews before the launch (which I thought might be true, since I got caught up in the PR) - these don't help Intel, they help AMD more, by hiding the fact that gaming benchmarks are not ALWAYS 100% in their favour, despite many reviewers saying Ryzen is fine but Intel has the edge in certain scenarios.

But OFC the scenarios where AMD is doing better than Intel in gaming are totally not to be questioned!! :rolleyes:

I've got to hand it to AMD, their marketing has gotten better - shame they still can't launch things to save their life, but I still live in hope.

LOL,I feel like I am some Intel fanboi now - I never thought I would be actually trying to defend Intel. Jeez!!

Their stupid market segmentation alone makes me want to write a whole diatribe about it.
Posted by mapesdhs - Fri 10 Mar 2017 21:43
scaryjim
Well, it's wild speculation on my part, so it'd be a brave journalist who made it a selling point of an article

I don't know about a selling point, but certainly a theory worthy of mention IMO.


scaryjim
OTOH if you look at AMD's launch videos, they talk a lot about prosumer and creative workloads, then say "Oh, but we know you're

That's why I was surprised to see so few PCIe slots on the available AM4 boards, and a rather low max RAM of 64GB. Is this really so appealing to prosumers? Some perhaps, I guess (especially for tasks that don't benefit from multiple GPUs), but for many it's not that attractive, After Effects being but one example. The 8 cores are good of course, but not being able to scale the RAM to match (beyond what X79 already offers), or fit multiple PCIe devices, means in many cases this isn't a platform I'd recommend for a powerhouse build, though it's certainly pretty good as a single stepping stone to something more potent, given Intel's pricing, but perhaps not for someone who already has a decent X79 setup (hopefully Naples will fit that bill).


scaryjim
well, so AMD really needs to be getting the most out of their single threaded performance. I suspect that means we'll be waiting a while

I just hope that, aside from any IPC tweaks they can do in the meantime, they can get the clocks up as well. The per-core thermal headroom won't be an issue; it's just down to whether the design can handle it.



kalniel
I think it's more that the professional market is worth a lot more in terms of revenue - they know they don't have long to steal market share from Intel so hit with the best return as soon as possible.

Occurs to me that even though this platform isn't suitable for all pro markets, if AMD can pull in just a few then that on its own could be a good PR springboard which would serve them well for when Naples arrives.



CAT-THE-FIFTH
Well I had an XP 2800(Barton core) …

Hehe, me too! It was a system I built for my brother; then I changed it to a 3400+. Meanwhile, my own initial system went from the dual-P4 Dell to a 6000+ on a low-cost but surprisingly good AM2 board (Asrock M2NF3-VSTA).


CAT-THE-FIFTH
Athlon 64 came and blew the back doors of the P4 off!!

Very overclockable too. I set my brother's 3400+ to 2.7GHz.


CAT-THE-FIFTH
Regarding the RAM,certain titles,could actually see a decent uplift from RAM being run at 3000MHZ instead of 2133MHZ

Quite a jump in overall RAM clock, so yes I can imagine that would be the case for some games, etc.

I remember way back, tech sites would occasionally do a round-up review of the current most popular games, show how their performance varied depending on RAM speed, PCIe link speed, etc. Of course some of this is dependent on CPU/GPU issues, but even so, it was interesting to note which titles were not much affected by slower RAM or narrower PCIe link widths, vs. others that slowed down a lot (seem to recall FSX was particularly badly hit by running over a slower PCIe link, something about a badly coded engine which constantly reloads texture data when it really shouldn't need to).


CAT-THE-FIFTH
Also,Crysis was awesome - even without mods,and with mods looks awesome even now. …

I'm actually playing through it again atm. 8)


CAT-THE-FIFTH
It was the last time I actually got excited about a game to justify building a new rig for(also it was for work related stuff too).

When it first came out I was put off for the longest while just because the hype for it was so OTT. When I finally ended up trying it though (the DVD was only 3 UKP used off eBay), I was so totally and completely hooked (only started as a way of showing my brother my new 980 upgrade) that I played the entire game right the way through in a single session, about 18 hours. :D


The hype, I rapidly conceded, was justified. :D:D

The only game that's motivated me to buy any peripheral extras though is Elite Dangerous, for which I bought a throttle, stick, gamepad and custom surround setup:

http://www.sgidepot.co.uk/misc/P1020773s.jpg

and a decent chair:

http://www.sgidepot.co.uk/misc/P1070960s.jpg

:D

It's why I'm interested in the 1080 Ti: move up to UHD but still keep it above 60Hz with high/max details (though perhaps with AA turned off - I might not need it at such a high pixel density). My 980 can't do this atm (45fps at 1920x1200) but that's partly because of a VRAM limit due to SMAA (IIRC), something which may go away if I find I don't need so much AA (which would then free up the performance). We shall see! Not decided on which monitor yet though.



DanceswithUnix
It seems to me that games and drivers are tuned for popular resolutions,

Reminds me of the way Fury X didn't look so great at 1080p, but then a lot better at 4K. AMD said they were aiming it at 4K (despite the weirdly low RAM capacity) and at least for a while the results bore this out, until newer NV tech pushed it aside.



CAT-THE-FIFTH
We need to be careful getting too much into reading social media,everybody

Hmm, good point, but then everyone imposes their own pressures of what they want tested, for whatever reason. This is why I spend time testing newer GPUs with older CPUs (and vice versa): most tech sites don't bother, even though it's perhaps a far more common question - people with AM2/S775 setups asking about modern GPUs, or people contemplating a platform upgrade but keeping the GPU.

Ian.
Posted by watercooled - Sat 11 Mar 2017 18:06
Hmm, well hopefully a HEXUS reviewer can answer this, but it seems something is wrong with Ryzen's temperature monitoring. I've just finished setting up my 1700X system - all OK so far, apart from the bizarre temp readings: software (and the BIOS) report about 60C at idle, which I do not trust at all. I can literally touch the edge of the CPU heatspreader and it's not even warm.

If it were just a number, I could ignore it - the problem is that the fan speed control relies on this temperature, so unless I set it to manual, the fan runs flat out almost permanently.
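To illustrate why a bogus reading wrecks auto fan control, here's a rough sketch of the sort of linear fan curve an auto mode applies (the curve points are made up for illustration, not Gigabyte's actual values):

```python
# Hypothetical auto fan curve: (temperature C, fan duty %) points.
# These values are illustrative only, not Gigabyte's real curve.
def fan_duty(temp_c, curve=((30, 20), (50, 50), (70, 100))):
    """Linearly interpolate fan duty (%) from a (temp, duty) curve."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

print(fan_duty(30))  # 20 - a true ~30C idle sits at the quiet end
print(fan_duty(60))  # 75.0 - a sensor misreporting 60C keeps the fan roaring
```

So with the sensor stuck reporting 60C, the board thinks the CPU is permanently warm and pins the fan high - exactly the behaviour I'm seeing.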

The load temps are also weird - even under super-heavy load it barely touches 80C, just 20C over idle. That temperature delta could be fine for a soldered CPU, but it still seems to be offset from the actual temperature. I can feel the warm heatspreader and heatpipes under load, but if anything it seems cooler than my 1055T - which again isn't unusual, as they're both similar-TDP (95W) CPUs.

IMO this rules out the possibility of a faulty thermal interface between die and IHS, as it seems to be fine under load. A 40C idle temperature delta (from ambient) would pretty much imply the heatspreader was floating above the die.

And looking around the web, this seems to be exactly what a number of people are experiencing - I wonder if it's BIOS-related? I'm using the F2 BIOS on a Gigabyte AB350M-Gaming 3.
Posted by CAT-THE-FIFTH - Sat 11 Mar 2017 18:12
Gigabyte seems to be a bit slow on the BIOS updates - looking at some other threads, MSI and ASRock seem to be pushing out more releases. The F2 release is dated before the release of Ryzen itself, so I would probably contact Gigabyte and ask whether a new BIOS version is being released soon.
Posted by watercooled - Sat 11 Mar 2017 18:24
Yep, just finished sending a ticket to them. I'm reassured by the other similar reports on forums as I panicked when I initially saw the number. I just wish I knew the *real* load temperature so I could set the maximum fan speed accordingly.

Does it seem to be a problem specific to Gigabyte?
Posted by CAT-THE-FIFTH - Sat 11 Mar 2017 18:50
watercooled
Yep, just finished sending a ticket to them. I'm reassured by the other similar reports on forums as I panicked when I initially saw the number. I just wish I knew the *real* load temperature so I could set the maximum fan speed accordingly.

Does it seem to be a problem specific to Gigabyte?

I did hear reports of people seeing 2.2GHz idle clockspeeds, so what are your idle clocks? If they are that high, it might be that.
Posted by watercooled - Sat 11 Mar 2017 18:54
Yeah my idle clocks are also fairly high, around 2.11 GHz. Do you know any more about this/resolutions?

I still don't think that explains the temperatures though, I just don't think they're accurate. Perhaps they need correcting with an offset.
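If it does turn out to be a fixed offset, correcting it in software would be trivial - something like this, where the 20C figure is purely my guess for illustration, not a confirmed number:

```python
# Guessed correction: assume the sensor reports tCtl = tDie + a fixed offset.
# The 20C value is an assumption for illustration, not an official figure.
ASSUMED_OFFSET_C = 20.0

def estimated_die_temp(reported_c, offset_c=ASSUMED_OFFSET_C):
    """Subtract an assumed fixed offset from the reported temperature."""
    return reported_c - offset_c

print(estimated_die_temp(60.0))  # 40.0 - still warm for idle, but plausible
print(estimated_die_temp(80.0))  # 60.0 - a sane heavy-load figure
```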
Posted by CAT-THE-FIFTH - Sat 11 Mar 2017 19:00
watercooled
Yeah my idle clocks are also fairly high, around 2.11 GHz. Do you know any more about this/resolutions?

I still don't think that explains the temperatures though, I just don't think they're accurate. Perhaps they need correcting with an offset.

It will contribute to it though which is not helpful,although you are not alone:

https://forums.overclockers.co.uk/threads/i-actually-had-a-faulty-ryzen-1700x-i-think-50-to-60c-idle-temp.18771900/

Somebody else on OcUK forums had a similar problem too.
Posted by watercooled - Sat 11 Mar 2017 19:04
Do you know what the normal idle clocks are supposed to be? I just assumed it might have high-ish idle clocks due to clock gating.
Posted by CAT-THE-FIFTH - Sat 11 Mar 2017 19:09
watercooled
Do you know what the normal idle clocks are supposed to be? I just assumed it might have high-ish idle clocks due to clock gating.

Not sure ATM - but it's all a bit hit and miss.

TBH, as long as the CPU is not exceeding its safe temperature I probably would not worry too much.

I still remember a mate who had his Athlon II X3 unlocked to a Phenom II X4 with 6MB of L3 cache, didn't bother to use a better heatsink than the Athlon II stock cooler for some reason, and it ran at a stupidly high temperature for years!! :p
Posted by watercooled - Sat 11 Mar 2017 19:11
Ughh. Early adopter issues I guess…

Well hopefully it gets resolved sooner than later! I'll keep you posted.
Posted by watercooled - Sat 11 Mar 2017 19:18
Just found this screenshot of AMD's own OC tool: http://core0.staticworld.net/images/article/2017/03/ryzen-master-5-100712478-orig.jpg

The temps and idle clock speeds are about the same there, too. I might drop AMD an email too, see what they have to say about it.
Posted by DanceswithUnix - Sun 12 Mar 2017 09:18
The rather old 65W quad-core APU in my home server idles at 1.7GHz, so for a better architecture on a more modern process, 2.1GHz is less than a 25% increase and sounds reasonable.
Posted by watercooled - Sun 12 Mar 2017 10:32
Yeah that part didn't really concern me, what with power gating etc. I just wonder if it's normal for that processor.

The temperature readings are so far off that I wonder why no reviewers sanity-checked them. Some setups seem to show sensible readings (like Hexus), but how can an experienced reviewer see 60C at idle for a soldered die and think everything is correct, then complain about the chip being ‘hot-running’ because it rises by 20C under load? TBH I suspect even the lower values are off, and AMD's reported CPU temperatures are inaccurate yet again.

Another thing, which TBH isn't that unusual, is that the vcore reported by HWMonitor is insanely high, like over 2V (again, not plausible, as at-the-wall power consumption is fine). And the fan speed doesn't match what is reported in the BIOS.

Come on Gigabyte - waiting on you now!
Posted by CAT-THE-FIFTH - Sun 12 Mar 2017 11:06
DanceswithUnix
The rather old 65W quad-core APU in my home server idles at 1.7GHz, so for a better architecture on a more modern process, 2.1GHz is less than a 25% increase and sounds reasonable.

The R7 1700 idles at 1.5GHZ:

http://www.legitreviews.com/amd-ryzen-7-1700-overclocking-best-ryzen-processor_192191/10




You need to consider that the R7 1800X is consuming more power at idle than a Core i7 6900K alone, since it is an SoC (the Core i7 6900K's platform power is also shared with a larger motherboard chipset), and the chip is nearly 15% smaller too, so the cooler has to dissipate more heat over a smaller area.

The FX CPUs are over 50% larger than a Ryzen chip and also have a more traditional platform layout.
Posted by watercooled - Sun 12 Mar 2017 11:27
You're not talking anywhere close to the sort of power consumption/density that would cause a CPU to reach 40C over ambient though - the readings are just plain wrong, unless the IHS TIM is faulty/cracked (which I doubt because the CPU survives without throttling under heavy load).

My 1055T, also with a soldered die, 95W TDP, *no* clock gating, much higher idle power consumption at the 8pin, in the same case, with the same cooler, didn't even hit 60C *under load*. Even at idle I could feel the 1055T was lukewarm, the 1700X is just cold.
Posted by CAT-THE-FIFTH - Sun 12 Mar 2017 11:36
watercooled
You're not talking anywhere close to the sort of power consumption/density that would cause a CPU to reach 40C over ambient though - the readings are just plain wrong, unless the IHS TIM is faulty/cracked (which I doubt because the CPU survives without throttling under heavy load).

My 1055T, also with a soldered die, 95W TDP, *no* clock gating, much higher idle power consumption at the 8pin, in the same case, with the same cooler, didn't even hit 60C *under load*. Even at idle I could feel the 1055T was lukewarm, the 1700X is just cold.

The Phenom II X6 die was around 300mm2 too, so again Ryzen's is much smaller, and you have static chipset functions onboard the Ryzen CPU as well. Remember, the Phenom II X6 figures are shared with a much more functional chipset which consumes a larger percentage of the idle power.

Ryzen is around 190mm2, so even if the motherboard is not reporting things right, Ryzen is still probably going to run hotter with similar cooling - and the Ryzen stock coolers seem to have a larger fan than the older AMD ones.

You also need to remember that link I posted yesterday - someone with an R7 1700X and a better-than-stock cooler had the same issues as you.

They didn't with the R7 1700.

Their R7 1700X was significantly hotter running than their R7 1700 sample too.

It was on an MSI motherboard.

So,my viewpoint is not to worry as I said before,unless the CPU is going past its safe temperature.
Posted by DanceswithUnix - Sun 12 Mar 2017 12:04
CAT-THE-FIFTH
You need to consider that the R7 1800X is consuming more power at idle than a Core i7 6900K alone, since it is an SoC (the Core i7 6900K's platform power is also shared with a larger motherboard chipset), and the chip is nearly 15% smaller too, so the cooler has to dissipate more heat over a smaller area.

Those power figures must be from the wall system power, so SOC issues wash out. You are thinking too hard on this one.

Look at the AM4 motherboard used in that review, just the LEDs on it are probably using 5W. Each chip on that board requires power, and £260 buys you a lot of chips.

I would love to see what the idle power is like on an A300 board, and I would also like to see AMD avoid shooting themselves in the foot with their motherboard choice as AIUI they are the ones sending boards out for review and the most sparkly one is not necessarily the best, unless you are 12 yrs old or in marketing :D
Posted by CAT-THE-FIFTH - Sun 12 Mar 2017 12:19
DanceswithUnix
Those power figures must be from the wall system power, so SOC issues wash out. You are thinking too hard on this one.

Look at the AM4 motherboard used in that review, just the LEDs on it are probably using 5W. Each chip on that board requires power, and £260 buys you a lot of chips.

I would love to see what the idle power is like on an A300 board, and I would also like to see AMD avoid shooting themselves in the foot with their motherboard choice as AIUI they are the ones sending boards out for review and the most sparkly one is not necessarily the best, unless you are 12 yrs old or in marketing :D

Why would it wash out? The chip in an i7 6900K is 230-ish mm2 while the R7 series is around 190mm2, so the Intel die is around 20% larger, and the X99 motherboards have a much larger chipset too. You know very well that the AMD AM4 platform chipset really does not do that much. So when the Intel CPU is 20% larger and 99% of the chipset functionality sits under the same cooler, unlike with X99, of course the Ryzen is going to run hotter.

A lower percentage of the idle and load power is dissipated from the CPU on the X99 platform.

What is the likelihood that, if Intel pushed Broadwell-D onto the desktop and pushed up the clockspeeds, it would start to show the same issues?

I am not sure why you are trying to deny this.

The R7 1700 also seems to be idling at closer to 1.5GHz instead of 2.2GHz too.

Plus you also need to accept that GF 14nm is way past its optimal performance/watt once you go past 3GHz - look on OcUK. Somebody with both an R7 1700 and an R7 1700X in the same MSI motherboard, using both the R7 1700 stock cooler and an H110i, noted that their R7 1700X sample was significantly hotter at idle.

Even if the motherboards are over-reporting stuff, that is two different motherboards with the R7 1700X showing the same thing now.

The R7 1700X and R7 1800X are pushed closer to the 4.0GHz/4.1GHz edge than the R7 1700, so I am not surprised it will be worse than an R7 1700.

Edit!!

The Phenom II X6 had a 346mm2 die and the FX8350 a 315mm2 die - compare that to roughly 190mm2 for a Ryzen R7, and that is with 90% of the chipset functionality onboard.

Plus people get too worried about temperatures - my IB Core i7 with a low-profile cooler used to run very hot too, and even with an AIO water cooler idle is not very low; it can easily be in the low 40s, and that is a 69W TDP chip with an OTT cooler and the IGP disabled at the factory.

My Q6600 overclocked in a Shuttle was not that great either and that was on a 975X chipset and those ran hot.
Posted by watercooled - Sun 12 Mar 2017 12:33
CAT-THE-FIFTH
So,my viewpoint is not to worry as I said before,unless the CPU is going past its safe temperature.

I wouldn't, if it wasn't messing with the fan speeds. The 212's fan is *loud* at 100%.

I know what you're saying about possibly marginally higher idle consumption of the 1700X vs 1700 - it's most definitely not enough to cause it to idle so hot, and just to re-iterate, I can *literally touch* the heatspreader with the cooler attached - it's *cold*. Bear in mind that 40C would be quite warm on skin, 50C is uncomfortably hot, 60C enough for reflex to pull hand away. The IHS is as cold as any other metal object in the room, and not more than about 30C. Even Intel's problematic Haswell IHS didn't have an idle temperature delta of 40C - remember Ryzen is soldered.

Either it's coincidental, or the temperature bug is somehow more pronounced on certain models. Just to add to that, I can unplug the CPU fan, even after sitting for a while it doesn't get warm and the reported value doesn't move - that just doesn't add up if a CPU is somehow burning through enough power at idle to cause a 40C temperature delta (it's not).

A faulty thermal path seems very unlikely, but is still something that concerns me, so I hope a BIOS update can reassure me.
Posted by CAT-THE-FIFTH - Sun 12 Mar 2017 12:42
watercooled
I wouldn't, if it wasn't messing with the fan speeds. The 212's fan is *loud* at 100%.

I know what you're saying about possibly marginally higher idle consumption of the 1700X vs 1700 - it's most definitely not enough to cause it to idle so hot, and just to re-iterate, I can *literally touch* the heatspreader with the cooler attached - it's *cold*. Bear in mind that 40C would be quite warm on skin, 50C is uncomfortably hot, 60C enough for reflex to pull hand away. The IHS is as cold as any other metal object in the room, and not more than about 30C. Even Intel's problematic Haswell IHS didn't have an idle temperature delta of 40C - remember Ryzen is soldered.

Either it's coincidental, or the temperature bug is somehow more pronounced on certain models. Just to add to that, I can unplug the CPU fan, even after sitting for a while it doesn't get warm and the reported value doesn't move - that just doesn't add up if a CPU is somehow burning through enough power at idle to cause a 40C temperature delta (it's not).

A faulty thermal path seems very unlikely, but is still something that concerns me, so I hope a BIOS update can reassure me.

Like I said on OcUK the R7 1700X seemed to be much worse than the R7 1700 at idle. They had an H110i which is a 280mm rad AIO water cooler. Their R7 1700X sample was hitting over 50C at idle and their R7 1700 sample was just over 30C.

In fact I read another reply in that thread:

I think its normal with the X? I have 110I but changed the fans to Noctua NF-A14 3000. If you have on auto it will go up and down all the time from 3.4 - 3.8GHz -+ Volts. Mine 1800X have same temp idle 53-60, clocks goes from 3.6-4.1 on 2 cores. When I do nothing. Have ca 60-65C when I game. But have seen someone says the new beta bios for CH6 shows 15+ degrees too much. There is no warm flow from the radiator soo something is up :p And 1700 have - 15-20C just beacuse its a none X.

It seems to be the X versions have much higher idle temperatures. He is talking about an Asus motherboard.

So its being seen with three different motherboard companies and the R7 1700 seems to be fine.
Posted by Xlucine - Sun 12 Mar 2017 13:13
Idling at 1.5GHz is quite high - my 4690K drops to 750MHz under low load. With AMD's fancy auto-overclocking gubbins I'm surprised it doesn't drop the clockspeed lower at idle
Posted by watercooled - Sun 12 Mar 2017 13:40
CAT-THE-FIFTH
Like I said on OcUK the R7 1700X seemed to be much worse than the R7 1700 at idle. They had an H110i which is a 280mm rad AIO water cooler. Their R7 1700X sample was hitting over 50C at idle and their R7 1700 sample was just over 30C.

Can I just say again - my 1700X is not even warm - I can feel it, first hand (literally) - I'm not relying on software readings, calculations, arbitrary offsets or anything. The heatspreader is absolutely, certainly, definitely *not* hot. And like I also said earlier, I can unplug the fan and reported idle temps don't move (and everything, IHS, heatpipes, HS fins, stay cool to touch) - this has absolutely nothing to do with cooler performance.

The only *possible* way the die could be 60C whilst the heatspreader is 20-something would be a serious fault between the die and the IHS, and a fault of that severity under thermal load of a few watts at idle would seriously struggle under heavy load where the CPU is dissipating around 100W.

Idle temps being higher on the X doesn't make complete sense to me anyway - the uncore power should be roughly the same, and the cores/cache will be mostly gated off, but lets just say the higher idle clocks/voltage are causing higher power draw, you're talking a few watts difference, and almost certainly less than double the total idle power of the 1700, so it still fails a sanity check to think the reported temps on the X are anywhere close to correct.

Do you really believe that a soldered-die CPU could reach 50C at idle under a very high performance AIO cooler, and/or that a few extra watts from a higher clocked version would cause 20C higher temps at idle?

@Xlucine - idle clocks are peculiar but not really an issue in and of themselves provided it's by design - modern CPUs like Ryzen use power gating to basically switch off inactive parts of the die anyway. You can see when people lock the clocks of modern Intel CPUs, the idle power consumption barely moves for this reason.
Posted by CAT-THE-FIFTH - Sun 12 Mar 2017 14:11
watercooled
Can I just say again - my 1700X is not even warm - I can feel it, first hand (literally) - I'm not relying on software readings, calculations, arbitrary offsets or anything. The heatspreader is absolutely, certainly, definitely *not* hot. And like I also said earlier, I can unplug the fan and reported idle temps don't move (and everything, IHS, heatpipes, HS fins, stay cool to touch) - this has absolutely nothing to do with cooler performance.

The only *possible* way the die could be 60C whilst the heatspreader is 20-something would be a serious fault between the die and the IHS, and a fault of that severity under thermal load of a few watts at idle would seriously struggle under heavy load where the CPU is dissipating around 100W.

Idle temps being higher on the X doesn't make complete sense to me anyway - the uncore power should be roughly the same, and the cores/cache will be mostly gated off, but lets just say the higher idle clocks/voltage are causing higher power draw, you're talking a few watts difference, and almost certainly less than double the total idle power of the 1700, so it still fails a sanity check to think the reported temps on the X are anywhere close to correct.

Do you really believe that a soldered-die CPU could reach 50C at idle under a very high performance AIO cooler, and/or that a few extra watts from a higher clocked version would cause 20C higher temps at idle?

@Xlucine - idle clocks are peculiar but not really an issue in and of themselves provided it's by design - modern CPUs like Ryzen use power gating to basically switch off inactive parts of the die anyway. You can see when people lock the clocks of modern Intel CPUs, the idle power consumption barely moves for this reason.

Well there's no point arguing with me - if you have an issue with those two people, who own BOTH an R7 1700X/1800X and an R7 1700 and say in BOTH cases their R7 1700 samples run much cooler, even when using AIO water coolers, then moan at them. Jeez.

Here read the thread:

https://forums.overclockers.co.uk/threads/i-actually-had-a-faulty-ryzen-1700x-i-think-50-to-60c-idle-temp.18771900/

You have a Gigabyte motherboard, the first person has an MSI one, and the other hinted at an Asus one.

Hi,

Setup is MSI B350 Tomahawk and Corsair H110i.

Upon first boot it was sat at 55c in the bios, I thought it may be a buggy BIOS. But I booted into Windows (on my old install) and saw it was now at 60c in the AMD Ryzen utlity.

Remounted the H110i twice with new paste, no luck, identical temps. Putting it under load was impossible as it hit 70c+ pretty much instantly. It was also very loud.

At this point I really am scratching my head wondering how the H110i could be faulty, wondered if the pump wasn't working etc. I was thinking maybe I could have mounted it wrong, but it really is very very simple to install on the AM4 socket. I saw other people with the B350 Tomahawk reporting normal temps, so I ruled that out.

So I picked up a Ryzen 1700 with the stock cooler.

Firstly I fitted the stock cooler with the 1700x… 62c to 69c at idle! at this point I knew it had to be the CPU.

Fitted the 1700 with the H110i, 31c idle…

So yeah I am RMA'ing the CPU. I have built 100's if not 1000's of PC's and I've never had a faulty CPU. Has anyone else had a faulty Ryzen CPU?

Hope this helps if anyone else has the same issue with stupidly high temps!

If it's an issue, send the chip back and say it's faulty.

I think its normal with the X? I have 110I but changed the fans to Noctua NF-A14 3000. If you have on auto it will go up and down all the time from 3.4 - 3.8GHz -+ Volts. Mine 1800X have same temp idle 53-60, clocks goes from 3.6-4.1 on 2 cores. When I do nothing. Have ca 60-65C when I game. But have seen someone says the new beta bios for CH6 shows 15+ degrees too much. There is no warm flow from the radiator soo something is up :p And 1700 have - 15-20C just beacuse its a none X.

Stop trying to blame me for your issue and ignoring that link I posted earlier.

Three different motherboards from three different companies,three R7 1700X/1800X samples and three different people seeing the same issue with the R7 1700X/1800X.

Honestly if it is causing an issue,send the whole lot back and wait another month or so,or just bypass the motherboard fan controller with one of these:

https://www.overclockers.co.uk/zalman-fan-mate-2-fan-controller-oa-000-za.html

I mean even though I was sold on upgrading to socket 1155 at launch I waited a few months,and I avoided the whole B2 chipset issue.

Edit!!

Plus you can always get one of these to manually check the temperatures:

https://www.amazon.co.uk/d/Health-Personal-Care/Non-Contact-Thermometer-Temperature-Measurements-Diagnostics/B00AYUHSM4

That way you will probably get a better indication of what temperatures you are seeing in reality.

Edit!!

Also one more thing - temperature is not always a good indication of how much heat is actually being produced. My IB CPU with a low-profile cooler ran hot, but the heatsink wasn't that hot either.

Compare that to my Q6600, which was high VID and not only ran hot but kicked out a fair amount of heat too, especially if it was overvolted.

In the end, if you don't want to send it back, the only way is to get an infrared thermometer gun, calibrate it and see what temperatures are actually being reached.
Posted by watercooled - Sun 12 Mar 2017 14:33
I'm not blaming you or ignoring anything you said. You're not acknowledging that you're confusing “running much cooler” with “reporting lower temperatures”. You do not physically have a system in your possession, I do - I can feel the heatspreader with my very own hands, and a k-type thermocouple. The CPU case *is not 60 degrees Celsius*, I don't know how much clearer I can possibly make that.

It does not make sense that every 1700X is faulty with regard to thermal path - these CPUs are reporting idle temps higher than load temps of 1700 - how is that even remotely logical? It is not sensible to conclude that fractionally higher idle power draw can account for the substantially higher reported idle temps.

It seems to be pretty well accepted on Reddit that reported temps are not close to being reliable. E.g. https://www.reddit.com/r/Amd/comments/5yy6vl/safe_to_assume_ryzen_master_and_others_are/

I'm not being defensive or ignorant - I'm reporting on plain and simple facts right in front of me. TBH if I get nowhere with Gigabyte support I am tempted to return and go with another platform for now as these issues are frustrating me and I don't have the time to waste on trying to solve them, nor to faff about with replacement CPUs or motherboards.

If I had a way of being completely sure that the CPU *die* is running at acceptable temps, I would just connect the CPU fan to my fan controller and be done with it. That's what's bothering me - the die temperature is an unknown. The only two logical explanations for this situation are:
a) Reported temps are just plain wrong
b) Broken IHS TIM.

But b) seems unlikely given the number of people reporting the exact same problem.
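To put rough numbers on why (a) is far more likely, using Q = ΔT / R_th with assumed ballpark figures (not measurements):

```python
# Back-of-envelope check using Q = dT / R_th.
# Both figures below are assumed ballpark values, not measurements.
idle_power_w = 5.0    # assumed die power at idle
r_die_to_ihs = 0.2    # assumed C/W for a healthy soldered die-to-IHS joint

# Expected die-to-heatspreader delta at idle with a good solder joint:
delta_t = idle_power_w * r_die_to_ihs
print(delta_t)  # 1.0 - nowhere near the ~40C the readings imply

# Working backwards, a 40C delta at ~5W needs an absurd thermal resistance:
implied_r = 40.0 / idle_power_w
print(implied_r)  # 8.0 C/W - the heatspreader would be barely attached
```

And a joint that bad would cook the die the moment it tried to push ~100W under load, which it demonstrably doesn't.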
Posted by CAT-THE-FIFTH - Sun 12 Mar 2017 14:38
Going further from my previous post - Bagnaj97 had a very hot running Q9300 - we were perplexed why it was doing that.

Only when he went to upgrade his system did he notice the IHS was not flat.

watercooled
I'm not blaming you or ignoring anything you said. You're not acknowledging that you're confusing “running much cooler” with “reporting lower temperatures”. You do not physically have a system in your possession, I do - I can feel the heatspreader with my very own hands, and a k-type thermocouple. The CPU case *is not 60 degrees Celsius*, I don't know how much clearer I can possibly make that.

It does not make sense that every 1700X is faulty with regard to thermal path - these CPUs are reporting idle temps higher than load temps of 1700 - how is that even remotely logical? It is not sensible to conclude that fractionally higher idle power draw can account for the substantially higher reported idle temps.

It seems to be pretty well accepted on Reddit that reported temps are not close to being reliable. E.g. https://www.reddit.com/r/Amd/comments/5yy6vl/safe_to_assume_ryzen_master_and_others_are/

I'm not being defensive or ignorant - I'm reporting on plain and simple facts right in front of me. TBH if I get nowhere with Gigabyte support I am tempted to return and go with another platform for now as these issues are frustrating me and I don't have the time to waste on trying to solve them, nor to faff about with replacement CPUs or motherboards.

If I had a way of being completely sure that the CPU *die* is running at acceptable temps, I would just connect the CPU fan to my fan controller and be done with it. That's what's bothering me - the die temperature is an unknown. The only two logical explanations for this situation are:
a) Reported temps are just plain wrong
b) Broken IHS TIM.

But b) seems unlikely given the number of people reporting the exact same problem.

That's the point - two other people have noted the same issues with their R7 1700X/1800X CPUs on different motherboards, and the R7 1700s they have show no issues.

Look at the R7 1700 idle clocks - the R7 1700X/1800X seem to idle at 50% higher clocks. Even on XS forums they tested the R7 1700 and 1700X and found that, at certain voltages, the R7 1700 actually drew less power.

The whole point is that I did tell you the R7 1700 was the better bet going forward. Unless you really need the highest clockspeeds possible, the R7 1700 is close enough, draws less power and seems to have fewer issues out of the starting blocks.
Posted by DanceswithUnix - Sun 12 Mar 2017 14:51
watercooled
It does not make sense that every 1700X is faulty with regard to thermal path - these CPUs are reporting idle temps higher than load temps of 1700 - how is that even remotely logical? It is not sensible to conclude that fractionally higher idle power draw can account for the substantially higher reported idle temps.

Is that the temperature reported in the hardware monitoring screen in the BIOS?

I'm out of touch with the circuitry design, but there used to be components on the motherboard that the CPU thermistors were referenced against, and if you didn't know their values you didn't know the offset and scaling to apply to the ADC reading, so you got iffy results. The BIOS is the only software on the planet that, by residing on the same PCB, is tied to knowing how your thermal circuitry works. If the BIOS screen reads wrong, I would raise a bug report against Gigabyte.
Posted by CAT-THE-FIFTH - Sun 12 Mar 2017 14:55
watercooled
I'm not blaming you or ignoring anything you said. You're not acknowledging that you're confusing “running much cooler” with “reporting lower temperatures”. You do not physically have a system in your possession, I do - I can feel the heatspreader with my very own hands, and a k-type thermocouple. The CPU case *is not 60 degrees Celsius*, I don't know how much clearer I can possibly make that.

Because you are just arguing with me for no reason, blaming me for YOUR issue - what has this launch done to people??

The moment I pointed out TWO other people had the same problems, and their R7 1700 samples did not, you are trying to twist the argument to fight with me.

The R7 1700X/1800X have issues, the R7 1700 does not. Do you honestly think high temperatures are a big deal for me? 11 years of owning highish-performance SFF rigs, and so what? Do you think feeling a heatsink means anything?? I had SB/IB CPUs run hot since I had small cases - the CPU coolers were not that warm.

Plus even you can see the R7 1700 seems to consume less power, idle at lower clocks and run at lower voltage. Have you not even considered that the R7 1700X/1800X are overvolted to reach the higher clockspeeds? Look at what happens to the R7 1700 when it starts getting bumped up to near R7 1700X/1800X speeds.

Just because the review samples are closer, we don't know what the real difference is on average, especially with XFR.

We can see from testing that Ryzen is much more efficient at lower clockspeeds.

Compare that to my high-VID Q6600, which was at the upper end of the scale for G0 samples - it was more like a Q6600 B3 - and that was in a Shuttle, which confounded things; apparently it was an issue with that batch.

DanceswithUnix
Is that the temperature reported in the hardware monitoring screen in the BIOS?

I'm out of touch with the circuitry design, but there used to be components on the motherboard that the CPU thermistors were referenced against, and if you didn't know the values you didn't know the offset and scaling values to apply to the ADC reading so you got iffy results. Now, the BIOS is the only software on the planet tied by residing on the same PCB to knowing how your thermal circuitry works. If the BIOS screen reads wrong, I would raise a bug report against Gigabyte.

It's happening on MSI and Asus motherboards too with the R7 1700X/1800X, even with 280mm-rad H110i coolers. The R7 1700 does not have the problem.

Edit!!

You know what I am done talking about Ryzen for a while.

People are getting stupidly defensive about it - even with the SoC thing I talked about months ago, regarding certain reviews that isolated CPU power consumption only, which would paint Ryzen in a worse light compared to Intel, not a single person batted an eyelid.

The moment you mention it now - oh noes! He is trying to put AMD down, let's fight!!

Then when I try to highlight that the temp issue is seen by a few people, and not by those with the R7 1700, that is another set of arguments.

Second Edit!!

The people EVERYBODY SHOULD BE BLAMING for all of this are AMD. Not me or the fairy godmother! ;)

They had weeks until the end of March and would still have hit a Q1 launch - I will never understand AMD as a company. Even after 11 years of trying to be fair to them, I still don't understand this mentality of theirs, and I don't think I ever will.

Whatever the real reason is - wrong offsets for the X-series sensors, TIM or IHS issues, overvolting problems, etc. - they pushed Ryzen out half-baked and expected the end users to beta test it for them, at a time when they can least afford to, since they are not Intel and don't have that level of mind share. It leads to all these arguments on forums. Every AMD launch this seems to happen.
Posted by watercooled - Sun 12 Mar 2017 15:32
Seriously CAT you're being paranoid. At no point have I blamed you for anything, I'm simply disputing your apparent claim that the reported temperatures are logical. That is not your fault, nor something you have any control over. I'm simply disagreeing with what these readings actually mean. As I've said before, I know with certainty the temperature of the IHS - there is no sensible arguing there, it is pure fact measured by myself. I am not ‘just feeling the heatsink’ - for the xth time, I am feeling the heatspreader - the way the 212 cooler mounts, the IHS is large enough that some of it is still exposed and can be touched with a finger, or a thermal probe. I am not, as you're implying, making any assumptions about the thermal resistance of the cooler itself - it is not a part of this logic whatsoever.

And I know what you're saying about the differences between 1700 and 1700X clock speeds/voltages, and I am certainly not ignoring that. I still disagree that this disparity could lead to such significant differences in temps. Without getting into thermal resistance calculations, a fractional rise in thermal dissipation should not lead to a more than doubling of delta-to-ambient with a near-constant thermal resistance (which for our purposes, the heatsink pretty much is). This part is completely illogical. How do you explain that these 1700X idle temps are higher than 1700 load temps based on this logic? The 1700 quite obviously has massively higher load power consumption than the 1700X's idle power consumption! Unless all of these 1700X have broken IHS, it is not a logical conclusion that these are true, linear temperature readings.
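That thermal-resistance point can be sanity-checked with a back-of-the-envelope sketch (the resistance and power figures below are purely illustrative assumptions, not measurements from either chip):

```python
# With a near-constant cooler thermal resistance, delta-T above ambient
# scales roughly linearly with dissipated power. Figures are illustrative.

def delta_t(power_w, r_th_c_per_w):
    """Temperature rise above ambient for a given dissipation."""
    return power_w * r_th_c_per_w

R_TH = 0.5  # assumed lumped cooler resistance, degC per watt

d_1700 = delta_t(90, R_TH)    # hypothetical 1700 load dissipation
d_1700x = delta_t(110, R_TH)  # hypothetical 1700X load dissipation

# A ~20% bump in power gives a ~20% bump in delta-T, nowhere near
# the doubling that the raw tCTL readings seemed to imply.
print(d_1700, d_1700x, round(d_1700x / d_1700, 2))  # 45.0 55.0 1.22
```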

I don't understand how I'm twisting an argument - TBH I didn't even realise it was an argument. I'm simply trying to think logically about the evidence presented, and I don't agree with some of your conclusions. I don't even know how anyone could describe me as being defensive??? If I was that concerned about it, or in some fact-hiding conspiracy, I wouldn't be posting about how frustrated I'm getting with stupid bugs like this, would I?

I'm not defending nor putting down AMD. I am annoyed at AMD and Gigabyte collectively at the moment - that may shift if/when I find out who is actually to blame for this. The fact this is also happening on other brands of motherboard points the finger more at AMD - even if it's a BIOS issue, AMD should have made more of an effort to ensure stability before release.

Make of that what you will, but seriously, yeah I'm annoyed and frustrated I wasted several hours yesterday and probably more in the future getting this resolved, but I'm not annoyed at or blaming you for anything. Me not agreeing with your own personal conclusions is not something to take personally!

@Danceswithunix: Yes, the BIOS readings are crazy too, and I think software is just reading this value. I have already submitted a ticket to Gigabyte - their response will likely be the deciding factor of whether I keep the system or not.
Posted by DanceswithUnix - Sun 12 Mar 2017 15:34
CAT-THE-FIFTH
Why would it wash out? The chip in an i7 6900K is 230-ish mm²; the R7 series is around 190mm², so the Intel die is around 20% larger, and the X99 motherboards have a much larger chipset too. You know very well that the AMD AM4 platform chipset really does not do that much. So when the Intel CPU is 20% larger, and when 99% of the chipset functionality is under the same cooler, unlike with X99, OFC it is going to run hotter.

The graph you showed is for the complete system, so it will no doubt be measured at the wall.

Just looking at the Ryzen and nothing else, we see 47W for the system. Well, that is after it has been through a ludicrous 1000W PSU, so we are using an entire 4.7% of the rated power available at the PSU; it is struggling like hell to maintain regulation and the efficiency is pants. Let's assume 50% to make the maths easier.

So at idle, the PSU is using about 24W, the entire rest of the system is pulling about 24W. Of that, say 4W for the SSD, call it 10W for an idle top end graphics card. The motherboard will need some power for things like the Intel ethernet chip, the VRM digi controllers, the fan controllers and it looks like there are some mux chips on there presumably to route the SLI lanes between graphics slots, I have no idea but let's conservatively guess at 5W for the lot.

That would leave about 5W for the CPU itself at idle. Sounds about right to me, and would leave the heatsink stone cold, just as Watercooled describes.
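That budget can be written out as a quick sketch; every figure below is the rough guess from the post, not a measurement:

```python
# Sketch of the idle power budget above, using the post's guesses.

wall_idle_w = 47        # measured at the wall
psu_efficiency = 0.5    # assumed: tiny load on a 1000W PSU

dc_side_w = wall_idle_w * psu_efficiency   # ~23.5W reaches the components

budget = {
    "SSD": 4,
    "idle top-end GPU": 10,
    "motherboard (NIC, VRM controllers, fans, muxes)": 5,
}

cpu_idle_w = dc_side_w - sum(budget.values())
print(f"estimated CPU idle power: {cpu_idle_w:.1f}W")  # ~4.5W
```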
Posted by CAT-THE-FIFTH - Sun 12 Mar 2017 15:46
NVM.

Ryzen is the bestest CPU in the world,the spawn of Keller's loins,it even changes its stance depending on the time-line,tis a wondrous creation it is! ;)
Posted by watercooled - Sun 12 Mar 2017 17:17
I really don't understand your whole reaction to this TBH.

How am I being defensive about anything? *Something* is broken, but I'm not just going to accept it's broken in one specific way when everything in front of me points to something else.

FWIW I also have a power meter connected to my system, and with a fairly old Enermax Modu82+ power supply, I'm getting about 50W idle at the wall. That's with a 2TB HDD, two SSDs, a number of fans, a DVD drive and a 280X connected. Under load, e.g. y-cruncher, this rises to about 150W - right in line with what the TDP suggests. Given PSU and VRM losses, that's not bad going.

Furthermore, HWmonitor has picked up on some power sensors for the processor - these also seem about right, with multi-core load getting about 10-12W per core, and idle power being around 5W total as DanceswithUnix guessed.

So, 5W at the die, with a stone cold IHS and cooler - exactly as you'd expect. The one thing that does not make sense given the rest of the data is the *reported* temperature. Therefore, I'm concluding that something is wrong with the temperature sensor.

I'm not suggesting anything else, blaming anyone, defending anyone, or implying anything else I've not explicitly written. If e.g. the CPU power sensors were reporting 500W whilst my power meter was reporting 150W, I'd conclude that something was wrong with the sensors, don't you agree?
Posted by watercooled - Sun 12 Mar 2017 21:14
For anyone following the thread wondering about temps, check out this post: BIOS updates for AM4 motherboards

A BIOS update fixed it in that case, now reporting realistic temps and running the fan properly - it seems that Gigabyte are concentrating on their higher-end boards first but I'll be watching keenly for an update for my board. I'll keep you posted.

Another worrisome development was the odd bit of stuttering in Windows, including dropped frames in YouTube, etc. I went through and manually re-installed the chipset and GPU drivers to the latest versions, and it *seems* to have been OK all day today since then. I'm not sure if I mentioned it, but I didn't bother re-installing Windows for the upgrade, so I could be to blame for that one. Again, I'll keep y'all posted in case anyone's interested.

BTW, if anyone has any questions about the system, any tests they want me to run, etc, just ask!

As far as recommendations go, I'd say unless you're really in need of upgrading, I'd wait a bit longer. Everything still feels very ‘beta’ at the moment, and while chances are this sort of bug will be ironed out given time, I'd find it annoying to have this as my only system. I was under no illusions about the performance of the system before buying - I took all of the numbers at face value with the hope, but not expectation, that many of them would improve with appropriate patches.
Posted by kompukare - Sun 12 Mar 2017 21:26
watercooled
BTW, if anyone has any questions about the system, any tests they want me to run, etc, just ask!

An open invitation!
Actually, if you are going to be doing any overclocking, I'd be interested in the power consumption.
From some of the forum posts I have seen, it seems the likes of HWiNFO can read the package power ratings. Now, I know that since the BIOS temps can't be trusted that rating might not be accurate, but I somehow still think it's more useful than a mains power tester (though obviously when overclocking it doesn't know about VRM or PSU losses).
What I'd like to know is the various frequency @ voltage = watts (and, if it's the same setup, temperature too) for as many clocks as possible. I guess this would vary based on the leakage/binning of the chips. There's no tool to read something like an ASIC rating for CPUs, like GPU-Z can for GPUs, is there?
Posted by watercooled - Sun 12 Mar 2017 21:37
I wasn't planning on overclocking (not something I do much these days) and I have a uATX B350 board, so probably not the best system for it. Also, I wouldn't be comfortable trying with the current state of the BIOS*, but I might give it a go at some point.

Yep, HWinfo can sense power ratings. In fact it seems Ryzen exposes per-core power ratings:


*That reminds me of another thing - navigating the BIOS is horrid - sometimes keypresses are registered normally, sometimes they're ignored, sometimes they're interpreted as two presses. And the mouse cursor is really slow. It's not a fun experience at the moment.
Posted by kalniel - Tue 14 Mar 2017 09:19
Watercooled, I think this explains your temp thing:
https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1

Specifically, the AMD Ryzen™ 7 1700X and 1800X carry a +20°C offset between the tCTL° (reported) temperature and the actual Tj° temperature. In the short term, users of the AMD Ryzen™ 1700X and 1800X can simply subtract 20°C to determine the true junction temperature of their processor. No arithmetic is required for the Ryzen 7 1700. Long term, we expect temperature monitoring software to better understand our tCTL offsets to report the junction temperature automatically.
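In code form, the correction AMD describes amounts to a simple per-model offset (the model strings are hard-coded here purely for illustration):

```python
# Convert the reported tCTL value to the true junction temperature, per
# AMD's community update: 1700X/1800X carry a +20C offset, the 1700 none.

OFFSET_C = {"1700": 0, "1700X": 20, "1800X": 20}

def junction_temp(tctl_c, model):
    """True Tj for a given reported tCTL and Ryzen 7 model."""
    return tctl_c - OFFSET_C[model]

print(junction_temp(95, "1800X"))  # 75
print(junction_temp(60, "1700"))   # 60
```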
Posted by Corky34 - Tue 14 Mar 2017 09:53
Can anyone explain how you get a consistent fan policy by adding 20°C to some CPUs and not others, or am I just being dumb? :undecided
Posted by kalniel - Tue 14 Mar 2017 11:12
Corky34
Can anyone explain how you get a consistent fan policy by adding 20°C to some CPUs and not others, or am I just being dumb? :undecided

You get the correct ramp-up towards tjmax. Fan speed should really be set as a function of the percentage of tjmax, but it tends instead to just be set on absolute temperatures.
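To illustrate the point, here's a toy fan curve keyed to the fraction of tjmax rather than to an absolute temperature - every threshold below is hypothetical:

```python
# Toy fan curve driven by fraction of tjmax rather than absolute temperature.
# All thresholds are hypothetical illustrations.

def fan_duty(temp_c, tjmax_c):
    """Fan duty cycle (0.0-1.0) as a function of % of tjmax."""
    frac = temp_c / tjmax_c
    if frac < 0.5:
        return 0.3  # quiet floor below 50% of tjmax
    if frac < 0.9:
        # ramp linearly from 30% to 100% duty between 50% and 90% of tjmax
        return 0.3 + 0.7 * (frac - 0.5) / 0.4
    return 1.0

# The same 60C reading means very different things on different chips:
print(fan_duty(60, 95))  # ~63% of tjmax -> partial ramp
print(fan_duty(60, 75))  # 80% of tjmax -> much harder ramp
```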
Posted by scaryjim - Tue 14 Mar 2017 11:54
Corky34
… a consistent fan policy by adding 20°C to some CPU's and not others …

“Consistent” is a tricky word, isn't it … on its own it's actually meaningless, as you have no idea what that consistency is meant to be measured against.

My suspicion is that they've analysed some 95W rated coolers and found that they're only 95W capable at 100% fan speed, and typical fan profiles either won't hit 100%, or won't hit it at a low enough temperature. So to ensure adequate cooling in those circumstances they have to force the fans to ramp up earlier than usual, which means reporting a higher temperature than usual.

Interestingly AMD's own overdrive tool reports temperatures as margin to tjMax (or at least it does in the version on my laptop!), which is really quite disconcerting - watching a temp gauge rapidly drop towards 0 under load…. ;)
Posted by Corky34 - Tue 14 Mar 2017 12:11
kalniel
You get the correct ramp up to tjmax. Fan speed should be set as a derivative of %tjmax really, but tends instead to just be set on absolute temperatures.
You may need to explain that, as wouldn't setting a higher tCTL just mean you're in effect lowering tjmax? Does that mean Ryzen has problems when operating above 75°C, rather than the (AFAIK) 95°C tjmax AMD claims?
Posted by kalniel - Tue 14 Mar 2017 12:17
Corky34
You may need to explain that, as wouldn't setting a higher tCTL just mean you're in effect lowering tjmax? Does that mean Ryzen has problems when operating above 75°C, rather than the (AFAIK) 95°C tjmax AMD claims?

If that's the reason it's being done, yes. I would like to see at what reported temps the chips begin to throttle… If it's not the reason, then I'd expect them to throttle at 115°C reported.
Posted by watercooled - Tue 14 Mar 2017 18:40
kalniel
Watercooled, I think this explains your temp thing:
https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1

Yep, that sounds correct! Gigabyte seem to be aware of the issue too and newer/beta BIOSes show the correct temperature. I'm just glad there's no fault with the thermal path itself which was the thing concerning me.