HEXUS Forums :: 24 Comments

Posted by Boon72 - Mon 29 Jun 2020 11:22
More money grabbing with the motherboard change as well. Might be time to look at AMD, who seem to be able to improve their CPUs whilst keeping the same pins, and therefore the same motherboard.
Posted by 3dcandy - Mon 29 Jun 2020 12:01
With 1700 pins, the actual socket cost and CPU cost are going to be forced high simply by component count.

If this is not a huge increase in performance, it will not be received well.
Posted by SpeedyJDK - Mon 29 Jun 2020 12:51
Ever since the release of the 75MHz Pentiums with errata, it's been a wave of errors, bugs and poor performance for Intel. Over the last 3 years, they've totally sunk. It will take some time, if they ever manage to come up with a competing product again. And if they do, do we want to go back?
I don't need an Intel product to slow down my machine again.
Posted by kompukare - Mon 29 Jun 2020 13:08
They've been on 1150-1200 pins for all the dual-channel CPUs since Nehalem in 2008, and even the short-lived triple-channel CPUs were only 1366 pins.
500 extra pins is not something Intel would do lightly though, so why now?
Quad channel? Triple channel? More power planes?
This might mean new CPU coolers too.
Posted by Tunnah - Mon 29 Jun 2020 13:34
Seems a bit weird that it's DDR5; it's not even minimally available yet. Normally servers use it for 2 years or so before we see it on desktop.
Posted by JayN - Mon 29 Jun 2020 13:54
With the addition of the new AMX instructions, they are planning to do a lot of tiled memory processing for AI. Convolution parameter requirements keep doubling every few months, we are told, so it would be a big win to have separate external memory buses for parameters, program, data … like DSP chips have provided for a long time. That's my guess for the extra pins.
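The "tiled memory processing" mentioned here is essentially the blocked access pattern below - a pure-Python sketch of a tiled matrix multiply, the kind of kernel that AMX-style tile instructions accelerate in hardware. The tile size and function name are purely illustrative:

```python
def matmul_tiled(A, B, n, T=2):
    """Multiply two n x n matrices one T x T tile at a time.

    Working on small tiles keeps each block of A, B and C hot in fast
    local storage (tile registers, in AMX's case) while accumulating.
    """
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, T):
        for j0 in range(0, n, T):
            for k0 in range(0, n, T):
                # Multiply one pair of tiles, accumulating into C's tile.
                for i in range(i0, min(i0 + T, n)):
                    for j in range(j0, min(j0 + T, n)):
                        s = 0.0
                        for k in range(k0, min(k0 + T, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] += s
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_tiled(A, B, 2))  # same result as a naive multiply
```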

They also need to beef up the PCIe4 pin count so they have enough pins for both a PCIe4 SSD and a PCIe4 GPU.

Btw, since their Sapphire Rapids CPUs and Ponte Vecchio GPUs will use PCIe5/CXL in late 2021, why would we not expect desktop cores and GPUs with PCIe5 in the same timeframe? I don't recall seeing a roadmap showing Alder Lake with PCIe4 or PCIe5.
Posted by philehidiot - Mon 29 Jun 2020 14:24
Boon72
More money grabbing with the motherboard change as well. Might be time to look at AMD, who seem to be able to improve their CPUs whilst keeping the same pins, and therefore the same motherboard.

Yes and no. Bear in mind the recent debacle over motherboard support. CPU sockets =/= guaranteed compatibility. It depends on how often you plan on swapping CPUs, but an expensive, high end mobo might only support new CPUs for 3 years from birth. So, if you buy a mobo towards the end of its run, you might find in 2 years it ceases to be compatible with new CPUs even though the socket matches.

The physical socket is only a set of pins and there are a lot of technical deeliemabobs to support each CPU. You don't necessarily know what those deeliemabobs are going to be in 3 or 4 years time and the last thing a company like AMD needs is to restrict the performance of a new generation of CPUs and possibly lose out to Intel, just for the small number of people who have a 4 year old mobo and want to upgrade.

There are some advantages to Intel's approach in this regard. For people who are buying expensive, high end CPUs, you're pretty certain to need to replace the mobo when you upgrade (assuming the longer upgrade cycle). The people who benefit from this are those who buy a motherboard and CPU but can't afford / commit to an expensive CPU. So they'll buy cheap to get the build going and then upgrade in a year or two as finances permit. They just need to be careful that the mobo they have chosen is going to have vendor support as well as AMD support. AMD might support the socket, but the mobo manufacturer needs to release BIOS updates for your specific product.

EDIT: Intel's approach certainly removes uncertainty and complexity around support. It would be very easy for someone learning their way around the PC world to look at a motherboard for their first build, see “AM4” and not realise their chosen CPU just won't work. AMD's approach is more consumer friendly, but opens up some extra pitfalls for the uninitiated. Problem there is that if you make something like the custom PC market have a higher and higher barrier to entry, it will eventually become unprofitable. Then we're back to the idea of CPUs soldered onto motherboards along with RAM and it all sold as a bundle.
Posted by 3dcandy - Mon 29 Jun 2020 14:33
philehidiot
Yes and no. Bear in mind the recent debacle over motherboard support. CPU sockets =/= guaranteed compatibility. It depends on how often you plan on swapping CPUs, but an expensive, high end mobo might only support new CPUs for 3 years from birth. So, if you buy a mobo towards the end of its run, you might find in 2 years it ceases to be compatible with new CPUs even though the socket matches.

The physical socket is only a set of pins and there are a lot of technical deeliemabobs to support each CPU. You don't necessarily know what those deeliemabobs are going to be in 3 or 4 years time and the last thing a company like AMD needs is to restrict the performance of a new generation of CPUs and possibly lose out to Intel, just for the small number of people who have a 4 year old mobo and want to upgrade.

There are some advantages to Intel's approach in this regard. For people who are buying expensive, high end CPUs, you're pretty certain to need to replace the mobo when you upgrade (assuming the longer upgrade cycle). The people who benefit from this are those who buy a motherboard and CPU but can't afford / commit to an expensive CPU. So they'll buy cheap to get the build going and then upgrade in a year or two as finances permit. They just need to be careful that the mobo they have chosen is going to have vendor support as well as AMD support. AMD might support the socket, but the mobo manufacturer needs to release BIOS updates for your specific product.

EDIT: Intel's approach certainly removes uncertainty and complexity around support. It would be very easy for someone learning their way around the PC world to look at a motherboard for their first build, see “AM4” and not realise their chosen CPU just won't work. AMD's approach is more consumer friendly, but opens up some extra pitfalls for the uninitiated. Problem there is that if you make something like the custom PC market have a higher and higher barrier to entry, it will eventually become unprofitable. Then we're back to the idea of CPUs soldered onto motherboards along with RAM and it all sold as a bundle.

I think that is all well and good, but Intel have just released a new socket and chipset and this will be replaced next year. A lifespan of possibly less than a year is (quite rightly) seen as a bit of a joke…

it *may* be a great move, but again it may not…
Posted by 3dcandy - Mon 29 Jun 2020 14:43
JayN
With the addition of the new AMX instructions, they are planning to do a lot of tiled memory processing for AI. Convolution parameter requirements keep doubling every few months, we are told, so it would be a big win to have separate external memory buses for parameters, program, data … like DSP chips have provided for a long time. That's my guess for the extra pins.

They also need to beef up the PCIe4 pin count so they have enough pins for both a PCIe4 SSD and a PCIe4 GPU.

Btw, since their Sapphire Rapids CPUs and Ponte Vecchio GPUs will use PCIe5/CXL in late 2021, why would we not expect desktop cores and GPUs with PCIe5 in the same timeframe? I don't recall seeing a roadmap showing Alder Lake with PCIe4 or PCIe5.

All good points… but it's another chipset and socket in an extremely short timeframe…
It just seems Intel can't stop the bad-news express right now.
Posted by QuorTek - Mon 29 Jun 2020 15:30
Please Intel, who is running your QA?!? And who designs this?!?

It is so much less efficient having to replace the motherboard every time you make something new… Who in the Intel department did not get the memo, and why have the people responsible not been fired yet for costing the company money… and, much much worse, hurting potential customers?
Posted by duc - Mon 29 Jun 2020 15:36
Could be integrating one of those Xe graphics tiles.
Posted by edmundhonda - Mon 29 Jun 2020 17:17
big.LITTLE on desktop sounds like the least exciting new USP they could have implemented.
Posted by 3dcandy - Mon 29 Jun 2020 18:00
edmundhonda
big.LITTLE on desktop sounds like the least exciting new USP they could have implemented.

and least useful for many as well
Posted by Tabbykatze - Mon 29 Jun 2020 18:33
edmundhonda
big.LITTLE on desktop sounds like the least exciting new USP they could have implemented.

That's the big thing I get from this: why would you want hybridisation on the desktop?

I'm going to have to see it to believe it is useful.
Posted by John_Amstrad - Tue 30 Jun 2020 07:32
No lessons learnt. It's a shame.
Posted by cheesemp - Tue 30 Jun 2020 10:34
Tabbykatze
That's the big thing I get from this: why would you want hybridisation on the desktop?

I'm going to have to see it to believe it is useful.

So they can show nice little graphs where it sips power most of the time but still does well in benchmarks. Never mind that you could just design the higher-powered cores to downclock and get a similar effect. It's not like mobile, where chip size and heat really matter. It shows they only care about laptops, where this might be more useful.
Posted by Cinetyk - Wed 01 Jul 2020 13:41
Another socket change so soon after LGA1200? Must be frustrating for those who recently upgraded to 400-series boards.
Posted by CAT-THE-FIFTH - Wed 01 Jul 2020 15:26
Intel desktop consumer socket CPUs are technically APUs made for laptops, i.e. why they have an IGP. So the core arrangement makes more sense for a laptop.
Posted by Xlucine - Wed 01 Jul 2020 21:46
Recent Intel sockets haven't had an excess of pins (e.g. AM4 has 1331), so more pins than 1200 is reasonable. I don't know why they would need an extra 500 though - a full x16 PCIe slot only needs 64 pins to carry the data.
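That 64-pin figure follows from the lane structure: each PCIe lane carries two differential pairs (TX+/TX- and RX+/RX-), so four data pins per lane. A quick sketch of the arithmetic (the function name is just for illustration):

```python
# Each PCIe lane = 2 differential pairs (TX and RX) x 2 pins per pair.
PINS_PER_LANE = 2 * 2  # 4 data-carrying pins per lane

def pcie_data_pins(lanes: int) -> int:
    """Data-carrying pins for a PCIe link of the given width.

    Excludes clock, power, ground and sideband pins, which add more.
    """
    return lanes * PINS_PER_LANE

print(pcie_data_pins(16))  # x16 GPU slot -> 64
print(pcie_data_pins(4))   # x4 NVMe SSD -> 16
```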

This is another nail in the coffin of 10th gen though - reheated Skylake again, on a platform it can't properly use, is just a thinly veiled attempt to get us to buy two £500 CPUs in short order.

edmundhonda
big.LITTLE on desktop sounds like the least exciting new USP they could have implemented.

I guess having some lower-power cores for background tasks will free up thermal headroom to let the cores doing useful work boost that little bit higher? Definitely a solution looking for a problem on desktop.
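That guess amounts to a placement policy like the toy sketch below - background work goes to efficiency cores so performance cores keep their thermal headroom. The `Task` type, core names and policy are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    background: bool  # e.g. indexing, updates, telemetry

def assign(tasks):
    """Route background tasks to low-power cores, foreground to big cores."""
    placement = {}
    for t in tasks:
        placement[t.name] = "efficiency-core" if t.background else "performance-core"
    return placement

tasks = [Task("game", False), Task("indexer", True), Task("updater", True)]
print(assign(tasks))
```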

ETA: I was wondering how the consumed energy (and therefore CO2 emissions) is balanced between use and manufacture of a computer, turns out it's probably dominated by the manufacture (62-70%):
https://www.sciencedirect.com/science/article/abs/pii/S0959652611000801?via%3Dihub

The study is quite old, but a cursory Google search can't find anything more recent. Based on this, I bet the extra CO2 emissions from manufacturing the extra die area for the low-power cores outweigh any potential savings.
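As a rough sketch of that manufacture-vs-use split: the 62-70% manufacturing share comes from the linked study, but the absolute lifetime figure below is an invented placeholder, not a measurement:

```python
# Back-of-envelope split of a PC's lifetime CO2 between manufacture and use.
lifetime_kg_co2 = 600.0    # assumed total lifetime footprint (illustrative)
manufacture_share = 0.66   # midpoint of the study's 62-70% range

manufacture_kg = lifetime_kg_co2 * manufacture_share
use_kg = lifetime_kg_co2 - manufacture_kg
print(f"manufacture: {manufacture_kg:.0f} kg, use: {use_kg:.0f} kg")
```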
Posted by thewelshbrummie - Thu 02 Jul 2020 02:08
Interesting to see the big jump in pin count - if that means a physically larger socket then pretty much every cooler on the market won't fit anymore, unless the pin pitch can be reduced.

philehidiot
Yes and no. Bear in mind the recent debacle over motherboard support. CPU sockets =/= guaranteed compatibility.

For AMD, that's true.

Not the case on Intel platforms since Sandy Bridge: match chipset with socket and you're good to go, with very few exceptions (the 2 desktop Broadwell CPUs weren't supported on original 8-series Haswell boards, though those still got Devil's Canyon/Haswell Refresh, and you could argue about 1151 v1 & v2 being the same socket). On AMD that is definitely not the case. That, and every non-F CPU (and every pre-9th gen CPU) also has an iGPU.

I've read a few articles (possibly on Hexus, possibly elsewhere) that Intel's rationale behind a socket refresh every 2 years is to avoid the situation AMD had to deal with last month with Zen3 on B450 boards. People may not like it from an upgrade standpoint, but the logic is solid in that it's harder to make the mistake of buying a CPU that fits the socket but a motherboard with an unsupported chipset.

Ultimately it comes down to how often users are likely to want to swap out a CPU on an existing motherboard. For me, it's irrelevant as I usually only upgrade my CPU in 5-year cycles, so a motherboard upgrade is pretty much mandatory regardless of platform choice. I get that most users on Hexus are far more likely to upgrade their CPU on a regular basis and so CPU upgrade paths are more important; just saying that it's not crucial for everyone (and I'm guessing most businesses would just swap out the whole PC).

3dcandy
I think that is all well and good but Intel have just released a new socket and chipset and this will be replaced next year, so less than a year possibly is (quite rightly) seen as a bit of a joke…

A joke that they're doing the same thing that they've done since 2011? It's arguably easier to plan ahead knowing what you're getting into, whereas AMD seem to be totally unprepared for the backlash over Zen3 support on B450.

That and with Ice Lake having only just been launched, Rocket Lake CPUs most likely won't launch until next year and will work on current gen motherboards, just like every pair of CPU generations since Sandy Bridge (except Broadwell). Alder Lake doesn't make sense until 2022.

Like I said, it's a clear strategy and there's no bait & switch. E.g. if you buy a 10600K now and upgrade to an 11700K (assuming that's what Intel calls the i7 Rocket Lake CPU) then it will work with a BIOS update - but a hypothetical 12800K will need a new motherboard. AMD seem to be going down the route of allowing the current and 2 future CPU gens per chipset on a rolling basis, so long as the socket doesn't change - it's more flexible, but not as much as people tend to make out.

Cinetyk
Another socket change so soon after LGA1200? Must be frustrating for those who recently upgraded to 400's boards.

See above - it's been this way on Intel for nearly 10 years and it's clear how they operate.
Posted by Xlucine - Thu 02 Jul 2020 15:32
thewelshbrummie
Interesting to see the big jump in pin count - if that means a physically larger socket then pretty much every cooler on the market won't fit anymore, unless the pin pitch can be reduced.

There's a fair bit of space between the socket and the mounting holes on LGA1XXX, so they could probably stick with the same holes if they tried. Most coolers also support LGA2011, so at worst it'd just be a case of getting a new mounting bracket for the vast majority of coolers.
Posted by exotictechnews - Fri 03 Jul 2020 06:36
Helpful
Posted by persimmon - Tue 07 Jul 2020 08:35
So that's 500 extra pins? 484 for data processing and 16 reserved for the CIA.
Posted by JayN - Tue 22 Sep 2020 21:33
Xlucine
I guess having some lower power cores for background tasks will free up thermal headroom to run the cores doing useful work to boost that little bit higher? Definitely a solution looking for problems in desktop

Intel used Tremont cores in their P5900 family, with new instructions to reduce the latency of datapath acceleration. I wonder if these Gracemont Atom cores will be used in the same way, which would make sense if Intel truly thinks the future is higher-speed edge connections to remote exascale processing.

Intel also envisions increased use of parallel heterogeneous multi-core processing, with local GPU/NNP/FPGA accelerators. This might similarly make use of Atom cores for DDIO.

It will be interesting to see what Intel has planned, since Alder Lake is described as being the high performance application of their hybrid technology vs the low power application that they demoed with Lakefield.