
ARM to Intel: it’s not just about the transistor

by Scott Bicheno on 5 May 2011, 16:55

Tags: Intel (NASDAQ:INTC), ARM

Quick Link: HEXUS.net/qa5tb


ARM wrestle

While not exactly unanticipated, Intel's unveiling of its 22nm manufacturing process, featuring a new kind of transistor, seems to have got many commentators rather bullish about Intel's prospects in the mobile device market, where it's currently a virtual bystander.

In a curious piece of timing, market researcher IDC chose today to announce it will be including the ARM architecture in its PC market forecasts from now on, and kicked things off by predicting that 13 percent of PCs will be running chips based on the ARM architecture by 2015. Seeing as there are currently no PCs running ARM chips, and IDC doesn't count tablets as PCs right now, that's quite a prediction.

So with all this news and speculation flying around, we thought it was time to have a chat with ARM CMO Ian Drew, who was himself a long-time Intel employee before joining ARM in 2005. While we didn't expect him to have any inside information on Intel - far from it - we wanted to know what the significance of Intel's announcement was to ARM.

"Intel has always innovated through process improvement," said Drew, "But it's not just about the transistor. You have to also consider the architecture, SoC design, the broader ecosystem, and so on."

So Drew isn't contesting the significance of Intel's technological breakthrough. But while a smaller manufacturing process undoubtedly confers power/performance benefits, so do the micro-architecture, the efficiency of the whole SoC, software optimisation, and so on.

We put it to Drew that Intel had said it was a ‘misconception’ that ARM's architecture was somehow intrinsically more power-efficient than Intel's. "Fewer transistors mean lower power," he countered, "so RISC is inherently lower power." Drew also pointed out that ARM has already announced test chips at 22 and 20nm, with foundry partners TSMC and GlobalFoundries working on those processes, and that IBM is already working on 14nm.

Moving on to the IDC report, Drew took it with a pinch of salt. "If it happens we'll be very happy, but forecasts are always wrong in some shape or form," he said. "The ARM business model is not to favour one form over another." In other words, whether you call it a PC, a tablet or a smartbook, if it's got ARM IP in it then ARM's happy.

The implications for the low-power market were the main theme of the Q&A after Intel's announcement yesterday. While Intel itself didn't focus much on mobile devices in the formal presentation, it clearly feels that 22nm plus the shiny new transistor can be a real difference-maker in its bid to take on ARM in mobile devices.

But we can't fault Drew's assertion that it will take more than just technological innovation for Intel to convince the mobile device ecosystem to move away from ARM - just ask MIPS. The biggest obstacle Intel has to overcome is momentum. The likes of Qualcomm, TI and NVIDIA already have relationships with mobile OEMs, and the trend is for the bigger device-makers to license ARM's technology in order to design their own chips.

So Intel probably has to not only match, but significantly beat, ARM's technology offering before mobile OEMs will even consider moving away from ARM. And with the likes of Apple, Samsung, and maybe more already making their own SoCs, the number of potential customers may be shrinking.

 



HEXUS Forums :: 11 Comments

Nice to hear that ARM is working on 20nm and 22nm chips too!! :)
I'm quite looking forward to seeing ARM chips in desktop computers. Even the chips available now would be more than enough for most people, coupled with an appropriate OS of course - it doesn't make sense for the OS to use more resources than any applications likely to run on it.
Me too, atom is nice and all, but x86 + low power is a bit of an oxymoron.
aidanjt
Me too, atom is nice and all, but x86 + low power is a bit of an oxymoron.

Considering that x86 chips these days are internally RISC and use a hardware decoder to convert from x86 to the chip's internal instructions, they are at a disadvantage.
It shouldn't be about the instruction set TBH. It should be about the compiler.
Meh, moving to internal RISC actually improved power consumption by removing dozens of lesser used instruction logic units and being able to improve the efficiency of the remaining ones. But yeah, sufficiently generic code should easily compile for any target architecture, and the need for a gazillion specific instructions is long since dead. Modern compilers already do a pretty good job of optimising, too.
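To illustrate that last point about generic code, here's a minimal sketch; it assumes a GCC toolchain on the host and a cross-compiler such as arm-linux-gnueabihf-gcc installed for ARM targets. The same portable C source builds unchanged for either architecture - only the compiler invocation differs.

/* hello_portable.c - nothing architecture-specific in here */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t word = 0xCAFEBABE;

    /* Plain, portable C: the compiler picks the right instructions
       for whichever target it was built for. */
    printf("low byte of 0x%08X is 0x%02X\n",
           (unsigned)word, (unsigned)(word & 0xFF));
    return 0;
}

/* Build for the host (x86):
       gcc hello_portable.c -o hello_x86
   Cross-build for ARM (assuming the cross-toolchain is installed):
       arm-linux-gnueabihf-gcc hello_portable.c -o hello_arm */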