
ITRS roadmap predicts end of process miniaturisation by 2021

by Mark Tyson on 26 July 2016, 10:31



The last ever ITRS (International Technology Roadmap for Semiconductors) roadmap has been published. An interesting prediction for the industry within the pages of the report, published earlier this month, is that "the transistor could stop shrinking in just five years," reports IEEE Spectrum. The ITRS is published by the Semiconductor Industry Association, a US trade group which represents such IT giants as IBM and Intel.

According to the president of the Semiconductor Industry Association (SIA), John Neuffer, the roadmap "provides key findings about the future of semiconductor technology and serves as a useful bridge to the next wave of semiconductor research initiatives". As miniaturisation hits its limits, the chip making industry is expected to turn further to 3D structures in transistors and circuits. It is noted that the memory industry has already gone big on 3D, and monolithic 3D integration still holds lots of promise, as recent news of BeSang Inc's 3D Super-NAND shows.

Process miniaturisation has already become so difficult that only a few highly successful companies have managed to hang on. IEEE Spectrum says this difficulty has whittled the 19 companies developing and manufacturing leading-edge transistors in 2001 down to just four in 2016: Intel, TSMC, Samsung, and GlobalFoundries. This fiercely competitive quartet doesn't want to work together with the ITRS - each now has its own roadmap - and that's why the latest ITRS roadmap will be the last.

Moore's Law turned 50 in April 2015, but with this kind of roadblock in the way of miniaturisation it doesn't look to have a long lifespan left if the industry relies only on shrinkage to move forward. Earlier last year it was predicted that the Law could keep up its trend until 7nm was reached; beyond that, industry progress enters the unknown. Late last year the ITRS was looking at possible tech to keep Moore's Law on track beyond process shrinkage, pushing forward ideas such as "new kinds of transistors and memory devices, neuromorphic computing, superconducting circuitry, and processors that use approximate instead of exact answers."

HEXUS Forums :: 10 Comments

So, is it finally time to adopt BTX cases with better cooling so we can cram more chips in? Because if the chips stop shrinking, then people's hunger for more power will mean computers need to get bigger. I look forward to playing games on a PC the size of a Cray X/MP :D
3D technologies, including layers of computation embedded in memory, will be great. The only problem is they're also expensive to manufacture. I guess this becomes viable once the cost of further feature-size shrinks goes hockey-stick.

Fabrication efforts aside, the other problem is power - we struggle with the thermal envelope of a single layer of transistors, but when we have multiple layers, there's potentially a lot more power to dissipate, and some of it is stuck in the middle of the chip.
We might push more compute to remote services. Phones and devices just become thin clients.
What does that solve? You still want to achieve performance improvements on the compute, wherever it is. In a DC you get the benefit of things like high ambient temperatures and liquid cooling, but that only gets you so far. If you're not able to double your compute capability in the same physical space every couple of years, then you're in a sticky situation.
We might push more compute to remote services. Phones and devices just become thin clients.

And MULTICS is born, welcome to the 1960's :D

Part of me thinks I have heard this all before too many times to see scaling get rescued by things like immersion lithography. Does sound more like the end of the lithography road this time though, just not enough atoms left.

There must still be some tricks to pull: SOI 5nm may be better than bulk 5nm, for example, if it can be done.

OTOH, phones etc should now be fast enough with current technology. Perhaps people should just write better software - you can get orders of magnitude better performance if you go back to writing in C and tiling your code to work in L1 cache, rather than using networked Java interfaces and wasting most of your cycles converting data to XML and back.