
KiloCore 1000-core chip can execute 1.78 trillion IPS

by Mark Tyson on 20 June 2016, 11:01



A team of researchers at the University of California, Davis, Department of Electrical and Computer Engineering has revealed an engineering sample of a 1000-core processor chip. Dubbed the KiloCore, the chip contains 621 million transistors and is capable of executing 1.78 trillion instructions per second, yet it is hugely power efficient and can be powered by a single AA battery.

The KiloCore chip was fabricated by IBM using 32nm CMOS technology. Previous many-core processor chips have topped out at around 300 cores, says the team. The KiloCore is not only the chip with the most cores ever; it is also "the highest clock-rate processor ever designed in a university," says research project leader Bevan Baas, professor of electrical and computer engineering.

The researchers say that "each processor core can run its own small program independently of the others," which is more flexible than a SIMD approach and results in fast parallel computation with high throughput and lower energy use. Energy use is reduced further by the ability to vary the clock of, and shut down, each core independently. The maximum core clock frequency is 1.78GHz, and at this speed the KiloCore can compute 1.78 trillion instructions per second. Clocked lower, the researchers boast, the KiloCore is efficient enough to execute 115 billion instructions per second "while dissipating only 0.7 Watts, low enough to be powered by a single AA battery".
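The headline figures hang together with simple arithmetic. A back-of-envelope sketch (the one-instruction-per-cycle-per-core assumption is ours, not stated in the article):

```python
# Sanity-check the figures quoted above. All numbers come from the article;
# "one instruction per cycle per core" is an assumption for the peak estimate.
cores = 1000
clock_hz = 1.78e9                  # maximum per-core clock, 1.78 GHz

peak_ips = cores * clock_hz        # assumed 1 instruction/cycle/core
print(f"peak: {peak_ips:.2e} instructions/s")       # 1.78 trillion IPS

# Low-power operating point quoted by the researchers:
low_power_ips = 115e9              # 115 billion instructions/s
power_w = 0.7                      # 0.7 W dissipation
ips_per_watt = low_power_ips / power_w
print(f"efficiency: {ips_per_watt:.3g} instructions/s per watt")
```

At the 0.7W operating point that works out to roughly 164 billion instructions per second per watt, which is why a single AA battery suffices.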

During the presentation of a working KiloCore sample at the 2016 Symposium on VLSI Technology and Circuits in Honolulu late last week, we learnt that the researchers have readied a compiler and automatic program mapping tools for use in programming the chip. Applications already developed include wireless coding/decoding, video processing, and encryption. The KiloCore is particularly attractive for scientific data applications and data centre record processing, as such tasks require the processing of large amounts of data in parallel.



HEXUS Forums :: 12 Comments

Wonder how many Transputers you could get on a chip these days.
DanceswithUnix
Wonder how many Transputers you could get on a chip these days.
A lot.

One could have a play with an FPGA and http://www.opentransputer.org/

Thinking about processors in general: https://www.cs.bris.ac.uk/~dave/benes.pdf

And if you can get hold of James Hanlon's thesis, he looks at some of the physical aspects of cramming cores in, I think: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.652048

As for me, I'm currently working on getting 32 picoRV32 cores into a network-on-chip in an FPGA, with the possibility of expanding that out to get something in the region of a couple of hundred cores. As an ASIC, 1K cores might be feasible.

…and they're all a nightmare to program for, in the conventional sense ;)
But can it run Doom?
Steve
…and they're all a nightmare to program for, in the conventional sense ;)

Which is the thought that made me think “Haven't we been here before?” and reminded me of Transputers :D

I think I would want a modern take on the idea though. Proper RISC design, 64 bit, rather than the funky stack based instructions of the Transputer. I don't think the signed address space worked out that well either, certainly not for me as a C programmer. But the idea of a simple core with comms links tied in seems still valid, just these days you could make most of those links into a FIFO to talk to other on chip cores and only go serial when heading off chip.
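The "simple core with comms links" idea above can be sketched in software: treat each core as an independent worker and each on-chip link as a bounded FIFO, roughly in the CSP/Transputer spirit. A minimal, purely illustrative sketch (all names are hypothetical; threads and queues stand in for hardware cores and links):

```python
# Toy model of simple cores joined by FIFO links, as described above.
# Each "core" is a thread; each on-chip "link" is a bounded queue.
import queue
import threading

def core(cid, inbox, outbox):
    """A trivial 'core': read a value from its input link, do some
    stand-in work, and forward the result on its output link."""
    while True:
        msg = inbox.get()
        if msg is None:            # poison pill: propagate shutdown
            outbox.put(None)
            return
        outbox.put(msg + cid)      # stand-in for real per-core work

# Wire four cores into a pipeline with bounded FIFO links between them.
links = [queue.Queue(maxsize=4) for _ in range(5)]
threads = [threading.Thread(target=core, args=(i, links[i], links[i + 1]))
           for i in range(4)]
for t in threads:
    t.start()

links[0].put(10)       # inject work at the head of the pipeline
links[0].put(None)     # then ask the pipeline to shut down
result = links[-1].get()
print(result)          # 10 + 0 + 1 + 2 + 3 = 16
for t in threads:
    t.join()
```

The bounded `maxsize` matters: a full FIFO applies backpressure to the upstream core, which is essentially how blocking channel communication paced the Transputer's links too.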
^Crysis :D