A quickpath to success
QuickPath interconnect and multi-socket musings

Having a fast, efficient, aggressively-prefetching memory subsystem is good, of course, but it needs to be complemented with other high-speed links in the processor topology for it to make real sense. That's why Core i7 adopts a single point-to-point QuickPath Interconnect (QPI) - previously known as CSI - to link in with the rest of the system.
The QPI bus on the higher-speed Core i7 CPUs will operate at 6.4GT/s, yielding 25.6GB/s of aggregate bandwidth (both directions combined) over a 20-bit link, whilst other models will use a 4.8GT/s link (19.2GB/s). This is where Core i7 really wins over Core 2, because the latter pushed not only memory reads and writes over the FSB but also had to share that same bandwidth quota (a maximum of 12.8GB/s) with data channelled to and from the peripherals.
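For those who like to see where those figures come from, here's a rough, illustrative sketch of the arithmetic (the function and its name are ours, not anything from Intel): each 20-lane QPI link carries 16 bits of payload per transfer in each direction, so aggregate bandwidth is simply transfer rate x 2 bytes x 2 directions.

```python
# Back-of-the-envelope QPI bandwidth arithmetic (illustrative only).
# A QPI link is 20 lanes wide, of which 16 carry payload data, and it runs
# in both directions at once, so aggregate GB/s = GT/s x 2 bytes x 2 directions.

def qpi_bandwidth_gbs(transfer_rate_gts, payload_bits=16, directions=2):
    """Aggregate link bandwidth in GB/s for a given transfer rate in GT/s."""
    return transfer_rate_gts * (payload_bits / 8) * directions

for rate in (6.4, 4.8):
    print(f"{rate} GT/s link -> {qpi_bandwidth_gbs(rate):.1f} GB/s aggregate")

# Output:
# 6.4 GT/s link -> 25.6 GB/s aggregate
# 4.8 GT/s link -> 19.2 GB/s aggregate
```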
Further QPI links will be integrated into Nehalem processors destined for multi-socket systems (Xeons), with the link count rising in line with the number of CPUs, so a 4P system would use four QPI links. That means memory attached to one processor can be accessed by another, with the request travelling over a QPI link and through the owning processor itself; an illustrative sketch of the idea follows.
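As a purely illustrative toy model of that arrangement (the latency numbers and names below are made-up placeholders for the sake of the example, not measurements or Intel specifications), think of a fully-connected 4P box: a request for local memory goes straight to the processor's own integrated controller, while a request for another processor's memory takes one extra QPI hop before the owning processor services it.

```python
# Toy model of a fully-connected four-socket NUMA system (illustrative only).
# Each socket owns some memory; reaching another socket's memory costs one
# QPI hop to the owning processor, whose memory controller then serves it.
# The nanosecond figures are hypothetical placeholders, not real measurements.

SOCKETS = [0, 1, 2, 3]
LOCAL_ACCESS_NS = 60   # hypothetical local DRAM access latency
QPI_HOP_NS = 40        # hypothetical extra cost of crossing one QPI link

def memory_latency_ns(requesting_socket, owning_socket):
    """Estimated latency for requesting_socket to read memory owned by owning_socket."""
    hops = 0 if requesting_socket == owning_socket else 1  # fully connected: at most one hop
    return LOCAL_ACCESS_NS + hops * QPI_HOP_NS

for src in SOCKETS:
    print(f"socket {src}:", [memory_latency_ns(src, dst) for dst in SOCKETS])
```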
If you think you've seen this before, you have: AMD has been using a similar approach with HyperTransport, with a variable number of links in multi-socket systems, for a while now.