
PCI Express

by Parm Mann on 20 October 2008, 00:00

Overview

A motherboard's main job is to act as a conduit between the various hardware elements that make up a PC. It needs to link the desired CPU(s), system memory, graphics card, hard drive(s) and add-in cards, and enable them to work in harmony. The central brain of any motherboard, the chipset, usually split into northbridge and southbridge, is connected to the rest of the system via a number of links, and it's some of these links/buses that have been improved over the last 18 months or so, since the introduction of the PCI-Express architecture.

History

Up until the emergence of PCI-Express, a motherboard's chipset bridges were connected to one another via, usually, proprietary links that allowed data to flow between parts of the system. The northbridge was connected to the CPU via some form of front-side bus and, on the other side, to system memory at varying speeds. Finishing off a trio of high-speed buses was a link to a discrete graphics card, up to AGP 8x, which offered a point-to-point 32-bit, 66MHz connection with bandwidth totalling 2.13GB/s (32 bits x 66MHz x 8 transfers per clock, divided by 8 to convert bits to bytes).

The southbridge was hooked up to the motherboard's storage subsystem and, in terms of expansion, most boards offered up to six 32-bit, 33MHz slots run via the PCI bus. Doing the basic maths tells us that the PCI bus offered up to 132MB/s of bandwidth. The inherent problem, though, was one of sharing: every device attached to the bus had to share that 132MB/s. The quoted bandwidth was fine when devices, even run concurrently, required little bandwidth to function but, over time, the bandwidth requirements of add-in cards and discrete board-mounted ASICs have grown considerably.
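
To sanity-check those figures, here's a minimal Python sketch (ours, not the article's; the function name is purely illustrative) that reproduces the peak-bandwidth arithmetic for both buses:

# Peak theoretical bandwidth of a parallel bus, in MB/s:
# width (bits) x clock (MHz) x transfers per clock, divided by 8 (bits -> bytes).
def bus_bandwidth_mb_s(width_bits, clock_mhz, transfers_per_clock=1):
    return width_bits * clock_mhz * transfers_per_clock / 8

print(bus_bandwidth_mb_s(32, 33))     # PCI: 132.0 MB/s, shared by every device on the bus
print(bus_bandwidth_mb_s(32, 66, 8))  # AGP 8x: 2112.0 MB/s (~2.13GB/s at the true 66.66MHz clock)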

Newer bandwidth-eating technologies such as Gigabit LAN, FireWire, discrete SATA and RAID cards can each, theoretically, swamp PCI's bandwidth quota individually, let alone collectively. To get around this, chipset designers had to architect southbridges with an increasing number of the aforementioned technologies amalgamated on-chip. This, however, merely skirted the obvious shortcomings of the established PCI bus protocol. What was needed was another type of bus that would be both smarter and able to scale as devices and board features became faster over time. For a time the PCI bus' longevity was thought to lie with PCI-X (not to be confused with PCI-Express), which widened the bus to 64 bits, doubling bandwidth to 266MB/s at the same 33MHz clock, with faster-clocked grades going further still. That, though, was simply a stopgap measure to keep the status quo intact.

Championed by Intel and debuting for the consumer-level market in 2004 came the long-awaited successor to the ailing PCI bus interconnect. Enter PCI-Express.

PCI-Express

Unlike the shared-bus nature of PCI, PCI-Express (a.k.a. PCIe) is a serial, point-to-point connection that has the added benefit of being bi-directional in nature. The dedicated, point-to-point connection between any pair of devices is referred to as a link, and each link is built from one or more lanes. The intrinsic beauty of PCIe is that each lane can carry 250MB/s in each direction simultaneously. That alone makes it faster than the 32-bit PCI bus and removes the need for devices to share a single bus. Further, PCIe lanes can be grouped together for transfers at higher speeds: an x16 link, made up of 16 lanes, can signal data at 4GB/s, again in each direction, which beats the 2.13GB/s afforded by AGP 8x.
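
Because a link's bandwidth scales linearly with its lane count, every width's per-direction figure falls out of one multiplication. Here's a minimal Python sketch, assuming first-generation PCIe's 250MB/s per lane per direction (the function name is our own):

# PCIe 1.x: each lane carries 250MB/s in each direction, and a link
# aggregates lanes, so bandwidth scales linearly with width.
def pcie_link_bandwidth_mb_s(lanes, per_lane_mb_s=250):
    return lanes * per_lane_mb_s

for width in (1, 2, 4, 8, 16):
    print(f"x{width}: {pcie_link_bandwidth_mb_s(width)} MB/s per direction")
# An x16 link works out at 4000MB/s (4GB/s) per direction, versus AGP 8x's 2.13GB/s.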

Therefore not only is PCIe more elegant than PCI and AGP, it can be made faster than both, and as discrete graphics cards become quicker and quicker, the ability to move masses of data to and from the CPU and main memory becomes increasingly important. Taking this into account, an x16 PCIe link makes for a better conduit than AGP, not only in terms of sheer bandwidth but also in its ability to run more than one graphics card in a single system (SLI and CrossFire, anyone?). The bi-directional nature of PCIe also makes it a decent candidate to take the place of a third system bus: the interconnect between chipset bridges. That's not to say the PCI/AGP mainboard architecture simply isn't working well right now; it is, but the escalation of hardware speeds is such that another interconnect system is needed, and that's PCIe in a nutshell.

General application

Intel first brought PCI-Express to consumer-level motherboards in 2004 with its 900-series chipsets. Intel also realised that whilst PCIe was better than the combined talents of the incumbent PCI and AGP buses, motherboards simply couldn't eschew PCI immediately; too many users and ASIC companies had money invested in PCI-based hardware, so the first foray into modernising the archaic PCI architecture was an exercise in compromise. Since then, NVIDIA, VIA and SiS, amongst others, have designed chipsets that take advantage of the present and future benefits provided by PCIe, and it's now a common sight on all modern chipsets for Intel's and AMD's processors. Motherboards have a fixed budget of PCIe lanes available, typically around 20, and most 'boards will carry at least one x16 slot that's usually reserved for the graphics card. The remaining lanes are designated either for inter-chipset connectivity or for the buses pertaining to hardware expansion.

In conjunction with general chipset redesign, the suitability of PCIe, in x16 form, as a graphics-card bus has seen industry heavyweights NVIDIA and ATI shift their GPUs' interfaces from AGP to PCIe. Indeed, look at any etailer's discrete graphics card catalogue and PCIe-based cards now outnumber their AGP counterparts, and the vast majority of new GPUs coming out of the fabs are packaged in PCIe form. The downside for the enthusiast is the barrier to entry for a full PCIe-based motherboard. Coming from a decent AGP-based system and wanting to 'upgrade' to PCIe, you will need to invest not only in a new motherboard but also in a new graphics card, unless you're prepared to opt for one of the few chipsets that amalgamates PCIe for discrete hardware with AGP for graphics.

Motherboard implementation

If we take a look at an Intel D955XBK as an example of a PCI-Express-based motherboard, we see that the two longish ports to the left of the northbridge are dual x16 slots, intended for use with PCIe-based graphics cards. The lane-grouping, point-to-point nature of PCIe has opened up the possibility for GPU manufacturers to design multi-GPU setups that just weren't possible with AGP. NVIDIA took the first step with its SLI technology and ATI has followed with its version, named CrossFire, and both require the presence of two x16 (lengthwise) slots. In this case, the D955XBK is a CrossFire-certified motherboard and the second x16 slot, to the left, when used concurrently with the right-hand one and two compliant ATI CrossFire-based cards, offers up some lovely multi-GPU fun.

Most motherboards with dual x16 physical slots run the second slot at x8 electrically (still 2GB/s, bi-directionally), although the newest iteration of Intel 955X chipset-based boards now offers true dual x16 support. Continuing the theme of PCIe as a means of running multiple GPUs, ATI's CrossFire and NVIDIA's SLI both require certified motherboards on which to run two or more cards concurrently; that's a company-specific certification issue rather than a limitation of PCIe itself. Motherboards with a single x16 slot, which remain the majority, will, obviously, only support a single card.

PCIe link widths span x1, x2, x4, x8 and, of course, x16, with each number representing the count of bi-directional lanes in the link. In terms of hardware expansion, an x1 slot, the small slot to the right of the three regular PCI slots, will accommodate the discrete x1 cards that are slowly coming to market. Often, motherboard designers will instead use x1 lanes to hook up a PCIe-based ASIC integrated on the board itself, usually along the lines of SATA RAID (Silicon Image Sil3132, for example), Gigabit Ethernet or FireWire.
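
To make the lane-budget idea concrete, here's a hypothetical carve-up of the roughly 20 lanes mentioned earlier; the allocation below is purely illustrative and not taken from any particular board:

# Hypothetical split of a ~20-lane chipset budget (illustrative only):
lane_budget = {
    "x16 graphics slot": 16,
    "x1 expansion slots": 2,
    "onboard SATA RAID ASIC": 1,   # e.g. a Silicon Image Sil3132 hung off a x1 lane
    "onboard Gigabit Ethernet": 1,
}
assert sum(lane_budget.values()) == 20   # every lane accounted for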

The future

The introduction of PCI-Express has remedied the bandwidth and architectural limitations imposed by the old PCI bus and AGP interface. Thanks to its multi-lane nature and its suitability as a graphics-card bus, it has also opened the way for multi-GPU goodness not seen since the days of 3dfx and its PCI bus-sharing SLI. If you're contemplating a new system, wish to build it yourself and want it to be as future-proof as possible, PCI-Express really is the only way to go.

