Review: NVIDIA's nForce4

by Ryszard Sommefeldt on 19 October 2004, 00:00

Tags: NVIDIA (NASDAQ:NVDA)

Features Overview - New disk controller

The number of times I've heard "nForce4 is just nForce3 with PCI Express" recently is bordering on the insane. At its very most basic level that's true, but there's a fair bit more to it than that. The first thing worth covering is the new disk controller.

Disk Controller

nForce3 250's disk controller was a 2/4 port SATA150 controller with support for a pair of PATA (UDMA-133) channels. The 2/4 port SATA designation means it had two SATA ports natively on the bridge, with support for two more using external PHY (physical interface) circuitry.

RAID was possible on the 250's SATA and PATA controllers, with 0, 1, 0+1 and JBOD RAID levels available natively in hardware. The disk controller on nForce3 250 had no support for command queueing, unlike Intel's ICH6 and controllers from the likes of Promise and Silicon Image.

nForce4 takes that existing controller and throws it in the bin. Starting from the ground up with the next generation of SATA technologies, NVIDIA have crammed quite the disk controller into the bridge silicon. Here are the basics.

Disk controller - NVIDIA-supplied diagram

SATA300

SATA150 means maximum transfer speeds per channel of 150MB/sec. With a 1500MHz signalling clock for SATA150 and a single one-bit transfer per clock, that's 1500Mbit/sec of possible bandwidth. SATA transfers give up 20% of that bandwidth to the data encoding and correction that keeps the signal correct, leaving 1200Mbit/sec. Eight bits to a byte and you've got 150MB/sec (1200 / 8) of maximum bandwidth from SATA150.

The second generation of SATA interfaces, something NVIDIA calls SATA 3Gb/sec, is implemented in nForce4. More likely to pop up as SATA300, that's simply a 3000MHz clock for the data stream. SATA embeds the clock tick in the data stream, allowing the devices on both ends of a SATA channel (the host controller on one end, your disk on the other) to synchronise with each other while the channel and data stream are idle.

3000MHz at one bit per clock is 3Gb/sec. Using the same 20% of the bandwidth for stream timing and correction, that's 300MB/sec of maximum bandwidth, and that's per channel too. With four channels available natively on nForce4 (see the diagram above; each controller supports two), each with a full 300MB/sec link between the disk and the host, it's the fastest four-port SATA controller implemented in consumer hardware to date.
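
If you want to sanity-check those numbers, the arithmetic fits in a few lines of Python. This is just a back-of-the-envelope sketch of the figures above, assuming the 20% encoding overhead applies equally to both generations.

# Back-of-the-envelope SATA bandwidth figures; the 20% lost to data
# encoding and correction is taken from the explanation above.
def sata_bandwidth_mb_per_sec(line_rate_mhz):
    raw_mbit = line_rate_mhz              # one bit per clock tick
    payload_mbit = raw_mbit * 0.8         # strip the 20% encoding overhead
    return payload_mbit / 8               # eight bits to a byte

print(sata_bandwidth_mb_per_sec(1500))    # SATA150 -> 150.0 MB/sec
print(sata_bandwidth_mb_per_sec(3000))    # SATA300 -> 300.0 MB/sec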

Command Queueing

The biggest buzzword in SATA gets full support in nForce4's disk controller. With a supporting disk, or disks, nForce4 knows (at its most basic) how to reorder the commands sent to the disk to take advantage of the rotational nature and speed of current hard disks. Say the controller wants four pieces of data, each at a different spot on the disk, and they won't pass under the disk's read heads in the order 1-2-3-4. If they're going to arrive at the read heads in the order 2-4-1-3, the host controller on nForce4 will ask the disk for the data in that order, enhancing performance.
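
To make the reordering idea concrete, here's a toy model of it in Python. It's my own simplification, not NVIDIA's controller logic, and the head position and request list are invented numbers chosen purely to reproduce the 2-4-1-3 example above.

# Toy model of command reordering: serve requests in the order their data
# will pass under the heads, not the order they were issued. All values
# are invented for illustration.
head_position = 0.10                                # current head position, as a fraction of one rotation
requests = {1: 0.60, 2: 0.20, 3: 0.90, 4: 0.35}     # request id -> position of its data on the platter

def rotational_distance(sector_pos):
    # How far the platter must turn before this data reaches the heads.
    return (sector_pos - head_position) % 1.0

service_order = sorted(requests, key=lambda rid: rotational_distance(requests[rid]))
print(service_order)                                # [2, 4, 1, 3]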

RAID, with a twist

The same RAID levels as the controller on nForce3 250 are supported, but what you also get is hot spare capability. Think about hot spares like this. You build a RAID0+1 array (striping and mirroring, for performance and data security) on the controller using all four SATA ports. On the PATA controller you assign a hot spare disk. It sits idle, and you lose the capacity it would otherwise add to your arrays. If any disk in your RAID0+1 array fails, the spare disk on the PATA port steps into the breach to patch up the damage. The controller's driver then notifies the user, graphically, which disk on which particular port has failed, allowing you to replace it. As soon as you replace the failed disk, the array is rebuilt onto the replacement and the hot spare that stepped in returns to being a spare, protecting your arrays again. All of that is done on the fly where possible, transparently to the user.

The hot spare disk can only rebuild RAID 1 or RAID0+1 arrays, arrays that have some degree of fault tolerance built in by default, so if a disk in your RAID0 array fails, you're still out of luck, but the capability is there.
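
As a rough sketch of the failover logic described above (my own simplification, not NVIDIA's driver; the array levels and port names are invented for illustration):

# Heavily simplified hot-spare failover, invented for illustration only.
FAULT_TOLERANT = {"RAID1", "RAID0+1"}

def on_disk_failure(level, disks, failed_disk, hot_spare):
    if level not in FAULT_TOLERANT:
        return "array lost - RAID0 has no redundancy to rebuild from"
    # The spare steps in for the failed disk and the array rebuilds onto it.
    disks[disks.index(failed_disk)] = hot_spare
    return f"rebuilding onto spare {hot_spare}; replace {failed_disk} to restore the spare"

print(on_disk_failure("RAID0+1", ["sata0", "sata1", "sata2", "sata3"], "sata2", "pata0"))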

That scenario is available across any of the native nForce4 disk controller ports, PATA or SATA, that your nForce4 mainboard supplies. You can also set up RAID arrays across controllers: RAID any combination of disks on either the PATA or SATA controllers, boot from any of those arrays after installing your OS on it, and reconfigure the array from within Windows if you wish. NVIDIA have built it to be as flexible as possible.

Reconfiguring your arrays

Imagine you have a RAID0 array set up using two SATA ports on the controller. You start storing critical data on that array and then realise that RAID0 doesn't exactly protect your data if a disk fails. Using a RAID0 array rather than a single disk actually doubles your chances of data loss for a given data volume: there are now two points of failure for that volume rather than just one. Assuming each disk in the array is just as likely as the other to fail, there's an element of risk to storing your data on a RAID0 array.
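
To put a rough number on that, assume each disk has a 2% chance of failing in a given year (an invented figure, purely for illustration):

# Illustrative only: a made-up 2% annual failure rate per disk.
p_single = 0.02
p_raid0 = 1 - (1 - p_single) ** 2        # the stripe is lost if either disk fails
print(p_raid0)                           # 0.0396 -> roughly double the single-disk risk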

So you want to turn that into a RAID0+1 array, with two more disks mirroring the RAID0 array. On most other controllers, and especially ones on the mainboard, whether part of the southbridge or a cheap external controller, you can't add the mirror disks without destroying the array; you lose your data. NVIDIA's new storage controller on nForce4 allows you to add the mirror disks and change the array type from inside the OS, without downtime, transparently to the OS and applications, and while keeping your data. Want to revert back to a plain RAID0 array at some point and use those two other disks for something else? That's fine too.
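
Conceptually, the migration amounts to copying every block of the existing stripe onto a newly added mirror disk while the array stays online. The sketch below is entirely my own illustration of that idea; the real work happens in the controller and its driver, not in Python.

# Conceptual sketch of turning a live RAID0 stripe into RAID0+1 by mirroring
# each existing disk onto a newly added one. Invented for illustration only.
stripe = {"sata0": ["blockA", "blockC"], "sata1": ["blockB", "blockD"]}
mirrors = {"sata0": "sata2", "sata1": "sata3"}       # new disks paired with existing ones

def add_mirror(stripe, mirrors):
    mirrored = {mirrors[disk]: list(blocks) for disk, blocks in stripe.items()}
    return {**stripe, **mirrored}                    # the data now lives on both halves

print(add_mirror(stripe, mirrors))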