
Review: Intel Hammer 24-drive SSD storage server

by Tarinder Sandhu on 29 July 2013, 09:15

Tags: Intel (NASDAQ:INTC), Kingston, Hammer

Quick Link: HEXUS.net/qaby3z


A peek at performance, thoughts

Bear in mind that using 24 drives powered by 16-thread CPUs makes little sense when a single worker is providing minimal load to the server. The point here is that a 2U box like this can serve many more requests than a home PC run with a single drive. It is outside the remit of this technical evaluation to examine heavily-loaded performance, particularly as we have little else to compare it to, so let's focus on straight-line speed.

If you're looking for silly-high numbers from the off, ATTO reports 4GB/s writes when using a small transfer size. Tweaking the settings enables reads and writes to exceed 3GB/s on larger transfers. SiSoft SANDRA, similarly, shows stupendous, almost DRAM-like potential performance.

We can also take a look at industry-standard Iometer performance based on preset parameters for database, file server and workstation profiles.

Simulating a possible real-world scenario, our database workload consists of random 8KB transfers in a 67 per cent read/33 per cent write distribution. A single DC S3700's performance is dwarfed by the 24-drive server. Impressive as 300,000 IOPS is, we had expected the figure to be higher. Perhaps Windows Server 2008 R2 and the LSI/Intel controllers aren't as good bedfellows as they could be.
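As a back-of-envelope check, the database figure can be converted into aggregate throughput. This is our own arithmetic rather than a number from the review, and it assumes every transfer is the nominal 8KB size:

```python
# Convert an IOPS figure at a fixed block size into aggregate throughput.
# Our own sanity check, not a figure reported by the review.

def iops_to_mb_per_s(iops: int, block_size_kb: int) -> float:
    """Aggregate throughput in MB/s for a given IOPS figure and block size."""
    return iops * block_size_kb / 1024  # KB/s -> MB/s

throughput = iops_to_mb_per_s(300_000, 8)
print(f"{throughput:.0f} MB/s")  # ~2344 MB/s, i.e. roughly 2.3GB/s of 8KB I/O
```

Roughly 2.3GB/s of random 8KB traffic is still well short of what 24 drives could manage in aggregate, which is consistent with the suspicion that the controllers, not the SSDs, are the limit.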

The file server workload consists of mixed-size file transfers. There's a hefty increase in performance - evidenced by the single drive's near-'flatline' results by comparison - with the 24-drive array's IOPS consistently hitting 250,000 as queue depth is increased.

More of the same in the workstation test. High IOPS means excellent responsiveness to a large number of requests. We believe it's possible to obtain higher numbers than those shown by adjusting the RAID setup and using a different operating system.

Thoughts

Our first brief look at a storage server shows just how much capacity can be crammed into a 2U rack-mounted chassis. The 800GB Intel DC S3700 SSD may seem expensive at first glance, at £1,500 a pop, but it's considerably cheaper than, say, the SAS drives that regularly populate this space.

The investigation into performance of high-capacity servers is an interesting and relevant area, more so as the demands of cloud computing and storage increase. We'll be taking a closer look at subsequent servers using different SSDs and operating systems - variations of Linux - in the near future. For now, we'd like to thank Hammer, Intel and Kingston for showcasing how the Intel DC S3700 SSD fits into the enterprise space.



HEXUS Forums :: 14 Comments

just can't quite justify this for my house, but I really want to.

Would have been interesting to see the RAID5/6/10/50 configuration performance, but it seems that time was not permitting on this occasion. Of course I would also like to see what this does when hooked up to an apple based system for video editing, or a *nix system as a db…
andykillen wrote: "just can't quite justify this for my house, but I really want to."
Can't quite justify £45,000 on a computer for your house?!? That's half the value of my house!
I'd rather have a whiptail :)
Typo on the first page, says Server 2008 RC2 instead of R2.

Shame more tests weren't done with this, really, it does seem like a silly amount of performance.
I'd take a guess 300,000 IOPS is limited by the RAID cards and system rather than the drives - as with many things, 24x drives is never going to give anywhere near 24x the performance; much is lost in RAID processing overheads. RAID 0 was a totally useless configuration to test when the underlying disks are so fast and its interface to the world is only 4x 1Gb NICs (i.e. 500MB/s - a couple of SSDs could saturate that, never mind 24 of them). Almost nobody would actually seriously use the array in that configuration…
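The commenter's bottleneck arithmetic holds up; as a quick illustration (our own sketch, not part of the original post), converting the aggregate NIC bandwidth to MB/s:

```python
# Aggregate network bandwidth of four 1Gb/s NICs, expressed in MB/s.
# Uses decimal units (1 Gb = 1000 Mb, 8 bits per byte), matching the
# commenter's ~500MB/s figure. Our own sanity check.

def gbit_to_mb_per_s(gbits: float) -> float:
    """Convert link bandwidth in Gb/s to MB/s (decimal units)."""
    return gbits * 1000 / 8

nic_limit = gbit_to_mb_per_s(4 * 1.0)
print(f"{nic_limit:.0f} MB/s")  # 500 MB/s - a couple of fast SSDs could fill this
```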

I'd be tempted to configure a RAID 50/60 if supported, with 2-4 hot spares - would be nice to have seen the performance of that. Whereas in a 24-disk array with spinning disks I'd have to use 600-900GB 10/15K disks in RAID 10 to get higher performance, I might be able to use 400GB SSDs in a RAID 50 and still get better performance at closer drive prices.

24x 600GB RAID 10 excluding 2 hotspares = ~6.6TB usable (600GB 15K drives are ~£300 each)
24x 400GB RAID 50 excluding 4 hotspares = ~7.2TB usable (~£775 each for SC700 400GB)

So 2.5x the cost per drive rather than 5x (for the 800GB) creates similar capacity and probably still much faster and maybe even still controller/NIC limited - which would have been a great test to run.

However, performance aside, this unit isn't really enterprise grade in my opinion - not for any critical front-line “live” role anyway (e.g. VMware shared storage). It's very disappointing to see the OS drive is a single point of failure, and that it's non-hotswap as well is unforgivable. A proper SAN like an EqualLogic or HP Px000 is in the same price range and has redundant hot-swap controllers, more network interfaces (maybe 10Gb) and a tightly focused embedded OS rather than Windows.