Thoughts on Iomega IX4-200d performance tests

There’s been an excellent blog post overnight on the performance of the Iomega IX4-200d disk array, one of the cheapest (if not the cheapest) VMware certified iSCSI capable disk arrays available.

I’m a big fan of the Iomega IX4-200d and I’ve seen them used to good effect in various situations, so I was interested to see what happens when you push the iSCSI functionality to the edge of its performance.

Executive Summary – The IX4-200d is still an excellent NAS device for SMBs, but these tests suggest that when workloads are highly random and the box is pushed to the limit, rather than handling the situation gracefully it slows to a crawl. The problem may be configuration, iSCSI, RAID5 or firmware related; we won’t be able to tell without more tests.

After reading through the post, I had a few questions about how close the IX4-200d was running to the limit of a 4 disk SATA array, so I went off to figure them out using the figures from Gabe’s Virtual World post and a recent Yellow Bricks post on the impact of RAID on disk IOs, which saved me from any hard maths.

Gabe helpfully listed out the disks used (Seagate Barracuda 7200.11 1TB 7200RPM drives ST3100520AS), that write cache was enabled, the server is connected via iSCSI, and all 4 disks were in a RAID5 array.

I’ve taken a quick look on the Seagate site, and while they don’t list that exact model number, the Barracuda 7200.11 range is listed, and I’d expect around 75 IOPS per disk based on their own specifications, which is fairly typical for a 7200 RPM SATA drive. (Update – Gabe’s let me know that the model number was wrong; the correct one is ST3100520AS, which is a 5400 RPM drive, so 50 IOPS is more likely.)
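As a rough sanity check on those per-disk figures, IOPS for a rotating drive can be estimated from average seek time plus average rotational latency. The seek times below are typical ballpark values for consumer SATA drives of that era, not figures taken from the Seagate data sheet, so treat this as a sketch only.

```python
# Rough per-disk IOPS estimate: 1 / (average seek time + average rotational latency).
# Seek times are assumed typical values for consumer SATA drives, not Seagate's published specs.

def estimate_iops(rpm, avg_seek_ms):
    rotational_latency_ms = (60000.0 / rpm) / 2  # half a revolution on average
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

print(round(estimate_iops(7200, 8.5)))   # ~79 IOPS - in line with the 75 IOPS figure above
print(round(estimate_iops(5400, 12.0)))  # ~57 IOPS - close to the 50 IOPS estimate for the 5400 RPM drive
```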

I had 2 questions about the IX4-200d performance – is the caching working, and is RAID5 impacting performance of the box to such an extent that you’d only want to run in RAID10?

Gabe ran 4 initial IOmeter tests, which gave me the bulk of the information I wanted.

Test 001a covers 100% sequential read access of the drives, in theory telling us how fast the array can possibly run. The result of 55MB/sec isn’t great, but an IOPS figure of 1761 is extremely high – given that the drives themselves can only deliver around 75-100 IOPS each, 1761 is obviously a sign that the read cache is doing its job. As I say though, 55MB/sec isn’t great; a single Seagate Barracuda 7200.11 would be expected to deliver more than that on its own, indicating there’s some kind of limiting factor outside the disks – either the iSCSI implementation or something else, possibly network related such as a slow switch being used.
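To put those numbers in context, here’s a quick back-of-the-envelope comparison, assuming the 75-100 IOPS per spindle discussed above (my rough assumptions, not measured figures):

```python
# Compare the measured 1761 IOPS against what the 4 bare spindles could sustain,
# and work out the implied IO size from the throughput figure.
# Per-disk IOPS range is an assumption based on typical 7200 RPM SATA drives.

disks = 4
per_disk_iops = (75, 100)
measured_iops = 1761
measured_mb_per_sec = 55

raw_iops_range = [disks * iops for iops in per_disk_iops]
print(raw_iops_range)                                     # [300, 400] - the most the bare disks could deliver
print(round(measured_iops / max(raw_iops_range), 1))      # ~4.4x that, so most reads must be coming from cache
print(round(measured_mb_per_sec * 1024 / measured_iops))  # ~32KB per IO, consistent with a throughput-style test
```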

Test 001b is 65% read, 35% write, partly sequential and partly random – the “real-life” test. The MB/sec result falls through the floor here, down to just 0.69MB/sec, indicating something is up – either the write cache isn’t turned on, isn’t working, or the sheer load of IOs being generated by IOmeter is causing the box to essentially collapse. I’d be interested to see this test re-run with the volume of IOPS ramping up slowly over time so we can see whether this is the case. Using the figures from Yellow Bricks’ RAID overhead post, 89 IOPS at 35% write turns into around 60 physical read IOPS on the disks and around 100 write IOPS because of the RAID5 overhead. 25 writes per second per disk isn’t too bad for a SATA drive, but it’s not good either. This result definitely suggests something isn’t working right on the IX4-200d for some workloads.
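Here’s a minimal sketch of how the back-end figures fall out, assuming the usual rule of thumb that each front-end RAID5 write costs 4 back-end IOs (read data, read parity, write data, write parity). Depending on how you count the penalty the exact numbers come out slightly above the rounded figures quoted, but the ballpark is the same:

```python
# Back-end IOPS for a RAID5 array, using a write penalty of 4 back-end IOs per front-end write.
# Front-end figures are taken from Gabe's test 001b results; the penalty value is the common rule of thumb.

front_end_iops = 89
write_fraction = 0.35
raid5_write_penalty = 4
disks = 4

backend_reads = front_end_iops * (1 - write_fraction)                   # ~58 read IOs hitting the spindles
backend_writes = front_end_iops * write_fraction * raid5_write_penalty  # ~125 back-end IOs from the writes
print(round(backend_reads), round(backend_writes))
print(round(backend_writes / disks))  # ~31 writes/sec per disk - busy, but within what a SATA drive can handle
```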

Test 001c is 50% read, 50% write, but all sequential unlike Test 001b, so this should clarify whether the issue is write performance, overloading of IOPS, or random vs sequential workloads causing the slowdown. The result of 22MB/sec and 705 IOPS is massively improved over test 001b, which does suggest it’s the “random” workload that causes the IX4-200d to slow right down. The caching obviously works much better for sequential access, which isn’t unexpected, though the size of the difference is a little surprising. 705 IOPS is again definitely higher than I’d expect the 4 SATA drives to return, so the caching is working well. 22MB/sec for test 001c compared to 55MB/sec for test 001a also implies that sequential writes happen at a much lower speed than reads (which Gabe does cover in a later test, the “Super ATTO Clone pattern”).

Test 001d is the final IOmeter test, this time 70% read, 30% write, 100% random. Given my earlier comments on test 001b, I’d expect these results to be even worse, and so it seems – 0.5MB/sec and 64 IOPS suggest that with random workloads the IX4-200d simply isn’t coping; the average IO response time rises to 913ms and the maximum IO response time hits 12127ms. These figures simply aren’t workable, and suggest there’s something up with the IX4-200d under high volume random workloads – high volume sequential loads like test 001c produced a maximum response time of 252ms at far higher levels of write throughput.
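One way to sanity-check those response times is Little’s law (outstanding IOs = IOPS × average response time). I don’t know what queue depth Gabe configured in IOmeter, so this is purely an illustration that the figures are self-consistent with a queue that stays permanently full:

```python
# Little's law: average number of IOs in flight = throughput x average response time.
# The IOmeter queue depth isn't known from the post; this just shows what the results imply.

iops = 64
avg_response_s = 0.913  # 913ms average response time from test 001d

outstanding_ios = iops * avg_response_s
print(round(outstanding_ios))  # ~58 IOs backed up against the array on average
```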

To skip a couple of tests in Gabe’s testing, we finally come to the “Super ATTO Clone pattern”, which attempts to discover the maximum performance achievable by a disk by varying block sizes while performing reads and writes. The optimal figures produced are 41MB/sec read and 9.7MB/sec write at large block sizes (64K and above), but the 8K block size results of 34MB/sec read and 9.2MB/sec write are very respectable, and what I’d expect the IX4-200d to be delivering.

In conclusion, it seems to me that there’s something broken with the IX4-200d in iSCSI mode with RAID5 and highly random workloads. Gabe is going to re-run his tests in NFS mode and see what difference that makes, but I’d also like to see the same tests run in RAID10 mode to see if it’s RAID5 that’s causing the issue – with 2TB drives available, RAID10 would still give you 4TB of usable disk space on the IX4-200d.
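For anyone weighing that trade-off, here’s a quick sketch of the usual capacity and write-penalty arithmetic for a 4 x 2TB box (generic RAID rules of thumb, not Iomega-specific figures):

```python
# RAID5 vs RAID10 on a 4 x 2TB array: usable capacity and write penalty, using standard rules of thumb.

disks, drive_tb = 4, 2

raid5_usable_tb = (disks - 1) * drive_tb    # 6TB usable, 4 back-end IOs per front-end write
raid10_usable_tb = (disks // 2) * drive_tb  # 4TB usable, 2 back-end IOs per front-end write

print(raid5_usable_tb, raid10_usable_tb)
```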

The Iomega IX4-200d is still an excellent NAS device, but these tests have made me reconsider where it could be used. It might be that NFS or RAID10 works much better, but otherwise these results suggest you’re probably best off not using the IX4-200d for highly random workloads.

Update 31/12/2009 – over at blog.storming there’s a follow-up post running similar benchmarks with SSDs instead of SATA drives, with some interesting results.