There’s been a lot of discussion in the last few days about the tiering model for disk storage, which uses solid-state storage, high-speed Fibre Channel disks, and low-speed SATA disks to deliver consistent performance to different applications on a shared array, and about NetApp’s suggestion that tiering is dead, to be replaced by a single layer of SATA disks fronted by a large cache.
To me, a cache is a relatively small but very fast storage area that holds data only temporarily while it’s being worked on.
However, I know a good number of people wouldn’t agree with this simple distinction between a cache and a fast tier, so I’d offer tests along these lines:
- If I can move data into the fast storage in advance of it being read from the slower disk, it’s a tier.
- If data permanently resides on the fast storage, with a copy on slower disk only used as a backup in case of hardware failure, it’s a tier.
- If classification rules keep the data in the fast storage area even when that area is full, rather than letting it be evicted, it’s a tier.
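Those three tests boil down to a simple check. Here’s a rough sketch; the FastStorageEntry attributes are invented purely for illustration and aren’t taken from any real array’s software:

```python
# A rough sketch of the three "it's a tier" tests above. All attribute
# names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class FastStorageEntry:
    promoted_before_read: bool   # data was moved up in advance of being read
    primary_copy_here: bool      # the slow-disk copy exists only for hardware-failure recovery
    pinned_by_policy: bool       # classification rules keep it here even when space is tight

def is_tier(entry: FastStorageEntry) -> bool:
    """If any of the three tests hold, the fast storage is acting as a tier;
    otherwise it's behaving like a cache."""
    return (entry.promoted_before_read
            or entry.primary_copy_here
            or entry.pinned_by_policy)
```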
That said, I agree that manual “tiering” has a limited life-span, and I certainly hope it goes away soon, to be replaced with policy-based decisions made by the array management software.
Having a “Tier1 (Flash) -> Tier2 (15K RPM SAS/FC) -> Tier3 (7.2K RPM SATA)” model doesn’t work as well in the new structures of IT delivery, where things can change on a daily or weekly basis.
Instead, I think a model of “High, Medium and Low Priority” and “High, Medium and Low Reliability”, applied to the data belonging to specific applications and changeable dynamically, works much better.
Simplistic examples could be:
- Production Oracle Database – High Priority, High Reliability
- Images for SharePoint Server – Low Priority, Low Reliability
But slightly more complicated policies like this one should be equally easy to use:
- Oracle Database – High Priority during working hours (9-5 Mon-Fri), Med Priority otherwise, High Reliability 24×7
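To make that concrete, here’s a minimal sketch of how the Oracle policy above might be expressed, assuming a hypothetical policy engine; the StoragePolicy structure and its field names are invented for illustration, not any vendor’s API:

```python
# A hypothetical policy record with a time-based priority override.
# Only the Oracle example's values come from the text; everything else
# is an assumption made for the sketch.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class Priority(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

class Reliability(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class StoragePolicy:
    application: str
    default_priority: Priority
    reliability: Reliability
    # Optional override: (weekdays, start_hour, end_hour, priority)
    priority_window: Optional[tuple] = None

    def priority_at(self, when: datetime) -> Priority:
        """Return the priority that applies at a given point in time."""
        if self.priority_window:
            weekdays, start, end, prio = self.priority_window
            if when.weekday() in weekdays and start <= when.hour < end:
                return prio
        return self.default_priority

# The Oracle example: High Priority 9-5 Mon-Fri, Medium otherwise,
# High Reliability around the clock.
oracle_policy = StoragePolicy(
    application="Production Oracle Database",
    default_priority=Priority.MEDIUM,
    reliability=Reliability.HIGH,
    priority_window=({0, 1, 2, 3, 4}, 9, 17, Priority.HIGH),
)

print(oracle_policy.priority_at(datetime(2024, 3, 4, 10)))  # Monday 10:00 -> HIGH
print(oracle_policy.priority_at(datetime(2024, 3, 9, 10)))  # Saturday -> MEDIUM
```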
Once we’ve got these kinds of policy-based management tools, the method the array uses to achieve them becomes fairly irrelevant; the only thing left to work on would be the target SLAs you’d want the array to meet, something like:
- High Priority = 0.01ms Response time
- Medium Priority = 0.5ms Response time
- Low Priority = 5ms Response time
- High Reliability = 99.99% Data Availability
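As a sketch of what that could look like from the management side, the targets above might be captured as plain data that the array is left to meet however it sees fit. Only the numbers come from the list; the SlaTarget structure and dictionary keys are invented for illustration:

```python
# SLA targets expressed as data rather than as manual tier placement.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SlaTarget:
    max_response_ms: Optional[float] = None       # latency ceiling for a priority class
    min_availability_pct: Optional[float] = None  # availability floor for a reliability class

SLA_TARGETS = {
    "high_priority":    SlaTarget(max_response_ms=0.01),
    "medium_priority":  SlaTarget(max_response_ms=0.5),
    "low_priority":     SlaTarget(max_response_ms=5.0),
    "high_reliability": SlaTarget(min_availability_pct=99.99),
}

def meets_latency_target(priority: str, measured_ms: float) -> bool:
    """Check a measured response time against the target for its class."""
    target = SLA_TARGETS[priority].max_response_ms
    return target is None or measured_ms <= target
```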
This probably isn’t going to happen very quickly, but I hope it does, and I look forward to it.