Why pay more for expensive SSD in your storage?

New technologies and technical breakthroughs are fascinating! In the 1980s, carbon became a material of choice for brake systems in the aerospace industry. During this period some people wanted the same technology in their cars, and technically speaking it was possible, but for economic reasons car manufacturers never developed it.


Today, storage performance is being boosted by new solid-state drive (SSD) technology, and many people see it as the way forward. The problem, though, is the price of these parts (much like carbon in the ’80s), so the right trade-off between performance and price is not easy to find. Nobody can imagine using terabytes and terabytes of SSD in a monolithic approach, because the cost would be prohibitive for a large-scale project.

One interesting approach is to reserve expensive, high-performing SSD technology for high-value data where performance is key, while data with less value is placed on traditional disks. Many solutions exist for manually placing this critical data on SSDs. Is such an approach sufficient for cloud architectures, where data movement is constant and manual intervention must be minimal?

This manual limitation can be removed with IBM’s automatic Easy Tier feature.

IBM Easy Tier

IBM System Storage Easy Tier is a function that responds to the presence of SSDs in a storage pool that also contains hard disk drives (HDDs). The system automatically and non-disruptively moves frequently accessed data from HDD MDisks to SSD MDisks, thus placing such data in a faster tier of storage.

Easy Tier eliminates manual intervention when assigning highly active data to faster responding storage. In this dynamically tiered environment, data movement is seamless to the host application regardless of the storage tier in which the data resides.
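To make the idea concrete, here is a minimal Python sketch of heat-based extent promotion. It is not IBM’s actual Easy Tier algorithm, which is far more sophisticated; the function name, the extent IDs and the monitoring window are illustrative assumptions only.

    # Conceptual sketch of heat-based tiering (not IBM's Easy Tier algorithm):
    # extents with the highest recent I/O counts are promoted to the SSD tier
    # until the SSD capacity is used up.

    from collections import Counter

    def plan_promotions(io_counts: Counter, ssd_capacity_extents: int) -> set:
        """Return the set of extent IDs that should live on the SSD tier.

        io_counts maps extent ID -> I/O operations observed in the last
        monitoring window; ssd_capacity_extents is how many extents fit
        on the SSD MDisks.
        """
        hottest = io_counts.most_common(ssd_capacity_extents)
        return {extent_id for extent_id, _ in hottest}

    # Example: eight extents of history, room for two extents on SSD.
    history = Counter({"e0": 5, "e1": 900, "e2": 12, "e3": 640,
                       "e4": 3, "e5": 88, "e6": 2, "e7": 51})
    print(plan_promotions(history, ssd_capacity_extents=2))  # {'e1', 'e3'}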

The following video shows this feature in action in the IBM Storwize V7000 storage system:

Sizing of the solution

Now, assuming this movement is automatic and can dramatically improve performance at a reasonable price, the remaining question is: how many SSDs are needed in my classical disk configuration to achieve the goal?

A rule of thumb is that somewhere between 5 and 15 percent of the total configuration would need to be SSDs, depending on the input/output (I/O) profile.
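As a quick illustration of that rule of thumb, here is a back-of-the-envelope helper. The 5 to 15 percent band is the only number taken from above; the function name and the io_intensity knob are illustrative assumptions of mine, not part of any IBM sizing tool.

    # Back-of-the-envelope sizing from the 5-15 percent rule of thumb.
    # The io_intensity knob (0 = cold/sequential, 1 = hot/random) is an
    # illustrative assumption; real sizing still needs a simulation run
    # against measured I/O profiles.

    import math

    def ssd_count_estimate(total_drives: int, io_intensity: float) -> int:
        """Estimate how many of the drives should be SSDs."""
        fraction = 0.05 + 0.10 * max(0.0, min(1.0, io_intensity))
        return math.ceil(total_drives * fraction)

    print(ssd_count_estimate(144, io_intensity=0.2))   # 11 drives
    print(ssd_count_estimate(144, io_intensity=0.8))   # 19 drives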

Let’s try to illustrate it with an example. In this case I used a simulator tool to evaluate the right storage configuration for one of my customers. From their existing data and I/O profiles I was able to determine the required volume and the corresponding behavior of the box. This is summarized by the following curves, which show the service response time in milliseconds (ms) against the I/O rate (input/output operations per second, IOPS).

Service time diagram: response time (ms) versus I/O rate (IOPS) for the simulated configurations

I determined that roughly 144 serial-attached SCSI (SAS) disks were needed to meet the volume requirement. In this configuration with no additional SSDs (blue curve), the storage can accept up to 25,000 IOPS with an acceptable response time of, let’s say, 5 to 7 ms. Beyond 25,000 IOPS, it gets dangerously close to the classical “avalanche” catastrophic behavior.

How can this box accept more IOPS while staying on the 5 to 7 ms plateau?

A classical way is to increase the number of spindles. If, for example, you double the number of disks (288 instead of 144 in this case), you double the number of arms and can expect to double the limit to 50,000 IOPS.
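That statement hides a simple proportional model, sketched below. The per-spindle figure is nothing more than the example’s 25,000 IOPS ceiling divided by its 144 SAS drives, so treat it as an illustration rather than a planning number.

    # Proportional spindle model implied above: the IOPS ceiling scales with
    # the number of arms. The per-spindle figure is simply the example's
    # 25,000 IOPS ceiling divided by its 144 SAS drives.

    IOPS_PER_SAS_SPINDLE = 25_000 / 144   # roughly 174 IOPS per drive

    def hdd_only_ceiling(spindles: int) -> float:
        """Rough IOPS ceiling for an HDD-only pool of `spindles` drives."""
        return spindles * IOPS_PER_SAS_SPINDLE

    print(round(hdd_only_ceiling(144)))   # 25000
    print(round(hdd_only_ceiling(288)))   # 50000 - doubling arms doubles the ceiling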

A new and interesting way is to use SSD like salt in a dish—a small quantity will help a lot!

In this example I added 6 (pink curve) or 12 (green curve) SSDs. Using the Easy Tier feature, the ceiling improves from 25,000 IOPS to 37,000 IOPS with 6 SSDs, and to 62,000 IOPS with 12 SSDs.

In other words, the improvement is (see the quick calculation after the list):

  • With 6 additional SSDs: 37,000 / 25,000 ≈ 1.5
  • With 12 additional SSDs: 62,000 / 25,000 ≈ 2.5
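The quick calculation behind those ratios, using the IOPS ceilings read off the simulated curves (the factors are rounded to one decimal place):

    # Improvement factors from the simulated IOPS ceilings. Note how far
    # from linear the gain per SSD is, which is why real sizing needs a
    # simulation rather than extrapolation.

    baseline_iops = 25_000                    # 144 SAS drives, no SSD
    ceilings = {6: 37_000, 12: 62_000}        # SSDs added -> new IOPS ceiling

    for ssds, ceiling in ceilings.items():
        print(f"{ssds:2d} SSDs: {ceiling / baseline_iops:.1f}x")
    # Output:
    #  6 SSDs: 1.5x
    # 12 SSDs: 2.5x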

The algorithm is not linear, so simulations need to be performed with real data to tune the configuration.

By the way, clients love this feature because we can predict the behavior of their architecture with good accuracy. It is safer to buy a few more SSDs for a challenging project than to double the configuration with standard disks.

Again, this type of feature is really helpful in meeting today’s cloud storage goals. Ask IBM about its Smarter Infrastructure portfolio to help you find smarter storage solutions!


Philippe Lamarche has been an IBM Systems Architect in the hardware division (STG) since 1995, working with French industry customers and system integrators. He has spent over 30 years at IBM in different technical positions. In his presales technical role he is a Certified IT Specialist at the expert level. You can reach him on Twitter: @philip7787.
