Optimize your IT infrastructure with software defined storage


One of the primary goals of a software defined environment (SDE) is to make data center infrastructure cloud ready. In geek speak, SDE abstracts compute, networking and storage resources in the data center as “services on demand” and makes them available to applications through a set of well-defined APIs. In doing so, SDE enables applications to provision, configure and consume data center resources through programmatic interfaces. These capabilities drive automation of data center processes, increase efficiency and business velocity, and are at the core of what makes cloud computing attractive.

Software defined storage (SDS) is an integral part of SDE. SDS redefines the storage infrastructure in the data center and enables applications to manage storage resources through a programmatic API, obviating the need for time-consuming manual operations. To better understand how SDS works, let’s take a look at today’s storage infrastructure from the point of view of the applications that use it.


Today’s storage setups consist of diverse storage systems that must be manually provisioned by system administrators according to the requirements of applications. These systems are configured to meet each application’s performance needs, re-provisioned as the application’s requirements change and eventually de-provisioned when the information is no longer needed. As storage requirements change, both in capacity and in performance, administrators must reconfigure storage to keep up: moving data back and forth across different types of storage, provisioning storage across multiple systems and allocating faster storage types like flash.

The complexity assumes an extra dimension because applications can now move around the data center (with vMotion, for example) as they are provisioned and re-provisioned. Manually tracking these movements and statically configuring storage to cater to them is extremely complex, error-prone and therefore risky. Needless to say, administration costs escalate non-linearly.

While SDS is still in its infancy in terms of definition and deployment, the first steps are quite clear: defining APIs that take the pain out of the following storage use cases:

  1. Provisioning and configuring storage according to the needs of applications (redundancy, performance, security and so on)
  2. Reconfiguring storage as application needs change, and enabling applications to move around within the data center
  3. De-provisioning storage when the need arises, while ensuring that data on de-provisioned storage remains safe
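As a concrete sketch, the three use cases above might surface to applications as a small programmatic interface. The class and method names below are hypothetical, invented purely for illustration; they are not the API of any actual SDS product.

```python
import uuid


class SdsApi:
    """Hypothetical sketch of an SDS control-plane API (illustrative only)."""

    def __init__(self):
        self._volumes = {}  # volume_id -> attributes

    def provision(self, name, size_gb, redundancy="raid1", tier="standard"):
        """Use case 1: provision and configure storage to application needs."""
        volume_id = str(uuid.uuid4())
        self._volumes[volume_id] = {
            "name": name,
            "size_gb": size_gb,
            "redundancy": redundancy,
            "tier": tier,
        }
        return volume_id

    def reconfigure(self, volume_id, **changes):
        """Use case 2: adjust capacity or performance as needs change."""
        self._volumes[volume_id].update(changes)
        return self._volumes[volume_id]

    def deprovision(self, volume_id, secure_wipe=True):
        """Use case 3: release storage, optionally scrubbing data first."""
        volume = self._volumes.pop(volume_id)
        if secure_wipe:
            volume.clear()  # stands in for securely erasing the data
        return True
```

An orchestration layer (an SDE stack, say) would call `provision` when an application is deployed, `reconfigure` when the application moves or its workload changes, and `deprovision` at teardown, with no manual administrator steps in the loop.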

These APIs must cater to the needs of integrated software defined environments that are built around cloud operating systems like OpenStack. Existing storage systems already provide many of these functions, so the first step is to ensure that these functions can be integrated and packaged as APIs that SDE stacks can consume.
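OpenStack’s Block Storage service (Cinder), for example, already exposes volume provisioning as a REST API. The sketch below only builds the JSON request body a client would POST to Cinder’s v3 volume-create endpoint; authentication and the actual HTTP call are deliberately left out, and the endpoint path shown in the comment is a simplification.

```python
import json


def cinder_create_volume_body(name, size_gb, volume_type=None):
    """Build the JSON body for a Cinder v3 'create volume' request
    (roughly POST /v3/{project_id}/volumes). Simplified sketch; a real
    call also needs a Keystone auth token in the X-Auth-Token header."""
    volume = {"name": name, "size": size_gb}
    if volume_type is not None:
        volume["volume_type"] = volume_type  # e.g. a flash-backed type
    return json.dumps({"volume": volume})
```

This is the kind of well-defined, consumable interface the post argues for: an application (or an SDE stack acting on its behalf) states what it needs, and the storage layer handles the how.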

As SDS matures, these APIs will grow into more sophisticated mechanisms that enable applications to dynamically convey their changing needs to storage systems. It is also likely that more functions done today by hosts will be pushed down into storage systems. All of this is aimed at making the job of managing storage systems much simpler and less expensive than it is today.
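What might “dynamically conveying changing needs” look like? One plausible shape is a policy function inside the SDS layer that maps an application’s declared performance target to a storage tier. The thresholds and tier names below are invented for illustration, not drawn from any real system.

```python
def choose_tier(iops_target):
    """Hypothetical policy: map an application's declared IOPS need to a
    storage tier. An SDS layer could re-run this whenever the application
    conveys a new target, migrating data between tiers automatically."""
    if iops_target >= 50_000:
        return "flash"      # latency-sensitive, hot data
    if iops_target >= 5_000:
        return "hybrid"     # mixed flash/disk pool
    return "capacity"       # cold data on inexpensive disk
```

The point is not the specific thresholds but the shift in responsibility: the application states an intent (an IOPS target), and tier selection and data movement become the storage system’s job rather than an administrator’s.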


At IBM, we are big proponents of SDE and SDS. IBM is a primary sponsor of and a major contributor to OpenStack. Many of our storage products, like SAN Volume Controller (SVC), Storwize and XIV, are SDS capable. We are enhancing SDS to include advanced functions like real-time compression, flash optimization, snapshot management and so on. Our ultimate objective is to make SDS an open and extensible platform, so that applications can not only manage it through programmatic APIs but can also extend its capabilities by adding third-party ISV software.

What do you think about software defined storage now and how it will mature?

Dr. Debanjan Saha is an innovator and entrepreneur with 20+ years of experience in the storage and networking industries. He is currently Director of Development for Storage Software and Solutions in IBM’s Systems and Technology Group. In this role he leads a worldwide team of engineers in the US, UK, Germany, China and India and is responsible for development strategy and execution for Software Defined Storage, including the Storwize family of products and file and object storage solutions.

To effectively compete in today’s changing world, it is essential that companies leverage innovative technology to differentiate from competitors. Learn how you can do that and more in the Smarter Computing Analyst Paper from Hurwitz and Associates.
