Editor’s Note: We continue our Smarter Computing Breakthroughs series this week with a post on Image Management from Keith Smith, Senior Technical Staff Member (STSM)
and WebSphere Virtual Enterprise Chief Architect. The Smarter Computing Breakthroughs series will help introduce you to the key technological breakthroughs that are the unsung heroes of the IT infrastructure that enables a Smarter Planet. You can find links to previous Breakthroughs posts at the bottom of this post.
Virtualized infrastructures, and particularly private clouds, have really gained momentum in enterprise IT as optimized service delivery platforms. That reputation is well deserved, and virtual image management plays a large part in earning it.
Why is image management so critical? Just consider the fact that for a virtualized infrastructure such as a cloud, images are really fundamental building blocks — logical pieces out of which services are created and scaled to suit customer demand and fulfill business strategies.
It follows that the more efficiently and effectively virtual images are managed, at every stage in their lifecycles, the bigger the payoff virtualization will generate, regardless of what sort of virtualized infrastructure you have.
Many organizations, however, have found that in shifting to virtualization for its new benefits, they also run into new challenges. Images tend to proliferate, for instance — making it hard to know which ones should be used for which purposes, what they include and how best to combine them to fulfill complex workloads.
Simply creating virtual servers and populating them with images — however quickly that may happen — really isn’t good enough. For business applications to deliver their full potential, another layer of capabilities is needed: one that ensures the complete environments in which applications live and breathe are tailored as closely as possible to those applications, and to the unpredictable demand for them.
Contextual insight is key to performance management
Let’s get a little more specific.
Because application usage constantly fluctuates, it’s important for a cloud (or any virtualized infrastructure, really) to be able to assess what’s really going on inside a virtual machine (VM). For instance, what is an application’s peak utilization? What percentage of the time is it hitting that peak, and what resources does it require at that point? These are difficult questions for many organizations to answer. But without knowing the answers, it’s very difficult to make sure applications are really paying off in the intended manner.
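The utilization questions above can be answered from sampled metrics. Here is a minimal, hypothetical sketch — the function name and the sample data are illustrative, not part of any IBM product API:

```python
# Compute peak utilization and the fraction of time spent near that peak
# from a series of sampled CPU readings (illustrative data).

def summarize_utilization(samples, near_peak_margin=0.10):
    """Return (peak, fraction of samples within `near_peak_margin` of peak)."""
    peak = max(samples)
    threshold = peak * (1 - near_peak_margin)
    near_peak = sum(1 for s in samples if s >= threshold)
    return peak, near_peak / len(samples)

# Hourly CPU readings (percent) from a hypothetical VM:
readings = [35, 40, 88, 92, 90, 41, 38, 36]
peak, frac = summarize_utilization(readings)
print(f"peak={peak}%, time near peak={frac:.0%}")  # → peak=92%, time near peak=38%
```

Knowing that an application hits 92 percent utilization, but only 38 percent of the time, is exactly the kind of insight that lets resources be allocated to the real need rather than the worst case.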
Then there are related issues like the operational context, what kind of action is needed as a result and how best to take that action. Suppose that a given IT service depends on a particular application running in a dozen different virtual servers. Suppose also that the service’s total performance has declined. How hard (or easy) is it for IT to figure out which VM is the problem child, then intelligently stop and restart the problematic VM to bring service levels back to what they should be?
You can see that performance management of the kind organizations need today means being able not only to recognize such problems and respond to them effectively, but also to do that automatically — based on policies that execute far more quickly than humans could and far more consistently, thus minimizing the total business impact of any given technical problem.
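A hedged sketch of that policy idea: scan the VMs backing a service, flag any that violate a response-time policy, and restart them automatically. The VM records, threshold, and `restart` hook are hypothetical stand-ins, not a real product interface:

```python
# Policy-driven remediation sketch: find and restart the "problem child" VM.

POLICY_MAX_RESPONSE_MS = 200  # illustrative service-level threshold

def enforce_policy(vms, restart):
    """Restart any VM whose response time exceeds the policy limit."""
    restarted = []
    for vm in vms:
        if vm["response_ms"] > POLICY_MAX_RESPONSE_MS:
            restart(vm["name"])          # stop/start the problematic VM
            restarted.append(vm["name"])
    return restarted

fleet = [
    {"name": "vm-01", "response_ms": 120},
    {"name": "vm-02", "response_ms": 950},  # the problem child
    {"name": "vm-03", "response_ms": 140},
]
print(enforce_policy(fleet, restart=lambda name: None))  # → ['vm-02']
```

Because the policy runs on every evaluation cycle rather than waiting for a human to notice degraded service levels, the window of business impact shrinks dramatically.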
That, in sum, is why virtual image management is so important — because it plays a crucial role in how well the infrastructure is really going to be able to hit performance targets while also keeping costs and risks under control.
Bake agility and performance deep into the cloud
How might that happen? Here’s a plausible scenario — and one that many IT professionals will find familiar.
Imagine that your organization wants to add a new workload to a private cloud architecture. This workload is going to involve core technologies like a Java application server, a process server, an enterprise-grade database and a management portal. Now, your virtual image library already has virtual images that cover all of those technologies.
What you need is a super-efficient, policy-driven way to combine and deploy those images logically, to make that workload happen, then oversee the new app infrastructure afterward (i.e., the added layer of management capabilities I mentioned earlier in this piece).
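The composition step above can be sketched as resolving workload roles against the image library. The library contents, role names, and image IDs below are all hypothetical, chosen only to mirror the scenario in the text:

```python
# Resolve a workload's required roles to images in a virtual image library.

IMAGE_LIBRARY = {
    "java-app-server": "img-1001",
    "process-server":  "img-1002",
    "enterprise-db":   "img-1003",
    "mgmt-portal":     "img-1004",
}

def build_pattern(required_roles):
    """Map each required role to its library image; fail loudly on gaps."""
    missing = [r for r in required_roles if r not in IMAGE_LIBRARY]
    if missing:
        raise ValueError(f"no image for roles: {missing}")
    return {role: IMAGE_LIBRARY[role] for role in required_roles}

pattern = build_pattern(
    ["java-app-server", "process-server", "enterprise-db", "mgmt-portal"]
)
print(pattern["enterprise-db"])  # → img-1003
```

The point of the pattern is repeatability: the same role-to-image mapping can be deployed as many times as needed, which is what makes the next step possible.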
Given that, you’d really be baking performance management into the way your private cloud works at a deep level. That’s because entire application environments — including all the various tech needed to make those applications go — would be easy to create, and re-create, as many times as you need to support performance goals.
Also, these environments would be perfectly consistent from one to the next, eliminating the configuration drift and errors that manual implementation can introduce. And, because automation is driving the whole process, environment creation would be remarkably fast, too — so fast that it would likely enhance business agility as a whole.
Then, once the environment was deployed? You’d be able to manage it smartly — through insight into what’s really happening inside all those VMs — in order to do things like:
- Reduce operational costs and resource waste by ensuring that applications get the resources they need — but not more than they need
- Increase application and cloud elasticity, scaling services on an as-needed basis to handle unpredictable demand levels
- Upgrade applications without having to bring them down first — continuous uptime is better for both the customer/user experience and the business bottom line
- Make the entire application infrastructure generally more resilient, translating into higher service availability
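The elasticity goal in that list boils down to a scaling decision: add capacity under heavy load, reclaim it when demand falls. The thresholds below are illustrative assumptions, not product defaults:

```python
# Utilization-driven scaling sketch: right-size replica count to demand.

def desired_replicas(current, avg_utilization,
                     scale_up_at=0.80, scale_down_at=0.30):
    """Add a replica under heavy load; remove one when capacity sits idle."""
    if avg_utilization > scale_up_at:
        return current + 1
    if avg_utilization < scale_down_at and current > 1:
        return current - 1
    return current

print(desired_replicas(3, 0.92))  # → 4 (scale up under load)
print(desired_replicas(3, 0.15))  # → 2 (reclaim idle capacity)
print(desired_replicas(3, 0.50))  # → 3 (steady state, no change)
```

Keeping a floor of one replica, and only acting outside the dead band between the two thresholds, is what prevents the kind of thrashing that would otherwise undermine both cost savings and availability.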
IBM solutions enhance application performance through smart image management
This, in short, is the logic behind IBM WebSphere Virtual Enterprise and IBM Workload Deployer — and the reason why IBM now leads the entire IT industry in the field of application management for virtualized infrastructures.
Competitive offerings simply don’t provide the detailed insight into how VMs are operating that IT team members (or policy-driven management tools) need. As a result, adjusting resource allocation is neither as instant nor as accurate as it needs to be to ensure applications really fulfill their maximum potential.
The IBM offerings, in contrast, allow you to both create, and manage, complex virtual environments for applications, automatically — based on predefined policies — and in a context-aware fashion, so that you get more business value and greater ROI from those applications.
Read our previous Breakthroughs posts at the links below:
- Middleware Optimized Systems (6/4/12)
- Information Integration, Pt. 1 (6/12/12)
- Information Integration, Pt. 2 (6/18/12)
- Unified Management (6/26/12)
- Data Security (7/10/12)
How are YOU transforming your IT efforts for efficiency and more impact? Let us know! Leave a comment on the Smarter Computing blog below or connect with us on Facebook, Google+ or Twitter. If you tweet, be sure to include the #TransformITnow hashtag.
To effectively compete in today’s changing world, it is essential that companies leverage innovative technology to differentiate from competitors. Learn how you can do that and more in the Smarter Computing Analyst Paper from Hurwitz and Associates.