Below is part 1 of a series of 3 interviews with John Easton on his point of view on Smarter Computing.
John is an IBM Distinguished Engineer and is currently the UK and Ireland CTO for the Systems and Technology Group. John has been at IBM for 26 years, with most of his time spent working on systems and infrastructure projects for clients. Much of the early part of his career centered on UNIX systems, AIX in particular, as well as mission-critical systems like High Availability Cluster Multi-Processing, HACMP (now PowerHA). This included designing and building solutions for clients, as well as working as part of the development team in Poughkeepsie.
From there, John moved toward working with new and emerging technologies, including grid computing and the Cell Broadband Engine (the chip that powers Sony’s PlayStation 3), which resulted in working with financial markets clients. Later John became the cloud technical lead for UK and Ireland across all of IBM’s lines of business.
Q1. What is your point of view on Smarter Computing, and why is it important for our clients?
A1. I think it is becoming apparent that the infrastructure our clients need for the future (as opposed to the one they currently have) needs to be better. Clients are struggling to find skills, and organizations need to be more agile and react faster as the speed of business increases. Clients are also operating under continuous cost pressure. The combination of all these factors is driving fundamental changes in the way we deliver infrastructure to support business. [That is] why we started off with Smarter Computing consisting of infrastructure designed for data, tuned to the task and managed using cloud computing technologies. I think that actually hits a lot of those key points quite nicely.
- The cloud capability helps you with some of the skills issues because you are starting to automate things. It helps you with some cost issues because you are driving things like standardization.
- The tuned to the task capability arises because there are certain workloads that fit certain system types very well. I think it is something that we struggle with in getting our message across to clients, because they do not necessarily think in quite the same way, or they are not taking that bigger-picture, more holistic view. For example, a client will make a decision like: “These things will go on Power or these things will go on x86.” In other words, the answer to the question is predetermined. Coming at it from the point of view of having worked with some very specific bits of technology, I can give examples of how a system that is tuned to the task can do things that the generic system just cannot. But, the costs of that optimization or the cost of exploiting that particular piece of hardware are very high in many cases, which is why people do not necessarily go down the tuned to the task route.
- Then you have the designed for data theme, for when we see larger and larger volumes of data being generated. You can quote whatever statistics you like for how much that is, but we are already seeing infrastructures unable to cope. I think, overall, designed for data is at the center of everything we do. It is critical today and will only become more important going forward.
I mentioned an example of tuned to the task. I ran the first proof of concept of Cell [Broadband Engine] with a financial markets client in Canary Wharf. We took a piece of their . . . option pricing code that took about 65 seconds to run on an x86 platform. Over the course of four weeks of work, we got that 65 seconds down to about three. We could show them how they could price options 20 times faster than they could on their conventional platform by using a platform that was, if you like, tuned to the task: very good at that particular work. Now, the challenge was that the tuned infrastructure demanded skills they did not have. They had a very large existing code base, and although we could demonstrate the value that a tuned to the task system could deliver for them, it became expensive (looking at it in the bigger total cost of ownership type discussion) to go down that optimized systems route.
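The speedup arithmetic behind that anecdote can be sketched as follows (a minimal illustration using only the runtimes quoted above; the variable names are ours):

```python
# Runtimes quoted in the anecdote: ~65 seconds on the conventional x86
# platform versus ~3 seconds on the Cell-based, tuned-to-the-task system.
baseline_seconds = 65.0
tuned_seconds = 3.0

# Speedup is simply the ratio of the two runtimes.
speedup = baseline_seconds / tuned_seconds
print(f"Speedup: roughly {speedup:.1f}x")
```

The ratio works out to roughly 22x, consistent with the "20 times faster" figure quoted in the interview.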
I think one of the reasons why we struggled is that, from the optimized systems point of view, you can do some really stunning things. You can take on problems that are unsolvable with generic technology. But the cost of doing so is often prohibitive. The reason why a lot of tuned to the task, accelerator, hybrid type systems fail is not that they lack the capabilities but that the cost of implementing code to use them is prohibitively high.
Stay tuned for parts 2 and 3, in which we will hear more of John Easton's thoughts on Smarter Computing: why security has become such an important factor, the PureSystems connection, and future IT industry trends.
Siobhan Nicholson currently works as a Client Technical Advisor for Consumer Product accounts within the UK. This includes building an understanding of client environments, identifying technical challenges, and leveraging IBM’s technical resources for the benefit of clients. You can find Siobhan on Twitter @SUN_gator and LinkedIn.
To effectively compete in today’s changing world, it is essential that companies leverage innovative technology to differentiate from competitors. Learn how you can do that and more in the Smarter Computing Analyst Paper from Hurwitz and Associates.