To conduct the background research for this blog post, I hopped into my DeLorean time machine (what, you don’t have one?), set the time circuits for 1976 and visited the IT shops of a few major enterprises.
Aside from getting the chance to relive the US Bicentennial celebration, I wanted to check out data centers before companies like Teradata and Oracle came on the scene, and before Unix had made serious inroads into major corporations. As expected, IBM’s System/370 architecture was handling virtually all enterprise processing needs at this time. I saw big mainframes conduct online transactions all day, every day, and produce analytic reports all night. It was a very cohesive, consolidated and controlled environment.
Next, I moved forward 15 years to early 1991. Pausing briefly at a newsstand to check out the March issue of InfoWorld, in which editor Stewart Alsop famously predicted that the last mainframe would be unplugged on March 15, 1996, I reprised my data center tour.
This time I encountered a completely different scene. Mainframes were still handling the bulk of the world’s transactional operations, but now many of them were surrounded by a variety of mid-tier and personal computing systems copying their data for offline analysis. In this era, I could see how one might jump to the conclusion that the mainframe would indeed eventually be overrun by this increasingly invasive species of computers.
Having enough fuel in the flux capacitor for one more jump before returning to present time, I stopped by an IBM customer conference in the fall of 2007—being careful to avoid contact with my 2007 self so I wouldn’t unintentionally erase my own future.
Not only did mainframes survive their predicted 1996 demise, but they were thriving despite the fact that the number of real and virtual distributed systems surrounding them had grown by orders of magnitude. IBM was no different from any other large company: it too had surrounded its mainframes with large numbers of distributed systems, and, like most large companies, it was facing the same challenges and costs in maintaining them.
I came back to this specific point in 2007 because it was the first time I heard of IBM’s ambitious plan to regain control of its data centers. The session I attended introduced our customers to Project Big Green, a plan to consolidate 3,900 stand-alone, distributed servers to just 30 System z Linux virtual images. I remember this session really catching my attention because the value proposition to IBM was significant.
If I had enough fuel for one more jump before returning, I would have come back to this conference a few years later to relive a very interesting talk by IBMer Larry Yarter, who discussed an outgrowth of Project Big Green called IBM Blue Insight. The goal of Blue Insight was to shift all of IBM’s internal analytics processing from departmental servers to a centralized, software as a service (SaaS), private cloud model.
Having returned from my research runs, I phoned Larry to find out how things had progressed over the three-plus years since I heard him talk about Blue Insight at that conference. The results are nothing short of spectacular.
Larry is now an IBM Senior Technical Staff Member and the Chief Architect at what has come to be known as the Business Analytics Center of Competence (BACC). The environment that Larry described to me had the consolidated feel of 1976 that IT organizations loved, but with the freedom and flexibility demanded by business units in 2013.
Back in 2009, when Blue Insight was initiated, IBM was supporting some 175,000 users on stand-alone clients and hundreds of highly underutilized Brio/Hyperion servers. The acquisition of Brio/Hyperion software by Oracle in 2007, plus IBM’s own acquisition of Cognos that same year, meant that the company would be undergoing an inevitable and significant software shift. But rather than just converting everything from Brio to Cognos on the same inefficient server base, IBM decided to also transform its analytics capabilities into a centralized service based on a private cloud model—a private cloud deployed on System z Linux.
Now, in 2013, this model has been operational for several years.
Has it been a success? Well, you’re just going to have to stay tuned for part 2, in which I’ll share what I learned from Larry. Trust me, it’s well worth the wait!
Paul DiMarzio has 30+ years experience with IBM focused on bringing new and emerging technologies to the mainframe. He is currently responsible for developing and executing IBM’s worldwide z Systems big data and analytics portfolio marketing strategy. You can reach Paul on Twitter: @PaulD360.
To effectively compete in today’s changing world, it is essential that companies leverage innovative technology to differentiate from competitors. Learn how you can do that and more in the Smarter Computing Analyst Paper from Hurwitz and Associates.