Two years ago in the doldrums of the summer of 2011, members of what was then called IBM Deep Computing concluded that advances in technology coupled with the economic challenges facing companies across a variety of industries offered IBM a fantastic opportunity to leverage its unique position in the world of High Performance Computing to bring benefit to a broad set of new users.
We were highly confident that within a year IBM would once again summit the peak known as the TOP500, with our Blue Gene/Q Sequoia system at Lawrence Livermore National Laboratory (LLNL) taking #1 honors as the world’s fastest supercomputer. Leadership in supercomputing is a priceless credential, one that would allow IBM to become the go-to provider of systems for a vast base of users champing at the bit to deploy high-performance computing techniques to gain competitive advantage. We dubbed this market of new users “Mainstream Technical Computing.”
Much of the market intelligence at the time pointed to a fast-growing population of private-sector businesses seeking competitive advantage by getting products to market faster, at lower cost, and with reduced risk. It was becoming common knowledge that the best-of-breed companies in each industry had gotten there by implementing high-performance computing techniques such as in-silico modeling, Monte Carlo simulation, and big data analytics.
Suddenly, the world was on the cusp of a transformation: a huge market of private-sector, for-profit organizations demanding access to the same kinds of tools that vanguards like LLNL were using. But unlike LLNL, Argonne National Lab, the National Center for Atmospheric Research, or General Motors, these private-sector companies did not employ large system administration staffs, nor could they afford to. Someone had to break down the barriers to mainstream technical computing so these firms could implement the capabilities with minimal start-up costs and limited skills.
Being in the best position to eliminate barriers, IBM developed a strategy to accomplish four things:
- Inform potential Mainstream Technical Computing users that IBM was their best choice of partner
- Develop a portfolio of products and services tailored for the Mainstream Technical Computing client
- Build out a seller channel to identify, own and fulfill opportunity
- Significantly upgrade our software offerings to form a foundation for systems that are easier to manage, better utilized, and more efficient.
The Technical Computing team, with help from our colleagues in the brands and IMTs, has made solid progress on each of these strategy elements and we enter the second half of 2013 in good position to expand our market share leadership even further.
Today, as IBM Systems & Technology Group announces a new line-up of products and services, the Technical Computing contribution is as strong as ever. We’re leading with new “Application Ready” reference architectures for workloads and applications in a variety of industries. Each of these reference architectures includes recommended small, medium and large configurations designed to ensure optimal performance at entry-level prices. These reference architectures are based on powerful, predefined and tested infrastructure, and have been certified to run popular technical computing applications from these Independent Software Vendors: Accelrys (Life Sciences), ANSYS (Automotive/Aerospace), CLC bio (Life Sciences), Gaussian (Chemistry), and Schlumberger (Petroleum).
The Application Ready configurations benefit immensely from the inclusion of Platform Computing software, which simplifies system setup and management and increases performance beyond what open-source codes would otherwise deliver. It’s important to note that the acquisition of Platform Computing last year formed the cornerstone of our new strategy to serve and lead in Mainstream Technical Computing, and the move has paid off with a vastly improved product portfolio and access to a large customer base. Platform Computing has also benefited from IBM’s worldwide presence.
When we decided to set our sights beyond the world of Supercomputing with the intent of leading the way in Mainstream Technical Computing, we knew it would be a multi-year journey. We would have to convince clients that IBM wanted to serve them, and that we had the right products for them. We would have to rethink how we go to market, and how to identify opportunity. And our software foundation would have to be immediately fortified. On this first anniversary of IBM’s public announcement of our new agenda for Technical Computing, I believe we’ve managed the journey well, and more importantly, our new clients believe it too.
Herbert Schultz is the marketing manager for IBM Technical Computing and Analytics, responsible for HPC segment management and key product offerings, including Blue Gene, DCV and DCCOD. He joined IBM in 1979 as a systems programmer and has held many management positions. You can reach him on Twitter: @ibmhpc.
To effectively compete in today’s changing world, it is essential that companies leverage innovative technology to differentiate from competitors. Learn how you can do that and more in the Smarter Computing Analyst Paper from Hurwitz and Associates.