Elements of the data center that were once in the background or serving small niches of the market have come to the forefront. Why? Because the data center not only manages the constant surge of structured and unstructured data; it also turns that data into insights, so companies can figure out what to do with it all. This is requiring companies, and the IT vendors that support them, to completely rethink what IT infrastructure should look like, from storage to systems.
For example, high performance computing (HPC), the use of parallel processing to run advanced application programs, was once thought to be the domain of universities and research labs. But HPC is extending into technical computing as mainstream enterprises increasingly use it for compute- and data-intensive workloads such as simulations, computer modeling and analytics.
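To make the parallel-processing idea concrete, here is a minimal sketch in Python, using the standard multiprocessing module to split a Monte Carlo simulation across CPU cores. This is a toy illustration of the concept, not IBM's Platform Computing software or a real HPC workload.

```python
# Toy parallel processing: estimate pi with a Monte Carlo simulation
# split across CPU cores, a miniature stand-in for the kind of work
# an HPC cluster parallelizes at far larger scale.
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points that land inside the unit quarter-circle."""
    rng = random.Random()
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total, workers = 4_000_000, 4
    with Pool(workers) as pool:
        # Each worker simulates an independent slice of the problem.
        hits = sum(pool.map(count_hits, [total // workers] * workers))
    print(f"pi is approximately {4 * hits / total:.4f}")
```

Because each slice of the simulation is independent, adding more workers speeds it up almost linearly; that same property is what lets real HPC workloads scale across thousands of processors.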
IBM acquired Platform Computing earlier this year and is using its software to create integrated solutions that bring technical computing to businesses that need more computing power and analytics. Financial services, manufacturing, digital media, oil and gas, and life sciences are among the primary industries adopting technical computing.
On June 18, IBM BlueGene/Q captured three of the four top spots on the TOP500 list of the world’s fastest supercomputers, including the first-place Sequoia system at Lawrence Livermore National Laboratory (LLNL). IBM remains committed to driving supercomputing innovation forward. At the same time, we are extending our expertise in technical computing to a broader set of industries and enterprises.
Last Wednesday, IBM and LLNL announced a collaboration to help industrial partners use HPC to boost their competitiveness in the global economy. IBM will make researchers available to work hand-in-hand with businesses and organizations on specific projects in areas such as improving the electric grid, advancing manufacturing, discovering new materials and leveraging Big Data. The collaboration addresses a need, expressed by businesses and government institutions, for greater access to supercomputers that can quickly process very large data sets.
As supercomputers improve, they deliver dramatic gains in affordability, performance and size. The technology behind Sequoia enables a company or university to acquire a petaflop computer in just five racks. By contrast, building a one-petaflop system with the technology in the world’s second-fastest computer would take approximately 80 racks. That difference in cost, size and power consumption is something a company can apply for immediate competitive advantage.
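The rack arithmetic works out to roughly a 16x difference in compute density. A quick back-of-the-envelope calculation, using only the approximate figures quoted above (not official vendor specifications), shows the gap:

```python
# Back-of-the-envelope rack-density comparison using the post's own
# approximate figures; these are illustrations, not vendor specs.
petaflops = 1.0        # target system size

bluegene_racks = 5     # ~1 PF in five BlueGene/Q racks, per the post
other_racks = 80       # ~80 racks for 1 PF with the comparison system

density_bgq = petaflops / bluegene_racks    # 0.2000 PF per rack
density_other = petaflops / other_racks     # 0.0125 PF per rack

print(f"BlueGene/Q:  {density_bgq:.4f} PF/rack")
print(f"Comparison:  {density_other:.4f} PF/rack")
print(f"Density advantage: {density_bgq / density_other:.0f}x")  # 16x
```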
Even companies that aren’t interested in access to technical or high performance computing can benefit from the innovations taking place at this level. For example, the same POWER microprocessor architecture that provides the muscle behind Sequoia also drives many of the world’s enterprise computing systems. In addition, business analytics applies HPC techniques, such as simulation, to business data. So as HPC improves, so too will business analytics.
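To illustrate that last point, the sketch below applies the same simulation technique to a business question: projecting next-quarter revenue under uncertain growth. The revenue base, growth mean and volatility are invented for the example, not drawn from any real data set.

```python
# Hypothetical example: a Monte Carlo simulation, a staple HPC
# technique, applied to business data. All figures are invented
# for illustration.
import random

def simulate_revenue(base: float, trials: int = 100_000) -> list:
    """Draw possible next-quarter revenues from a noisy growth model."""
    outcomes = []
    for _ in range(trials):
        growth = random.gauss(0.03, 0.05)  # assumed mean and volatility
        outcomes.append(base * (1.0 + growth))
    return outcomes

results = sorted(simulate_revenue(base=10_000_000.0))
# Report the median outcome and a 5th-percentile downside scenario.
print(f"Median:          ${results[len(results) // 2]:,.0f}")
print(f"5th percentile:  ${results[len(results) // 20]:,.0f}")
```

Running many more trials, or richer models of the business, is exactly where the extra computing power discussed above pays off.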
To compete effectively in today’s changing world, it is essential that companies leverage innovative technology to differentiate themselves from competitors. Learn how you can do that and more in the Smarter Computing Analyst Paper from Hurwitz and Associates.