From performance to efficiency

In the past few years, I’ve witnessed high performance computing (HPC) move past the “rocket science” label and become a reality for a much larger audience. It’s no longer confined to national government research labs, nor restricted to scientific applications. It has secured a place in mainstream technical computing, accelerating traditional workloads and becoming the engine for advanced big data analytics.

But as an electrical engineer, I have to admit that I sometimes feel concerned when reading through the specifications of some large HPC clusters. What worries me most is that not everyone realizes how complex the relationship between energy efficiency and maximum performance has become.

Understanding the big picture

The core concept is easy to grasp: to run a computing system, one has to supply enough energy for it to operate at a given load level. So, to operate a large-scale supercomputer at maximum performance, we naturally expect an efficient data center design that can cope with enormous power consumption and system cooling demands.

Unfortunately, advances in energy technology don’t seem to keep up with IT demands. According to previously published studies, server energy usage doubled from 2000 to 2005 and was expected to keep growing at 15 percent per year.

An updated study released in 2011 reported a slower growth rate in data center electricity use from 2005 to 2010, due mostly to the economic conditions caused by the 2008 financial crisis and to improvements in virtualization technology. Even so, total electricity use by data centers in 2010 corresponded to 1.3 percent of all electricity use worldwide and 2 percent of all electricity use in the US.

It’s worth highlighting that growth in electricity used per server accounted for a larger share of demand growth from 2005 to 2010 than it did in 2000 to 2005.

More impressive predictions came out of the “Worldwide Server Power and Cooling Expense 2006–2010 Forecast.” The report projected that the expense to power and cool the worldwide installed base of servers would grow at four times the rate of spending on new servers.

None of those issues seems to have slowed HPC market growth: IDC just released its “Worldwide High-Performance Technical Server Overview,” reporting that revenue for high-end supercomputing increased 29.3 percent from 2011 to 2012. Those figures cover systems priced above $500,000 USD, which now represent 50.9 percent of the total technical server space at a whopping $5.6 billion USD.

That study reveals an interesting fact: despite the challenging global economic environment, investments in supercomputing are still increasing. That makes perfect sense to me, as governments and enterprises turn to high-technology innovation and differentiation to reverse unfavorable economic scenarios in the long run.

Toward the green data center

As the market for large HPC systems grows, so does the need for energy efficient data centers. It’s not enough to just think about hardware costs when the operational costs of a multi-petaflop system can surpass the acquisition costs after a few years. I cannot stress enough the importance of understanding what total cost of ownership (TCO) really means.
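To make that concrete, here is a minimal sketch of how energy costs stack up against acquisition cost over a system’s lifetime. Every figure in it is a hypothetical assumption chosen for illustration, not data from any particular system.

    # Minimal sketch: cumulative energy cost versus acquisition cost over a
    # system's lifetime. All figures are illustrative assumptions, not data
    # from any specific system.

    acquisition_cost  = 20_000_000  # USD, hypothetical multi-petaflop system
    it_power_kw       = 2_000       # average IT load in kW (assumption)
    facility_overhead = 2.0         # total facility power / IT power (assumption)
    price_per_kwh     = 0.12        # USD per kWh (assumption)
    years             = 5

    hours = years * 365 * 24
    energy_cost = it_power_kw * facility_overhead * hours * price_per_kwh

    print(f"Acquisition cost:  ${acquisition_cost:,.0f}")
    print(f"Energy, {years} years:   ${energy_cost:,.0f}")
    print(f"Energy share of TCO: {energy_cost / (acquisition_cost + energy_cost):.0%}")

With these made-up numbers, the electricity bill alone overtakes the purchase price within roughly five years, which is exactly why TCO, not list price, should drive the decision.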

Besides the pressure to improve energy efficiency, there is also an urgent need to reduce carbon emissions, as most industries already face strict regulation. Chillers use up to 50 percent of data center energy, and space heating contributes almost 30 percent of the carbon footprint.

It is imperative that facilities and IT walk the same path to better measure and manage data center operational costs.

Choose your path

Air-cooled data centers are inefficient. It’s sad but true. The sooner you realize that, the sooner you can reach the virtuous cycle of Smarter Computing.

In a typical data center, the energy required for air cooling can be as high as the energy required to operate the IT equipment itself. Remember: in most cases that translates into waste. No wonder people joke that data centers are just huge heaters with integrated logic.
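One common way to quantify that relationship is power usage effectiveness (PUE): total facility energy divided by the energy delivered to IT equipment. Here is a minimal sketch with purely hypothetical figures.

    # Minimal sketch of power usage effectiveness (PUE), a standard data center
    # efficiency metric. All figures are illustrative assumptions.

    it_power_kw      = 1_000  # servers, storage, network (assumption)
    cooling_power_kw = 950    # chillers, air handlers, pumps, fans (assumption)
    other_power_kw   = 100    # lighting, UPS and distribution losses (assumption)

    pue = (it_power_kw + cooling_power_kw + other_power_kw) / it_power_kw
    print(f"PUE = {pue:.2f}")  # close to 2.0 when cooling draws nearly as much as IT

A PUE near 1.0 means almost all the power drawn by the facility reaches the computing hardware; a PUE near 2.0 means half of it goes elsewhere, mostly to cooling.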

That’s one of the main reasons we have seen the comeback of liquid cooling in recent years. Water, for instance, has much higher heat capacity and lower thermal resistance than air; therefore it is by far a more efficient choice for data center cooling. It can be applied indirectly, as in passive rear door heat exchangers, or directly at the component level. This enables the migration from an entirely air-cooled data center to a highly energy efficient hybrid-cooling solution. I recommend checking my previous post for more details on direct water cooling.
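To get a feel for why water wins, consider the basic heat balance Q = m_dot × c_p × ΔT: for the same heat load and temperature rise, the required coolant flow scales inversely with specific heat. A minimal sketch, using textbook material properties and an assumed rack heat load:

    # Minimal sketch comparing the coolant mass flow needed to remove the same
    # heat load with air versus water, using Q = m_dot * c_p * delta_T.
    # Material properties are textbook values; heat load and temperature rise
    # are illustrative assumptions.

    heat_load_kw = 30.0   # heat to remove from one rack (assumption)
    delta_t_k    = 10.0   # allowed coolant temperature rise (assumption)

    cp_air   = 1.005      # kJ/(kg*K), specific heat of air near room temperature
    cp_water = 4.18       # kJ/(kg*K), specific heat of liquid water

    m_dot_air   = heat_load_kw / (cp_air * delta_t_k)     # kg/s of air
    m_dot_water = heat_load_kw / (cp_water * delta_t_k)   # kg/s of water

    print(f"Air:   {m_dot_air:.2f} kg/s")    # ~3 kg/s, roughly 2.5 cubic metres of air per second
    print(f"Water: {m_dot_water:.2f} kg/s")  # ~0.7 kg/s, under a litre of water per second

Moving a fraction of a litre of water per second takes far less fan and pump energy than pushing cubic metres of air, and that difference is where much of the efficiency gain comes from.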

It might sound silly, but the path to energy efficiency in a data center begins with understanding the two available action paths:

  • Reduce the power spent cooling the system
  • Reduce the power spent running the system

To traverse those paths, we need to pay close attention to the technologies that can help us achieve better energy efficiency metrics. Those tuning points can be classified into three larger groups:

  • Energy efficient infrastructure
  • Energy efficient hardware
  • Energy aware software environment
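The third group is the least tangible, so here is a small example of what “energy aware” can mean in practice. It assumes a Linux host with an Intel CPU that exposes its RAPL energy counters through the standard powercap sysfs interface; availability varies by platform, so treat it as an illustration rather than a portable tool.

    # Minimal sketch of energy-aware instrumentation on Linux, assuming the
    # Intel RAPL counters are exposed via the powercap sysfs interface.

    import time

    RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # CPU package 0, microjoules

    def read_energy_uj():
        with open(RAPL_ENERGY) as f:
            return int(f.read())

    # Sample the cumulative energy counter over a short window and report the
    # average package power. (Counter wrap-around is ignored for brevity.)
    start = read_energy_uj()
    time.sleep(1.0)
    end = read_energy_uj()

    print(f"Average CPU package power: {(end - start) / 1e6:.1f} W")

Software that can see its own power draw like this can also react to it, throttling, consolidating or scheduling work where and when energy is cheapest.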

The new generation of mainstream technical computing customers needs to shift its purchasing criteria to consider not only system performance but also power and thermal characteristics. There’s already a plethora of energy efficient systems, power management tools and advanced cooling technologies available. Like any paradigm shift, it requires a long-term vision, but it’s really the only way forward when you aim for sustainable growth in computational power.

In my next posts I’ll go into more detail about the pillars of energy efficient computing and the importance of energy consumption metrics. Stay tuned.


Rodrigo Garcia da Silva is the Technical Computing Solutions Architect for IBM Systems and Technology Group in Brazil. He joined IBM in 2007 and has a total of 10 years of experience in the IT industry. You can find Rodrigo on Twitter: @rgarciatk and on LinkedIn
