Posted by: Electric Thoughts™ | March 14, 2012

More Cooling

More Cooling With Less $$

My last post looked at the maintenance savings possible through more efficient data center/facility cooling management. You can gain further savings by increasing the capacity of your existing air handling/air conditioning units. It is even possible to add IT load without requiring new air conditioners, or at least to defer those purchases. Here’s how.

Data centers and buildings have naturally occurring air stratification. Many facilities deliver cool air from an under-floor plenum: cooling air is delivered low and moves through the room at low velocity, and as the air absorbs heat, it rises. Because server racks sit on the floor, they sit in a colder region on average. The air conditioners, however, draw from higher in the room, capturing the hot air above and delivering it, once cooled, to the under-floor plenum. This vertical stratification creates an opportunity to deliver cooler air to servers while at the same time increasing cooling capacity by drawing return air from higher in the room.

However, this isn’t easy to achieve. The problem is that uncoordinated or decentralized control of air conditioners often causes some units to deliver uncooled air into the under-floor plenum. There, the mixing of cooled and uncooled air raises server inlet air temperatures and ultimately lowers return-air temperatures, which reduces the capacity of the cooling equipment.

A cooling management system can establish a colder profile at the bottom of the rack and make sure that each air conditioner is actually having a cooling effect, rather than working ineffectively and adding heat through its operation. An intelligent cooling energy management system dynamically right-sizes air conditioning capacity, coordinating the units’ combined operation so that all of them deliver cool air and hot return air from some units doesn’t mix with cold air from others. This unit-by-unit yet coordinated control squeezes the maximum efficiency out of all available units, so that even at full load the inefficiency of mixing is avoided and significant capacity gains are realized.
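
The coordination idea above can be sketched in a few lines of code. This is an illustrative simplification, not Vigilent’s actual algorithm: a unit whose supply air is barely cooler than its return air is adding fan heat rather than cooling, so it is a candidate to stage off before it pushes warm air into the plenum. The function name and the threshold are assumptions for the sketch.

```python
def select_active_units(units, min_delta_t=2.0):
    """Classify units by whether they are actually cooling.

    units: list of dicts with 'name', 'return_temp', 'supply_temp' (deg F).
    min_delta_t: minimum return-to-supply temperature drop (assumed threshold)
    required to count as providing useful cooling.
    """
    active, staged_off = [], []
    for u in units:
        if u["return_temp"] - u["supply_temp"] >= min_delta_t:
            active.append(u["name"])
        else:
            # Effectively blowing uncooled air into the under-floor plenum.
            staged_off.append(u["name"])
    return active, staged_off

active, off = select_active_units([
    {"name": "CRAC-1", "return_temp": 85.0, "supply_temp": 60.0},
    {"name": "CRAC-2", "return_temp": 78.0, "supply_temp": 77.5},  # barely cooling
])
print(active, off)  # ['CRAC-1'] ['CRAC-2']
```

A real system would act on continuous sensor data and predicted effects rather than a single snapshot, but the principle, measure each unit’s actual cooling effect and stop the ones that are only mixing air, is the same.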

Consider this example. One company’s 40,000 sq. ft. data center appeared to be out of cooling capacity. After deploying an intelligent energy management system, not only did energy usage drop, but the company was able to increase its data center IT load by 40% without adding air conditioners, and in fact after decommissioning two existing units. The energy management system also maintained the desired inlet air temperatures under this higher load.

Consider going smarter before moving to an additional equipment purchase. Savings become even larger when you factor in avoided maintenance costs for new equipment and year-over-year energy reductions from more efficiently balanced capacity loads.

Posted by: Electric Thoughts™ | January 4, 2012

2011 Reflections

There is a saying in the MEP consulting business: “no one ever gets sued for oversizing.” That fear-driven mentality also affects the operation of mechanical systems in data centers, which is why data centers are over-cooled at great expense. But few facility managers know by how much. The fact is that it has been easier, and to date safer, to over-cool a data center as the importance of the data it contains has increased, and with that importance, the pressure to protect it.

Last year that changed.  With new technology, facility managers know exactly how much cooling is required in a data center, at any given time. And, perhaps more importantly, technology can provide warning – and reaction time – in the rare instances when temperatures increase unexpectedly. With this technology, data center cooling can now be “dynamically right-sized.”  The risk of dynamic management can be made lower than manual operation, which is prone to human error.

In our own nod to the advantages of this technology, we renamed the company I co-founded in 2004 from Federspiel Corporation to Vigilent Corporation. As our technology increased in sophistication, we felt that our new name, denoting vigilance and intelligent oversight of facility heating and cooling operations, better reflected the new reality in data center cooling management. Last year, through smart, automated management of data center energy consumption, Vigilent reduced the carbon emissions and energy consumption of cooling systems by 30-40%. These savings will continue year after year, benefiting not only those companies’ bottom lines but also their corporate sustainability objectives. They have been accomplished while maintaining the integrity and desired temperatures of data centers of all sizes and configurations in the United States, Canada, and Japan.

I’m proud of what we achieved last year. And I’m proud of the companies that have stepped up to embrace technology that can replace fear with certainty, and waste with efficiency.

Posted by: Electric Thoughts™ | November 15, 2011

Unexpected Savings

Data Center Cooling Systems Return Unexpected Maintenance Cost Savings

Advanced cooling management in critical facilities such as data centers and telecom central offices can save tons of energy (pun intended). Using advanced cooling management to achieve always-ready, inlet-temperature-controlled operation, versus the typical always-on, always-cold approach, yields huge energy savings.

But energy savings isn’t the only benefit of advanced cooling management. NTT America recently took a hard look at some of the direct, non-energy savings of an advanced cooling system. They quantified savings from reduced maintenance costs, increased cooling capacity from existing resources, improved thermal management, and deferred capital expenditures. Their analysis found that the non-energy benefits increased the total dollar savings by one-third.

Consider first the broader advantages of reduced maintenance costs. Advanced cooling management identifies when CRACs are operating inefficiently. Turning off equipment that doesn’t need to be on reduces wear and tear: equipment that isn’t running isn’t wearing out, and reduced wear lowers the chance of an unexpected failure, which is always something to avoid in a mission-critical facility. One counter-intuitive result of turning off lightly provisioned CRACs is that inlet air temperatures drop by a few degrees. Lower inlet air temperature also reduces the risk of IT equipment failure and increases ride-through time in the event of a cooling system failure.
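
To see why a few degrees of inlet temperature matter for ride-through, consider a back-of-the-envelope estimate: the time for room air to warm from the current inlet temperature to an allowable limit under the full IT load. This sketch deliberately ignores the thermal mass of the slab and equipment, so it is a conservative floor; all of the numbers are illustrative assumptions, not measurements from any particular facility.

```python
AIR_DENSITY = 1.2          # kg/m^3, approximate for room-temperature air
AIR_SPECIFIC_HEAT = 1005   # J/(kg*K)

def ride_through_seconds(room_volume_m3, it_load_kw, inlet_c, limit_c):
    """Seconds until room air warms from inlet_c to limit_c, air mass only."""
    air_mass = AIR_DENSITY * room_volume_m3
    energy_to_limit = air_mass * AIR_SPECIFIC_HEAT * (limit_c - inlet_c)  # joules
    return energy_to_limit / (it_load_kw * 1000.0)

# A few degrees of lower inlet temperature buys measurably more reaction time:
print(ride_through_seconds(3000, 500, 22, 32))  # ~72 s
print(ride_through_seconds(3000, 500, 25, 32))  # ~51 s
```

Real ride-through is longer than this air-only figure because slab and equipment mass also absorb heat, but the relative benefit of a cooler starting point holds either way.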

The maintenance and operations cost savings of advanced cooling management are significant, but avoiding downtime is priceless.

Posted by: Electric Thoughts™ | September 29, 2011

Cooling Tips

Ten Tips For Cooling Your Data Center

Even as data centers grow in size and complexity, there are still relatively simple and straightforward ways to reduce data center energy costs. And, if you are looking at an overall energy cost reduction plan, it makes sense to start with cooling costs as they likely comprise at least 50% of your data center energy spend.  Start with the assumption that your data center is over-cooled and consider the following:

Turn Off Redundant Cooling Units. You know you have them; figure out which are truly unnecessary and turn them off. Of course, this can be tricky. See my previous blog on Data Center Energy Savings.

Raise Your Temperature Setting. You can stay within ASHRAE limits and likely raise the temperature a degree or two.

Turn Off Your Humidity Controls. Unless you really need them, and most data centers do not.

Use Variable Speed Drives, but don’t run them all at 100% (which defeats their purpose). These are one of the biggest energy efficiency drivers in a data center.

Use Plug Fans for CRAH Units. They have twice the efficiency and they distribute air more effectively.

Use Economizers.  Take advantage of outside air when you can.

Use An Automated Cooling Management System. Remove the guesswork.

Use Hot and Cold Aisle Arrangements. Don’t blow hot exhaust air from some servers into the inlets of other servers.

Use Containment. Reduce air mixing within a single space.

Remove Obstructions. This sounds simple, but a poorly placed cart can create a hot spot. Check every day.
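
The variable-speed-drive tip deserves a number behind it. Fan power scales roughly with the cube of fan speed (the fan affinity laws), so running a drive below 100% saves far more energy than the speed reduction alone suggests. The sketch below uses the ideal cube law; real fans and drives deviate somewhat from it.

```python
def fan_power_fraction(speed_fraction):
    """Ideal affinity-law estimate: power scales with the cube of speed."""
    return speed_fraction ** 3

for pct in (100, 80, 60):
    print(f"{pct}% speed -> {fan_power_fraction(pct / 100) * 100:.0f}% power")
# 100% speed -> 100% power
# 80% speed -> 51% power
# 60% speed -> 22% power
```

This is why running every drive at full speed “ruins their purpose”: two fans at 60% move more air than one at 100% while together using less than half its power.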

Here’s an example of the effect an automated cooling management system can provide.

The first section shows a benchmark of the data center’s energy consumption prior to automated cooling. The second section shows energy consumption after the automated cooling system was turned on. The third section shows consumption when the system was turned off and manual control resumed, and the fourth section shows consumption under fully automated control. Notice that the energy savings were almost completely eroded in less than a month of manual control, but returned immediately once automatic control was restored.

Posted by: Electric Thoughts™ | August 2, 2011

Occam’s Razor

Data Center Energy Savings

The simplest approach to data center energy savings might suggest that a facility manager’s best option is to turn off a few air conditioners.  And there’s truth to this.  See the graph below, showing before and after energy usage, and the impact of turning off some of the cooling units.

Before & After Energy Management Software Started

But the simplicity suggested here is deceptive.

Which air conditioners?

How many?

How will this truly affect the temperature?

What’s the risk to uptime or ride-through?

While turning things off or down is likely our greatest opportunity for significant, immediate savings, the science driving the decision of which device to turn off, and when, is complex and dynamic.

Fortunately, a convergence of new technology can predict the future impact of energy management decisions, taking on/off decision-making to a new level: wireless sensors provide continuous, real-time, location-specific data, and predictive, adaptive software algorithms take into account all immediate and known variables at any given moment. Now, for the first time, it’s possible, thanks to the latest AI technology, to automatically, constantly, and dynamically manage cooling resources to reduce average temperatures across a facility and avoid localized hot and cold spots. Simultaneously, overall cooling energy consumption is reduced by intelligently turning down, or off, the right CRACs at the right time. The result is continually optimized cooling with greater assurance that the overall integrity of the data center is preserved.
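
A heavily simplified sketch of the predict-then-act idea: before turning a CRAC off, estimate the temperature rise each sensor would see without it, and proceed only if no sensor would exceed its limit. Here the influence of each unit on each sensor is a pre-learned lookup table, which is purely an assumption for illustration; the actual predictive models are far richer.

```python
def safe_to_turn_off(unit, sensor_temps, influence, limit_c=27.0):
    """Check whether turning a unit off keeps every sensor within limits.

    sensor_temps: {sensor_name: current temperature, deg C}
    influence: {(unit_name, sensor_name): predicted rise in deg C if unit is off}
    """
    return all(
        temp + influence.get((unit, sensor), 0.0) <= limit_c
        for sensor, temp in sensor_temps.items()
    )

temps = {"rack-A": 24.0, "rack-B": 25.5}
influence = {("CRAC-3", "rack-A"): 1.0, ("CRAC-3", "rack-B"): 2.0}
print(safe_to_turn_off("CRAC-3", temps, influence))  # False: rack-B would reach 27.5
```

The point of the prediction step is exactly this gate: an on/off action is taken only when the model says every monitored location stays safe, which is what makes automated control lower-risk than manual trial and error.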

Posted by: Electric Thoughts™ | June 16, 2011

Bay Area Talks

Data Center Energy Management Presentations

Heads up on a couple of my upcoming presentations in the Bay Area on June 21!

TiE Silicon Valley

The Silicon Valley TiE office is hosting a panel discussion on energy alternatives for data center management.   I’ll join four executives from the industry to discuss and debate:

  • Load management
  • Replacement of existing infrastructure
  • Cooling Management

We’ll discuss trends, tradeoffs and the effects of options in these areas.

Stop by and say hello!

Environmental Defense Fund Climate Corps Program

The EDF sponsors a terrific program that places MBA fellows at large companies to collect energy data, analyze it, and provide recommendations. I will provide a private webinar to these interns on energy efficiency strategies for data centers.

Posted by: Electric Thoughts™ | June 7, 2011

Hard Choices

Between An (Energy) Rock and A Hard Place

Just as concerns about global warming and carbon emissions were about to create a rejuvenation of nuclear power, the Tohoku earthquake and tsunami rocked the shores of Japan, and with it, the latest nuclear renaissance.

This recent disaster in Japan has shown us that we are increasingly stuck between a rock (coal) and a hard place (nuclear fuel rod) when it comes to energy sources. If we’re going to solve this conundrum, it’s time to take another look at the cost comparison between generating energy and conserving energy, or negawatts. A negawatt is the opposite of a megawatt: a million watts of power avoided rather than a million watts of power used. It’s a powerful concept with demonstrated ROI, one that even the Federal Energy Regulatory Commission (FERC)* has now recognized.

Consider the following cost estimates for new power generation from the most common sources, drawn from information supplied by the Energy Information Administration (EIA), which cites “overnight” costs. An overnight cost is the physical construction cost of the plant, divided by capacity; it does not include land costs, financing costs, or any other related costs.

Nuclear: Even before the disaster in Japan this year, the EIA estimated the overnight cost of a nuclear plant at $5,335/kW.

Coal: The same EIA report lists the overnight cost of a coal-fired plant as $2,884/kW without carbon capture and $5,388/kW for integrated gasification combined cycle with carbon capture.

Natural Gas: The EIA lists the overnight cost of a natural gas plant at $978/kW for a conventional combined cycle plant, and $2,060/kW for an advanced combined cycle plant with carbon capture (NGCC with CCS).

Conservation: I estimate that an energy efficiency project with a simple payback of three years, at an electric rate of $0.0982/kWh (the average retail price in the U.S. according to the EIA), has an equivalent “overnight cost” of no more than $2,581/kW. This figure assumes that the energy savings accrue over all 8,760 hours of the year. If the savings accrue over fewer hours, the equivalent “overnight cost” would be lower, because the kilowatt reduction would be higher. The calculation is simply the electric rate times the hours per year of operation, times the simple payback period in years.
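
The conservation figure is just the arithmetic described above, made explicit:

```python
def conservation_overnight_cost_per_kw(rate_per_kwh, hours_per_year, payback_years):
    """Equivalent 'overnight cost' of an efficiency project, $/kW:
    electric rate x hours of operation per year x simple payback in years."""
    return rate_per_kwh * hours_per_year * payback_years

# Average U.S. retail rate, full-year savings, three-year payback:
cost = conservation_overnight_cost_per_kw(0.0982, 8760, 3)
print(f"${cost:,.0f}/kW")  # $2,581/kW, the figure cited above
```

Running the same function with fewer operating hours shows the point about shorter schedules: the dollar figure drops, meaning conservation looks even cheaper per kilowatt avoided.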

Conservation with a moderate payback period is roughly the same cost as the cheapest clean generation technology (NGCC with CCS), and since many conservation projects have a much faster payback than three years, this leads me to the following proposal:
“As a society, we should stop funding the construction of any power plants in the U.S. until we have engaged and exhausted all conservation projects that have a payback of less than three years. We should finance the overnight costs of these conservation projects via a savings charge on individual energy consumption. Each end-user’s savings charges would go into that end-user’s energy savings account, which they could use for their own energy conservation projects. If the funds aren’t applied in a reasonable amount of time (e.g. three years), the end user loses them. The savings charge rate, as a fraction of a utility bill, should be set so that societal savings charges equal the current rate of spending on new power plants.”

In my view, this will force a more thorough analysis and comparison of the effects of conservation versus construction, and lead to more efficient use of capital in energy generation. At the same time, it will have the important effect of contributing to a reduction in greenhouse gases and spent nuclear fuel.

In the coming months and years, the Japanese will undoubtedly teach us a great deal about how effectively aggressive conservation can help to quickly offset the loss of power generation capacity. And I suspect that what we will learn will be consistent with this proposal.


Posted by: Electric Thoughts™ | April 4, 2011

Avoiding Risk

Avoiding Risk in Data Centers Sometimes Means Counter-Intuitive Thinking

Sound data center risk mitigation practices can also lead to energy cost savings. But sometimes the route there is counter-intuitive.

Always-on, always-cold is still a commonly used strategy for data center cooling operations, and for good reason. It is fairly easy to implement and monitor, and running all the CRACs all the time logically reduces the risk of downtime should a unit fail. In this operating strategy, the CRACs run at a low set point, operating at lower-than-required temperatures to mitigate the risk of hot spots or to add ride-through time in the event of a cooling system failure.

While this seems a logical and prudent practice, if you dig a little deeper you’ll see that it’s not quite as risk averse as it initially appears, and, more importantly, it misses a larger opportunity for significant energy cost savings. Let’s examine each practice individually.

Continuous operation of all CRACs, including redundant (backup) CRACs, wears all units out prematurely. Increased runtime for any piece of equipment that wears out with use naturally reduces its lifecycle.

Leveling CRAC runtimes, in which each CRAC is set to run approximately the same number of hours, has the same issue. This practice might extend the time to first failure; however, it also increases the risk of catastrophic failure (i.e., simultaneous failure of all units).

And then there’s the issue of low set point thresholds. Common thinking regarding cold operations is that an overall cooler temperature will use the thermal mass of the infrastructure to provide extra time to react in the event of a cooling system failure. However, when all CRACs operate equally, each CRAC runs at a lower (less efficient) utilization, meaning that the discharge air temperature from each CRAC will be higher. Some CRACs, in effect, may not be cooling at all, which means that in a raised-floor data center those units are blowing return air into the underfloor plenum. Since the largest source of thermal mass in a data center is the slab floor, this “always-on,” low set point approach to CRAC operation may not yield the best utilization of thermal mass.

A “just-needed” operation policy is preferable in terms of both catastrophic risk mitigation and energy efficiency. In this case, the most efficient CRACs are operated most of the time, and the less efficient CRACs are kept off most of the time but held in ready standby. Even when CRACs are nominally identical, there can be significant differences in their cooling efficiency due to manufacturing variability. These differences, if measured or characterized, can be used to further optimize efficiency and mitigate the risk of catastrophic failure.
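
A “just-needed” policy can be sketched as a simple ranking: run the most efficient units until the required capacity is met, and hold the rest in ready standby. The efficiency metric here (cooling delivered per unit of input power) and all the numbers are assumptions for illustration.

```python
def plan_operation(units, required_kw):
    """Choose which units to run under a 'just-needed' policy.

    units: list of (name, capacity_kw, efficiency) tuples;
    higher efficiency values indicate more cooling per kW of input power.
    """
    running, standby, capacity = [], [], 0.0
    for name, cap_kw, _eff in sorted(units, key=lambda u: u[2], reverse=True):
        if capacity < required_kw:
            running.append(name)
            capacity += cap_kw
        else:
            standby.append(name)  # off, but available immediately on a failure
    return running, standby

units = [("CRAC-1", 100, 3.2), ("CRAC-2", 100, 2.4), ("CRAC-3", 100, 3.0)]
print(plan_operation(units, 150))  # (['CRAC-1', 'CRAC-3'], ['CRAC-2'])
```

Note that the standby units are not sacrificed: they accumulate no runtime, so they are fresh when a running unit fails, which is the risk-mitigation half of the argument above.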

Sometimes the obvious or even most commonly used cooling strategy isn’t the best strategy, particularly as rising energy costs become more of a concern.   An operating strategy that recognizes and anticipates the possibilities of “little failures,” while focusing on the avoidance of catastrophic failure and reducing energy costs, is not only forward looking but also represents best practice.
