
The Texas Experience

Once again it has been many months since I have posted anything. Since then we have obviously been dealing with the pandemic as well as learning the impact of weak cybersecurity policy. Although I could probably write a novel about the cybersecurity issue, there are many people far more expert on that topic.

Today I want to go back to the original purpose of this blog, which was to capture and share some random thoughts related to the industry.

One of the latest events that started the wheels turning in my head again was the partial shutdown of the Texas electrical grid. Again, there are experts in grid design and operation who can, and have, discussed this event in great technical detail. There has also been the now-expected political finger pointing and partisan debate. But I wanted to share some of those random thoughts about what this "localized" event might teach us.


Many engineers, especially those involved in critical HVAC infrastructure, are already aware of the fragile nature of the electrical grid in many parts of the US. At Mestex we provide systems for many different types of applications where a power outage could have a costly impact. Preparing for those potential failures was usually someone else's problem: the electrical engineers and suppliers of standby power systems were expected to handle the "short" periods without electrical power. Although we researched ways to integrate backup operation into our equipment, the results would have proven too expensive to be marketable. What Mestex has continued to focus on is applying systems that use "site resources" efficiently in order to reduce the demands on the backup systems.

Looking at a broader picture, we have seen the global trend toward "electrification". The goal, it seems, is to reduce greenhouse gases and other atmospheric pollutants that contribute to climate change. While some are still skeptical about climate change, it has reached the point of consensus among scientists around the world, and many countries and large corporations are on board with taking steps to mitigate their impact. Electrification is intended to move the source of pollutants away from the "site" where power is used to the "source" where power is generated. In theory this allows better control of contaminants at a single point instead of at hundreds or thousands of "site" points. It also facilitates the use of alternative energy sources, such as wind generation, that would be difficult to implement at the "site" level. So we have a relative rush to require electric vehicles, electric residential heating systems, and even electric commercial/industrial heating.

At the same time that electrification is moving forward in areas that the average person can see, there is a convergence of our ever-increasing digital life with our daily power-consuming life. Data centers are being built and turned on almost daily around the world. To the average person this is great: it means they are always connected no matter where they go. It also helps the electrification effort by enabling sophisticated remote traffic management, demand-controlled power distribution, and "smart" home appliances. But the electrical power consumed by data centers is almost mind-blowing to the average person. A single, moderately efficient, small data center can consume as much electrical power as five thousand homes. Clusters of large data centers (which are common due to scale and locale) can draw enough electrical power to support entire towns, or even entire less-populated countries.

When these data centers or data center clusters are inserted into an already fragile electrical grid they add a strain factor that was not anticipated when the power station was designed 20 or 30 years ago. Data centers can be designed, built, and activated in months versus power stations that require years to complete. It is inevitable that a mismatch of power supply and power demand will occur.

It seems to me that part of what the Texas experience showed us is, first, that electric power is critical to basic life-support facilities such as water and sanitation. Second, as much attention and research should be put into the development of highly efficient, "clean", "site" energy systems as into the electrification idea. Off-loading the grid with effective "site" solutions could help balance supply and demand on the grid. Many large companies have already taken steps with solar arrays over their parking lots, small-scale wind generators on site, or private co-generation plants. In most cases, though, these are extremely expensive solutions, and their implementations have been driven as much by corporate "green" initiatives as anything.

Companies should also not lose sight of current technologies that are still viable "site" solutions and counterbalances to grid overloads. Although Mestex has transformed itself over the last few years into generating more revenue from cooling solutions than from its traditional natural gas heating solutions, most people still consider the company to be a gas heating company. In applications that require large amounts of outside air, or that simply move huge amounts of air that must be heated, a modern and efficient natural gas heating system is a much more "climate friendly" "site" solution than an equivalent electric heat "source" solution. Mestex can provide such systems based on decades of manufacturing experience and research into optimized digital control.

 Engineers and companies can meet their goals of responsible environmental stewardship by keeping in mind the contribution of "site" solutions as they also work to meet the transition to greater electrification.

Mestex Data Center Research Published

One of the initiatives within Mestex is our collaboration with the University of Texas at Arlington ("UTA") and the National Science Foundation ("NSF") Industry University Cooperative Research Centers ("I/UCRC").  Our area of particular interest and research support is related to reducing the energy consumption of data centers via the use of outside air and evaporative (adiabatic) cooling solutions.  Most of this research stays hidden behind closed doors but when the research results in findings that can significantly aid the mission critical industry those results are published.

Over the last several years, Mestex has hosted a small data pod at our facility in Dallas, Texas.  That data pod has been cooled by an "off the shelf" Mestex product with the only variations from the standard catalog item being the ability to add special filters (one of the research areas regarding particulates) and the control software designed by Mestex specifically for data centers.  We recently received the following email from Dr. Dereje Agonafer, Presidential Distinguished Professor at the University of Texas at Arlington.


Congratulations Dr. Shah and Dr. Awe and the Mestex team on getting this paper published in a journal.



We continue to be proud of our working relationship with Mestex – it has resulted in significant research implemented in product applications as well as archival publications. In 2016, our joint work was featured in the 6th edition of the Compendium of successful breakthroughs from NSF I/UCRCs, published in a printed book and online. The Compendium is intended for Congressional and White House staffers, visitors to the NSF, and members of the general public, to help them realize the impacts of research taking place within I/UCRCs. Our joint work with Mestex was featured in the book – reference below:



2016 Compendium, Successful Industry-Nominated Technological Breakthroughs for NSF I/UCRCs in “More Efficient Data Centers: Maximizing Airside Cooling,” p. 111-112, http://faculty.washington.edu/scottcs/NSF/2016/NSF-book-2016-Final.pdf

The research documented in this paper helps show data center industry designers and operators the benefits of airside cooling for their centers.  Further research findings were just published in the ASME Journal of Electronic Packaging.  That research, a collaboration between Mestex, UTA, and IBM, focused on the reliability of cooling data center electronic equipment with outside air in a somewhat dirty environment.  Ongoing research between Mestex and UTA is now focusing on filter performance.

External Events and Human Error

I guess you can tell that time really does fly by as you get older when you look back at the last time you posted to your blog and see that it was 2 years ago!  That is absolutely crazy but true!

The title of this blog has always been "Mike's Random Thoughts" because it gives me the room to post any variety of thoughts or comments.  Those thoughts probably seem to come from left field at times and here comes another.

I am a big fan of Formula One racing.  This past weekend I was watching the race at Silverstone in England and a couple of things struck me.  First is the randomness of external events and second is how easily a multimillion dollar investment can fall victim to human error.

The pole sitter for the race, Valtteri Bottas, was sitting in first place and defending his position against his teammate, Lewis Hamilton.  His race seemed completely under control and a decision was made by team management to bring Valtteri into the pits for a routine tire change.  On a well run team this takes less than 3.5 seconds so little risk to his lead was imagined.  Bottas rejoined the race in perfect position to continue on for the win while his teammate continued to circle around on tires that were getting worse and worse.  And then...an external event intervened and took that win away.  One of the Alfa Romeo cars locked its brakes and slid off the track forcing a "safety car" event.  Cars that had not changed tires yet basically got a "free" pit stop.  Lewis Hamilton took his "free" pit stop and never looked back.

So the observation from this first random thought is that you can have the best strategy for winning and the smartest management team in your race and watch it all evaporate because of something outside of your control.  In our business of providing natural gas powered cooling systems to indoor agriculture facilities we often run across customers who have a great growth strategy and strong management but who run into external issues that derail their plans.  If the local electric utility limits the available power to the site you have selected (or charges huge dollars to provide the service) then the whole plan could blow up.  Sometimes it is not possible to have a backup plan...as in the case of Bottas...but your management team needs to be ready to minimize the damage quickly by considering and implementing alternative ideas.  Bottas ended up finishing second because of quick thinking on the part of management that minimized the number of places he lost in the race...your team needs to be ready to do the same thing.

The second observation from the race involves Ferrari...one of the most famous names in motor racing.  Ferrari, like Mercedes and other top Formula One teams, has a racing budget measured in the hundreds of millions of dollars.  Hundreds of engineers are employed to design the cars and then monitor and diagnose hundreds of sensors on the cars in real time during a race.  It is a little like launching the Space Shuttle every two weeks.  In spite of this massive investment in machinery and engineering talent, sometimes things go very wrong.  At Silverstone that something was simply human error.  Ferrari's number one driver had just been passed by a key rival as they were entering a tight turn on the track.  A split-second loss of concentration and the Ferrari crashed into the rear of the rival's car, sending them both flying off the track.  In milliseconds a multimillion dollar investment was lost to human error.

One of our other key market areas is providing cooling to data centers.  At a recent event I heard that building a data center costs about 7 million dollars per megawatt of processing power.  This is a huge investment and our cooling systems are a relatively small part of that investment.  But....human error can bring that entire investment grinding to a halt.  Changing a single line of code in the controls for the HVAC equipment can cause the essential cooling systems to fail to perform.  As a company we have been writing our own code for the critical operation sequences of our equipment based on our years of experience manufacturing such systems.  However, we are frequently asked to provide control sequences that are defined by the data center owner.  In a fairly recent case we were told to change a line of code in a control sequence by a less experienced customer engineer.  As a result a changeover from one cooling mode to another did not occur as that person anticipated and the data center temperatures spiked.  While not as overt or obvious as running into another car at 100 mph the result of both human errors was extremely expensive.

Unforeseen events or human error....either can upset the best laid plans but making quick and experienced decisions...guided by people or companies with years in the business can mitigate the impact of those problems.

ASME Paper Documents Reliable Data Center Operation With Outside Air and Evaporative Cooling

One of the longest running and most debated topics regarding data center operation is whether or not you can reliably cool a modern data center using outside air alone or with supplemental evaporative cooling.  If you can successfully operate a data center without using any compressorized equipment there are obviously huge energy and money savings.

The two key factors that hold operators back from implementing such an obvious saving strategy are fear of equipment failures due to temperature/humidity excursions and fear of failures due to airborne contaminants.  While server manufacturers publish data in their specification sheets clearly indicating that their equipment can tolerate a wide range of temperature and humidity, there is not much information regarding the impact of particulates and other contaminants.  ASHRAE has recognized the robustness of modern IT equipment by expanding the recommended and allowable temperature and humidity ranges in its widely followed data center design guidelines.  Very little is said regarding air quality other than a recommendation to use at least a MERV 8 filter system.

Over the last 5 years the Mestex division of Mestek has hosted a National Science Foundation research site at their manufacturing facility in Dallas.  This site is part of an Industry/University Cooperative Research Center with principal research from the Mechanical and Aerospace Engineering Department at the University of Texas at Arlington.  A fully instrumented "data pod" has been operating using a commercially available indirect/direct evaporative cooling system from Mestex that can also operate in 100% fresh air mode.  In addition to the dozen sensors normally included with the Aztec brand IDEC system from Mestex the "pod" includes an array of 64 sensors located on the front and rear of the four server racks.  Data has been streamed from all sensors every 15 seconds for the last 4 years.  In addition to this detailed tracking of temperature and humidity conditions there have been a number of studies conducted using copper and silver coupons to evaluate the corrosion potential of operating using outside air and evaporative cooling.  Keep in mind that this application is in Dallas, Texas...a relatively hot/humid climate area.  In addition, because the "pod" is installed between two manufacturing buildings in an industrial zone near downtown Dallas the measured air quality around the "pod" is classified as G2, or moderately harmful to PCBs.

The June 2017 issue (Volume 139) of the ASME Journal of Electronic Packaging includes a paper presenting the results of the last 4 years of research at this site.  The paper, entitled "Qualitative Study of Cumulative Corrosion Damage of Information Technology Equipment in a Data Center Utilizing Air-Side Economizer Operating in Recommended and Expanded ASHRAE Envelope", provides a comprehensive look at the impact of operating a data center in a "real world" application.

The most interesting point presented in the summary section of the paper is that, in spite of the servers installed in this test site already being several years old, there has not been a single server failure in the entire four years of operation.  The ability to dramatically reduce the cost of operating a data center...without unfounded concerns about reliability...is finally being proven true.

On The Road Again

To quote one of my favorite musicians, Willie Nelson, we are going ..."On the road again, just can't wait to get on the road again..." this time to Las Vegas for the 2017 AHR Expo.


The Mestex division of Mestek will be sharing booth space with our sister companies in booth C1525 in the Las Vegas Convention Center.  Some people are predicting a record turnout of attendees and we expect a busy few days.

This year, Mestex will be using some new (for us) graphical display technology from the Mestek Technology group to help explain some of our newer product offerings.  Our division companies (Applied Air, Aztec, Alton, LJ Wing, Temprite, and King) provide solutions to temperature, pressure, airflow, and filtration problems that can be hard to explain using a static piece of equipment.  This graphical display technology will allow us to "walk you in" to three of our products and highlight how certain elements of the products can be used to address your building or process issues.

In addition to these graphics the Mestex people in the booth can explain how our in house CFD analysis services can help optimize a solution.  Projects ranging from large e-commerce warehouses and distribution centers to data centers to "indoor agriculture" grow rooms can be very difficult to design due to high internal thermal loads, humidity levels, stratification, or pressure gradients and CFD allows Mestex to thoroughly analyze and sort out possible solutions.

So come on by the booth and, at least, say "hi".  We would love to discuss how we might help solve your application problems.

Why Do We Design Thermos Bottles?

Over the last couple of months since my last posting I have been very busy managing our movement into new markets and grasping at new opportunities.  One of the benefits of taking the deep dive into these markets is getting to look at some of the details of product design and application to the specific problem to be solved. 

This has raised a question in my mind.

Why does the mission critical industry design "thermos bottles" and then fret over the cost of and methods of getting rid of the heat that all those servers generate? 


There is something that strikes me as illogical about creating buildings or modular data centers with super insulated walls and ceilings that are guaranteed to trap the heat that is dumped into the hot aisle (assuming they have aisle separation).  Then the mechanical system is tasked with rejecting all of the pent up energy without costing the owner a fortune.  Is it any wonder that data centers are one of the largest consumers of electrical energy in the world?

Centuries ago architects and designers figured out that it is more efficient to cool a space if you simply dump the heat out to the atmosphere.  Buildings used to be designed to take advantage of stratification and stack effect to cause the hot air generated in the space to rise and leave the building.  No need to cool the air back down to a reasonable temperature and put it back into the space so that you can heat it all up again.  Lofted ceilings and roof lines came into the design world for a reason. 

So, why is the data center different?  Frankly, I don't know.  Why not take the hot aisle air and vent it out to the atmosphere?  Sure, you have to replace that exhausted air with new air from the outside but unless the data center is located in Death Valley the odds are that the air being brought into the building is at a lower temperature than the air that would be recycled from the hot aisle of a data center designed to operate under the latest ASHRAE TC 9.9 guidelines for best practices. 

My best guess why we continue to do what is intuitively illogical is inertia.  "We have always done it that way".  I think it is time to rethink the old ways and come up with creative solutions in the design of data centers.

An "Open Access Project" Update


The Mestex "Open Access Project" continues to move forward so I thought I would provide a brief update on the current research activity and the plans for the next few months.

The installation at the Mestex facilities in Dallas has been brought up to the expected final configuration with a total of 120 servers, intelligent PDUs, and switches distributed over 4 cabinets.  We have separated the hot and cold aisles with a combination of a hard wall and flexible "curtains"...this has turned out to be one of the more important features of the installation.  The indirect/direct evaporative cooling system is fully functional although we have also found the need to increase the hot aisle exhaust pressure relief in order to reduce the "back pressure" in the hot aisle. 

In addition to the combination temperature and humidity sensors that are part of the standard Aztec control system, and used by the DDC control system to manage the operation of the Aztec unit, we have also installed 32 10K thermistors.  These sensors feed information to our data acquisition system, which runs in the background collecting more granular detail about system performance.  They are located on the fronts and backs of the cabinets.

As I mentioned, we have spent some time resolving hot aisle/cold aisle separation issues.  Although the Aztec unit is monitoring cold aisle pressure and operating the supply fan to maintain a target positive pressure in the cold aisle we found that we still had hot aisle air migrating back into the cold aisle.  Over the last few days we have spent time filling small gaps and sealing around the cabinets more carefully and the results were immediately noticeable.  The cold aisle temperature was reduced by 5 to 6 degrees F. 

The other factor contributing to better separation was the reduction of the "back pressure" in the hot aisle.  We had addressed some of this earlier by removing the standard room exhaust grill and replacing it with a screen that had much greater free area.  While that made a measurable difference in server temperature rise, we had simply moved the pressure issue from inside the data pod to the return air ductwork on the Aztec unit.  That has now been resolved by doubling the size of the pressure relief openings in the return ductwork.  Supply fan operation is now improved, server temperature rise is now on target, and supply fan motor power consumption has been reduced.  We monitor and report real-time PUE for the pod, and these changes have lowered the real-time PUE to between 1.08 and 1.35, depending upon the system operating mode.
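
For readers less familiar with the metric, PUE is simply the ratio of total facility power to IT equipment power, so a value of 1.08 means the cooling and other overhead adds only about 8% on top of the server load.  The short sketch below shows how a real-time PUE figure like ours could be derived from power readings; the variable names and sample wattages are illustrative assumptions, not values pulled from our data acquisition system.

```python
# Minimal sketch of a real-time PUE calculation (illustrative values only).
def real_time_pue(it_power_kw: float, cooling_power_kw: float, other_power_kw: float = 0.0) -> float:
    """PUE = total facility power / IT equipment power."""
    total_facility_kw = it_power_kw + cooling_power_kw + other_power_kw
    return total_facility_kw / it_power_kw

# Example: 36 kW of server load with 3 kW of fan/pump power gives a PUE of about 1.08.
print(round(real_time_pue(it_power_kw=36.0, cooling_power_kw=3.0), 2))
```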

Now that we are beginning to see the kind of stable operation that we were anticipating we have started to plan the next phases of the research.

The Aztec unit is designed to operate in three modes, or some mixture of those modes, depending upon the sensor inputs.  The unit can operate in 100% fresh air cooling mode, in an indirect evaporative cooling mode, or in an indirect/direct evaporative cooling mode.  Each of those modes introduces characteristics that the data center industry wants to research. 

The next round of research will focus on two aspects of fresh air/evaporative cooling:

  • We will be installing coupons in the space to collect data on contaminants and their potential impact on the circuits in the servers.  This project is projected to run for at least 1 month and support is being provided by IBM.
  • Following the collection of this data (and possibly overlapping) we will be installing particle count measuring devices.  These devices will be installed upstream of the filters in the Aztec unit, downstream of the filters, within the cold aisle, and within the hot aisle.  The filter racks in the Aztec unit will allow us to evaluate filters of different MERV ratings and see how well they perform in a typical HVAC unit installation versus the controlled lab environment.

As you can tell, this site offers a unique opportunity for researchers to take their lab research findings and compare them to a real world application with real world equipment.  Mestex is pleased to be a part of this NSF sponsored research into data center cooling technologies.  We will be hosting a tour for the industry advisory board of the NSF-I/UCRC during their upcoming meeting at the University of Texas at Arlington.

Education and Training

Since I have been traveling extensively over the last few weeks I have not been able to give much thought to our blog.  However, the travels have also provided a little fuel for some comments.

First, I continue to be surprised/pleased to hear more and more presentations and discussions about evaporative cooling of data centers.  It seems that "the big guys" get it...cooling data centers costs a fortune using compressors/chillers and the servers can handle much higher temperatures than people realize.  If you run down the roster of large international web service or cloud service providers you will find that most of them have already implemented evaporative cooling or they have it in the construction plans. 

As great as this is, there are still market forces conspiring against this highly efficient cooling solution.  One is the concern over humidity levels in the data center.  This concern is compounded by the common use of relative humidity as the point of conversation when it is actually absolute humidity that should be considered.  This topic will likely be a point of debate for a long time to come, since some of the larger companies have concluded that absolute humidity doesn't matter in their facilities...especially with 2 or 3 year server refresh rates...while other members of this progressive group are not sure and choose the "safe path" of limiting absolute humidity or dewpoint in their spaces.
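
To illustrate the difference, the snippet below converts a dry bulb temperature and relative humidity reading into a dew point (a proxy for the absolute moisture content) using the Magnus approximation; the coefficients are standard textbook values and the sample readings are assumed examples, not data from any particular facility.

```python
import math

def dew_point_c(dry_bulb_c: float, rh_percent: float) -> float:
    """Approximate dew point (deg C) from dry bulb temperature and relative humidity
    using the Magnus formula."""
    b, c = 17.62, 243.12  # Magnus coefficients for water over liquid
    gamma = math.log(rh_percent / 100.0) + (b * dry_bulb_c) / (c + dry_bulb_c)
    return (c * gamma) / (b - gamma)

# Two spaces at the same 50% RH but different temperatures hold very different
# amounts of moisture: the warmer space has a much higher dew point.
print(round(dew_point_c(20.0, 50.0), 1))  # ~9.3 deg C
print(round(dew_point_c(30.0, 50.0), 1))  # ~18.4 deg C
```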

The one area where it seems that all of the large players agree is with regard to temperature.  It is virtually universal that ASHRAE 9.9 recommended guidelines are acceptable and, for many of these users, ASHRAE 9.9 allowable temperatures are OK.

The challenge for the industry is still finding a way to filter this information and confidence down to smaller operators and owners.  I have heard it described as an education issue but is that truly the case?  It is hard to find a computer or data center related design publication these days that does not promote higher temperatures as a feasible solution for cutting operating costs.  Are we just too busy to read these articles or do we not believe the wealth of research and experience that backs up the statements?

At a recent conference on data center design I sat at a lunch table with a group of design engineers and a manager of 13 data centers.  When asked how he learned about managing those centers, the response was that he was self-taught by attending conferences and talking to "experienced" data center managers.  So his knowledge of the work by ASHRAE and others was not a major factor in deciding on appropriate operating temperatures.  What he was learning was what these other managers had been doing over the last decade...going back to the "old days" when electric costs were low, low data center temperatures were the norm, and research had not yet shown that to be unnecessary.

So, if education is the issue then how do we go about it?  What mechanism will get the message through the daily clutter of information and time demands?  I don't have the answer...if I did I would implement it immediately....but it seems to be a key to moving the industry forward.

NEWS RELEASE


New data center construction expected to boom as demand triples

 

Mestex Open Access Project helps data center operators plan for “build as you grow” expansion

 
DALLAS, April 28, 2014 – The digital revolution is sapping the power grid, but a new approach to data center construction may help reverse the trend of ever-increasing energy consumption for powering and cooling these facilities. To help data center operators better understand their options, Mestex, the industry leader in evaporative cooling systems, is providing a free tool to demonstrate how infrastructure can be better deployed to manage competing demands for more capacity and greater energy efficiency.

 “Data centers are the enablers of this digital revolution,” said Mike Kaler, president of Mestex. “The increase in global digital demand and cloud computing is exponential. As demand rises, data centers that house digital information consume more electricity, half of it being used to cool the facility. We wanted to help people see how energy is being consumed and ways for managing infrastructure and costs.”

The company believes intelligent technology combined with a flexible, scalable and energy-saving approach is the best way to “build as you grow.” Adding plug-and-play cooling units – such as Mestex’s own Aztec Evaporative Cooling Units – as capacity increases is the most economical strategy for data centers to manage expansion or new construction while reducing total cost of ownership. Aztec systems are proven to lower power usage by 70% when compared to traditional air conditioning; the system’s digital controls, when integrated with other building automation systems, can extend that savings even further.

To help data center operators get a realistic picture of how their own expansion might play out, the company recently launched the Mestex Open Access Project to provide information technologists, facility managers and financial executives the ability to evaluate energy-saving concepts in a real-world environment. 

“We’ve opened access to our equipment, controls and data, because we want to encourage energy savings and demonstrate to data center decision makers that there are smart, effective ways to increase efficiency and optimize operations,” Kaler said.

The web-based interface offers visibility into the physical plant and air conditioning system of an operating data center being tested as a part of a project spearheaded by the National Science Foundation. The “open access” gives anyone with Internet access an unembellished look at how a data center is operating, in real time, 24/7.

The Open Access Project harnesses the power of Mestex’s direct digital control (DDC) system, which comes standard on all of its HVAC products and can be easily integrated with other HVAC vendors’ products and building automation systems to create an intelligent network that controls cooling for optimal efficiency, performance and longevity, as well as provides web-based system monitoring and management.

 

Note:

Mestex President Mike Kaler will be hosting a presentation on mission-critical cooling systems on Wednesday, April 30, at 10:30 a.m. at AFCOM Data Center World at the Mirage Casino-Hotel, Las Vegas, Nevada. The company is exhibiting (booth #1227) at the conference April 28 – May 2.

Links:


Mestex Open Access Project Live View  (To access, Internet Explorer 10 or above is required. Enter “guest” as user name and password.)

 

# # #

 

Mestex (www.Mestex.com), a division of Mestek, Inc., is a group of HVAC manufacturers with a focus on air handling and a passion for innovation. Mestex is the only HVAC manufacturer offering industry-standard direct digital controls on virtually all of its products, which include Applied Air, Alton, Aztec, Koldwave, Temprite and LJ Wing HVAC systems.

 

Media Contact:

Christina Divigard

Divigard & Associates

413 341 6780 or Christina@Divigard.com

 

 

NSF/Mestex Research Project Update at SEMI-THERM Conference in San Jose, California

Students from the University of Texas at Arlington will be presenting an update on the progress of a research consortium, partially funded by the National Science Foundation, that is focused on improving the efficiency of data center cooling. This presentation will be made during the SEMI-THERM conference in San Jose, California from March 9-13.

The work presented in this exhibit provides updates on the project since the last industrial advisory board (IAB) meeting at Villanova University in September 2013. The updates include completion of construction of an Aztec ASC-15 cooling unit, attachment of the cooling unit to an IT pod, construction of the internal details of the IT pod, construction of a duct for testing various cooling pads, and creation of a computational fluid dynamics (CFD) model of the IT pod and the ASC-15 unit.

The cooling unit, the ASC-15, is capable of operating in pure air-side economization, direct evaporative cooling, indirect evaporative cooling, and/or hybrid modes, and contains two blowers which can deliver up to 7,000 CFM to the IT pod. Various parameters of the cooling unit, such as blower rotational speed, inlet air temperature, supply air temperature, and outside air humidity, are available through an online portal. The ASC-15 is connected to the IT pod at the Mestex facility, which provides power and water to the modular research data center. Inside the IT pod, four cabinets, each containing thirty HP SE1102 servers, are placed in a hot/cold aisle configuration.

One of the HP SE1102 servers was tested in the UT-Arlington lab to find its maximum power consumption. The maximum measured power consumption is used to calculate the total dissipated heat per rack in the CFD model of the modular research data center. This CFD model will continue to be updated as changes are made to the IT pod or the cooling unit. For example, updates to the cooling pad model will be applied based on results from the various wet cooling pad tests that will be performed at UT-Arlington.
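
As a rough illustration of how the measured server power feeds the CFD boundary conditions, the sketch below converts an assumed per-server maximum draw into heat per rack and into the expected air temperature rise at the unit's rated airflow, using the standard sensible-heat relation Q(Btu/h) = 1.08 × CFM × ΔT(°F).  The 200 W per server figure is purely a placeholder assumption, not the measured HP SE1102 value from the UT-Arlington test.

```python
# Rough sizing sketch: rack heat load and supply-air temperature rise.
WATTS_PER_SERVER = 200.0      # assumed placeholder, not the measured value
SERVERS_PER_RACK = 30
RACKS = 4
AIRFLOW_CFM = 7000.0          # rated delivery of the ASC-15 blowers
BTU_PER_HR_PER_WATT = 3.412

heat_per_rack_w = WATTS_PER_SERVER * SERVERS_PER_RACK            # 6,000 W per rack
total_heat_btuh = heat_per_rack_w * RACKS * BTU_PER_HR_PER_WATT

# Sensible heat equation for standard air: Q (Btu/h) = 1.08 * CFM * delta_T (deg F)
delta_t_f = total_heat_btuh / (1.08 * AIRFLOW_CFM)

print(f"Heat per rack: {heat_per_rack_w / 1000:.1f} kW")
print(f"Air temperature rise across the pod: {delta_t_f:.1f} deg F")
```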

"Open Access Project" Update

Mestex is continuing to refine the systems and information stream as part of our "Open Access Project".  As a reminder, Mestex has established a research data pod as a member of an NSF project to improve the efficiency of data centers.  This pod is being cooled with an Aztec IDEC system managed by the Mestex DDC control system. 

The goals of the research project require frequent experiments.  Currently, Mestex is continuing to complete the basic infrastructure of the pod itself.  The video in the link below shows a quick walk-through update on the installation of a containment system.  As you can see, the installation is nearing completion.  In the meantime we are running the 51 servers, a 5 kw resistive load, and some ancillary equipment in the 4 cabinets that are in the pod.  https://onedrive.live.com/redir?resid=B173953F824409BC!7016&authkey=!AKcxEKWl7409R-8&ithint=video%2c.MP4

Since the IT equipment is operating while we finish the containment installation we are able to watch the performance of the IDEC cooling system in real time via  http://webctrl.aztec-server-cooling.com/.  This website is accessible to anyone and viewers can log on using the username "guest" and the password "guest".

One of the areas that concerns data center designers and operators is the performance of evaporative cooling systems during "shoulder" conditions, when ambient dry bulb temperatures are relatively low and ambient RH is relatively high.  Last week we had an opportunity to observe just that situation.

The time was 7:55 in the morning.  The ambient conditions were what most people would consider to be the worst scenario for evaporative cooling…relatively low DB and relatively high RH…in this case, 59.6 DB and 89% RH.  Under those conditions most people would expect the cold aisle conditions to be either too warm or too humid.

However, the cold aisle was operating at 78.9 DB and 56.7% RH.  The cold aisle setpoint is 80 DB.  The cooling tower integrated into the Aztec IDEC unit was on, the airflow dampers were positioned for 100% return from the hot aisle, and the system was providing enough cooling (even with this high RH) to maintain the cold aisle temperature and an 11 to 12 degree rise across the servers.  Note also that these conditions are inside the ASHRAE TC 9.9 A1 Allowable limits.
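
The reason this works is that an indirect evaporative system is driven by the wet bulb temperature, not by relative humidity alone.  The sketch below estimates wet bulb from dry bulb and RH using Stull's published approximation; at the morning conditions noted above (59.6 F DB, 89% RH) it comes out to roughly 57 F, well below the 80 F cold aisle setpoint, leaving plenty of room for heat rejection.  The function and conversion helpers here are an illustrative sketch, not part of the Aztec control code.

```python
import math

def f_to_c(t_f: float) -> float:
    return (t_f - 32.0) * 5.0 / 9.0

def c_to_f(t_c: float) -> float:
    return t_c * 9.0 / 5.0 + 32.0

def wet_bulb_c(dry_bulb_c: float, rh_percent: float) -> float:
    """Stull (2011) approximation of wet bulb temperature from dry bulb and RH."""
    t, rh = dry_bulb_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# Morning conditions from the post: 59.6 F dry bulb, 89% RH -> wet bulb around 57 F.
wb_f = c_to_f(wet_bulb_c(f_to_c(59.6), 89.0))
print(f"Estimated wet bulb: {wb_f:.1f} F")
```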
 
The "Open Access Project" will be continuing throughout 2014 and, most likely, into 2015.  This will provide ample opportunity to observe a wide range of operating environments for the IDEC system.  During that time there will also be research on filtration, fresh air cooling, and further refinement of control algorithms for fully integrated IDEC systems such as the Aztec product.
 

Indirect Evaporative Cooling Research Project Launched

Aztec ASC 3-D Model for CFD Research

ASME Paper Documents CFD Modeling of Aztec IDEC System

The 2013 ASME “International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems”, aka ”InterPACK 2013” is just concluding in San Francisco.  As the conference title implies there are papers and presentations from all over the planet that are focused on research into improving electronics, computers, and data centers.

One of those papers presents results from an on-going research project that Mestex has started with the College of Engineering at the University of Texas at Arlington.  This research project will likely go on for a couple of years and this paper presents some of the first findings that are being used to establish a “baseline” for the rest of the research.

The paper is listed in the proceedings as “InterPACK2013-73302”.  The human-readable title is “CFD MODELING OF INDIRECT/DIRECT EVAPORATIVE COOLING UNIT FOR MODULAR DATA CENTER APPLICATIONS” and the paper covers exactly what the title suggests.  The IDEC product that the paper covers is the Aztec ASC-20 and the goal is to establish that the factory data that we present in our literature can be validated against a detailed CFD model of the product.

By modeling the Aztec ASC-20 components and creating the 3-D CFD model using factory dimensional drawings the researcher was able to confirm that the published factory data is accurate and the ASC-20 will perform as predicted based upon the operating parameters.  This important result can be used to further our research into optimizing the performance of evaporative cooling and fresh air cooling solutions for mission critical/data center applications.  A full scale modular data center mock up is being installed at the Mestex facility and additional documentation and validation of the performance will be conducted over the next several months.  The CFD baseline model will be used to simulate filter performance and airflow changes prior to making the physical changes to the research module.

The Aztec and Alton DEC evaporative cooling products have been used in the industrial and large commercial markets since 1946.  Over the intervening decades the products have been refined and optimized.  This research project will take the product lines to another level of thermal performance, water use optimization, and control software optimization, with a specific emphasis on the needs of the mission critical market.

Politics and the Building Industry

Climate Change Initiative


Coal Fired Power Plants in Danger
I am not sure how many folks listened to President Obama's speech this week regarding climate change initiatives.  I know that I was not one of them.  However, I have read the document that served as the background for the speech, and there are some things in it that folks in the building design community, and the mission critical world in particular, should pay attention to.  Those things could have a significant impact on the types of systems that we can design and implement in the coming years.

The theme of the speech and the document is primarily reduction of carbon emissions and increases in "renewable" sources of energy.  There are some other things in the document that are focused on electric generation infrastructure.  However there is a potentially ominous element to that topic that is related to the overarching goal of reducing carbon emissions. 

By means of a "Presidential Memorandum" Mr. Obama has instructed the EPA to accelerate transitioning power plants to "clean" energy sources, i.e. anything but coal.  As we have seen in some other cases in the HVAC industry, as soon as the EPA has a mandate of that sort it moves quickly to implement regulations that may, or may not, be carefully thought out, raising the old "unintended consequences" issue.

In my opinion the danger is rapidly removing significant generating capacity from the grid at a pace that cannot be matched on the construction side.  Even though the document also outlines a directive to speed up permitting of power plants, it is still a fact that building a multi-megawatt power plant can take years.  With coal supplying roughly 40% of US electric generation, you can see how a too-quick implementation of rules that curtail its use could lead to problems.  Many states already operate on the edge of rolling blackouts and brownouts each summer, so shutting down or limiting coal fired plants could get ugly.

Exacerbating this problem is the rapid and continuing growth of the data center market.  When these facilities come on line they gobble up megawatts of generating capacity at a single site...and they can come on line in a matter of months, not years.  Even if they never reach full utilization, the power companies must be prepared to provide that power.  ASHRAE and others have tried, somewhat in vain, to communicate that these centers can operate without the heavy energy use of compressors or chillers.  As long as the local electric utility still has generating capacity that can be allocated to the data center this is OK...although not a very "sustainable" approach if you believe in that concept.  But if that same utility now has to shut down 10 or 20 percent of its generating capacity, then there may simply not be enough power to allow the luxury of overly cold air in the data center.

The implications for other building types are similar, although not nearly as extreme.  Systems that optimize the use of outside air as their primary cooling source augmented by smaller compressor or chiller plants could become the basis of design.  Concepts such as chilled beams that utilize higher chilled water temperatures and minimal fan power might need to migrate to smaller buildings than you see them in today.  And building shells will need to make more extensive use of passive and active shading systems.

So, once again, the building industry is going to be impacted by external forces that may have the best of intentions but that will also require rethinking of how we design and operate those buildings.

Dusting Off Your Data

CONTAMINANTS IN THE DATA CENTER

Time to get back on my soapbox again…this time it is about “contaminants” in data centers as an excuse to avoid using fresh air cooling or having outside air enter the white space.  The bottom line is that unless your data center is located in “an emerging country” then the odds of a contaminant-created hardware failure in anything like a short time frame are about the same as winning the lottery…assuming you take some pretty basic steps in the design.

Contaminant control, or more correctly, concern over contaminant control has been around for decades.  I remember doing some research over 25 years ago on the impact of ozone on telecommunications equipment.  Bell Labs, as it was known long ago, had performed some pretty interesting tests to document what could be a very real problem under the right circumstances.  The results of those tests indicated that, with the exception of certain locations, the air in the equipment room was worse than the air outside so it made more sense to flush the room with outside air than to avoid bringing outside air into the space.

Particle and gaseous contaminants CAN be a problem if ignored.  However, the extent of the problem and how quickly it manifests itself needs to be considered. 

Phenomena like copper creep and circuit bridging do occur…but only when the conditions at the server are right to support those failure modes.  Two things generally need to be in place for the failure mode to even begin.  First, there needs to be a fine coating of dust particles on the circuit boards.  Second, the relative humidity at the board needs to be at the deliquescent RH…the point where the dust starts to absorb moisture and become “wet”.  If the RH is too low then dust might affect localized temperatures on the board, but the mechanism to cause bridging simply does not exist.  The converse is also true…no dust, then no mechanism, even with a relatively high humidity level.

Dust can come from anywhere.  Every time someone enters the data center they bring in some amount of dust particles.  Every time a box is opened in the data center particles are created.  And, yes, every time outside air is brought into the data center it is possible that dust can enter.  In fact, a data center with no outside air is actually vulnerable to the worst kind of dust intrusion…uncontrolled infiltration through doors, cracks, pipe openings, or wind pressure.  Maintaining a positive pressure in the white space helps to prevent infiltration and keeps the worst dust (and gases) out of the data center. 

ASHRAE, through the TC 9.9 committee, has set a target for data center “cleanliness”.  It is ISO Class 8.  ASHRAE has also noted that ISO Class 8 conditions can be met with a MERV 8 filter…a common and inexpensive filter available at virtually any HVAC parts house.  If the air being filtered is coming from the outside then ASHRAE recommends a MERV 11 or MERV 13 filter.  These might not be quite as common as the MERV 8 but they are also readily available and can fit in a standard 2” filter rack.

The interesting side note about the ASHRAE recommendations is just how extremely conservative they are.  ASHRAE recommends no more than 15 µg/m³ of “fine” particles…defined as particles less than 2.5 µm in size.  However, IBM (who should know something about computers) has a limit of 150 µg/m³ and a “fine” particle definition of particles less than 5 µm in size.

Once again, owners are being led down a path to purchase cooling systems and equipment that fail to optimize their energy savings, through an inflated fear of something that happens very rarely in the developed world and is easily controlled with proper filtration.  Products such as our Aztec ASC indirect evaporative cooling systems are designed with MERV 14 filters in mind and can actually accept MERV 16 filters…the highest MERV rating…suitable for operating-room applications and capable of removing most bacteria and much tobacco smoke.  This allows the Aztec system to optimize the use of fresh air cooling and use a more efficient heat transfer system than air-to-air heat exchanger systems…and still exceed the extremely conservative ASHRAE recommendations for particulate control.

Equilibrium


Equilibrium…we all try to achieve it in our lives.  An argument can go on forever if both sides maintain a high energy level and refuse to cool things down.  An argument can end when both sides take it down a notch and each reaches a happy place that they can both accept…a state of emotional equilibrium.
Odd as it may seem your air conditioning system is trying to do the same thing…reach a happy state of equilibrium…a balance between the high energy state and the low energy state.  Fortunately the system won’t get there under most circumstances because when the high and low energy states are equal then the unit stops providing cooling.

To simplify our thinking about this, substitute the word “temperature” for the word “energy”.  Now remember back to your days in physics class and remember that energy flows from a high state to a low state until the two states match and then the flow stops.  An air conditioning system takes advantage of that basic law of physics by absorbing the warm energy in a room and sending it to a lower energy place where the warmth is released and the cycle can start all over again.
One problem in this description of an air conditioning unit is that we usually don’t want the high energy released back into our rooms so we have to send that energy outside the room, or building, to get rid of it.  We do that by using refrigerants or water to transport the heat energy.  We also have to be sure that when we send it outside, it is at a higher energy level than the outdoors.  That is why we have compressors (and chillers, which are just really big compressors) in our systems.  The compressors act both as a pump and as a device to actually add energy to the fluid that is pumped through the cooling coil in the room.  If you grab the side of a pipe entering a cooling coil it will feel relatively cold.  If you grab the pipe between the coil and the compressor it will feel a bit warmer.  If you grab the pipe on the leaving side of the compressor you might burn your hand.  The system has added enough energy to make sure that when the refrigerant or water reaches the outdoors it is at a higher energy state than the air outside the building.  That can be quite a challenge in a place like Phoenix or Dubai.

Most manufacturers realize that their equipment might be installed in those climates so they pick components in their systems that can operate under those circumstances.  But there are limits to what can be done.
The most widely available commercial cooling systems on the market are DX packaged units.  In order to satisfy the largest market (and sell the most equipment) these units are intended to be used for comfort cooling of people.  Since most people are “comfortable” when their office is around 75 degrees these units are designed around that operating point.  That is their happy point and that is the temperature of the air that is being returned to the cooling coil where heat energy can be absorbed into refrigerant or water and then sent outside to be removed.  The units will continue to operate at higher temperatures but remember that we need to be sure we send the heat outside at a higher level than the air outside. 

If you read the technical manuals for virtually every rooftop unit on the market you will see that, for a lot of really esoteric reasons, that rooftop unit is designed to operate at no more than 90 degrees returning to the cooling coil.  At that point the combination of components in the rooftop unit will be “maxed out” if the outside air temperature is in the 120 to 130 degree range. When the outside air temperature is 135 degrees on a roof in Phoenix then the high energy state and low energy state are so close together that almost no work is done and the temperature of the air coming out of the air conditioning unit starts to go up because the system is no longer rejecting much heat.  The system has quit working as intended and the situation usually spirals out of control as more heat builds up in the refrigerant or water.  Eventually so much heat has built up, and is compounded by the compressor, that the system shuts itself down to protect itself.
So, what does all this rambling have to do with anything?  Most of my blogs lately have been about mission critical/data center energy issues.  It is a big deal, and lots of folks are working on solutions, but economics sometimes trumps clear thinking or limits what can be achieved.

We are seeing more and more data centers specified with DX rooftop packaged units.  While these are normally high quality products they were originally designed to be at a happy place with, at most, 90 degree air being returned to the coil.  In a data center that is being designed to the latest ASHRAE standards the cold aisle can be anywhere between 80.6 and 113 degrees F.  When you allow a 20 degree F temperature rise across the servers before you return the air to the unit then I think you can see the problem.  The rooftop unit is being asked to operate well above its built in safety circuit limits.  Thus you end up with a self-limiting factor on how effective you can be in reducing the operating expense in the data center.  Even if you believe that your servers will be fine at 80.6 degrees F your HVAC unit probably will not be so fine.  And, to be honest, this same logic applies to CRAC units as they are nothing more than split DX systems.  So you have self-limited your options to cold aisle temperatures of no more than about 70 degrees F and your data center costs more to run than it could.
There is a class of rooftop unit that is better equipped to handle these situations and that class of equipment is commonly known as a DOAS, or Dedicated Outdoor Air System.  These systems, like our Applied Air FAP product, have been designed to expect Phoenix type temperatures across the cooling coil.  Returning hot aisle air at 105 or 110 degrees F is well within their “normal” operating ranges.  These systems are more expensive than a conventional rooftop packaged unit because of the components that are selected but they also provide the operating range that will allow the designer and operator to take advantage of the elevated temperatures that ASHRAE and the IT equipment people recommend for reducing data center operating expenses.
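
To make the arithmetic behind this explicit, the sketch below compares the return (hot aisle) temperature implied by a cold aisle setpoint and server temperature rise against a unit's maximum design return-air temperature.  The 90 F limit is the typical packaged-rooftop figure mentioned above and the 110 F limit comes from the DOAS range cited above; the helper functions themselves are an illustrative sketch, not any manufacturer's selection software.

```python
# Quick check: does the hot aisle return temperature exceed a unit's design limit?
def return_air_f(cold_aisle_f: float, server_rise_f: float) -> float:
    return cold_aisle_f + server_rise_f

def within_design_limit(cold_aisle_f: float, server_rise_f: float, max_return_f: float) -> bool:
    return return_air_f(cold_aisle_f, server_rise_f) <= max_return_f

# An 80.6 F cold aisle with a 20 F rise returns 100.6 F air:
# above the ~90 F limit of a typical comfort-cooling rooftop unit,
# but within the 105-110 F range cited for DOAS-class equipment.
print(within_design_limit(80.6, 20.0, max_return_f=90.0))   # False
print(within_design_limit(80.6, 20.0, max_return_f=110.0))  # True
```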

How I Spent My Summer Vacation

It has been a while since I have posted anything to this blog...no, I was not on sabbatical on some desert island...I have been traveling around North America talking to consulting engineers, contractors, and data center owners and operators.  This posting just provides a few insights that I garnered over the last 2 months on the road.

First, the data center/mission critical market continues to occupy the minds and the design resources of many, many companies in the design community.  It is clear that this is a market segment that is vibrant, and all indications are that it will continue to be for quite some time to come.  The latest issue of Datacenter Dynamics FOCUS indicated that the world now consumes over 300 terawatt-hours (TWh) annually to drive data centers, with the US alone consuming over 25 TWh.  The consumption in the US is projected to grow over 9% in 2013.  While this information points to a growing market, it also points to the urgent need for improved operating efficiency in data centers.

Second, and related to the first item, is the lack of knowledge about new "best practices" in data center design.  I have talked to dozens of engineers, contractors, and data center people who are not aware of the latest design guidelines from ASHRAE.  In fairness, those guidelines were only officially announced a few weeks ago...but they have been rumored and discussed for about a year now.  I mentioned in one of my earlier posts that education of the design community is an important, and ongoing, task.  This has been reinforced to me over the last 2 months.

Third, for those engineers and contractors who understand and embrace the new standards, is the challenge of convincing the data center people to adopt those standards.  This is less of a problem at the top levels of the data center company than it is on the floor of the data center.  The IT equipment operators who live in "the white space" seem not to understand the allowable operating temperatures of the equipment that they manage every day.  I have heard many different reasons for their reluctance to adopt the new best practices but I think it comes down to fear.  Because of stringent SLAs the operators worry about losing any equipment for any period of time...even though there is mounting research that this fear is unfounded.

Fourth, I have heard of several cases where the local electric utility has started to put limits on the available service capacity for planned centers.  In the US we are so comfortable with the idea that our electric grid can provide unlimited power that we forget that is not true.  We have a fixed number of power plants with only so much generating capacity.  With the tremendous growth of data centers, and data centers with 300 to 500 watt per square foot electrical demands, there is a limit to what a utility can do.  And timing is another element of the equation.  A data center can be built in a matter of months...a power plant takes years.  So even when a utility sees the demand coming, it cannot add capacity as quickly as the demand can be added.

So, these are a few observations from the last couple of months.  Of course there is more to the story, so feel free to comment on this post with any questions you might have.  I will try to respond as quickly as possible.

How to Save Almost $100,000 Per Year In Your 1 Megawatt Data Center


Over the last few weeks while I have been traveling there have been some interesting bits of information released in the mission critical world.
For example, Dell introduced their 12th generation PowerEdge servers.  This generation of servers is warranted to handle temperature excursions up to 45 degrees C, or 113 degrees F, for up to 90 hours per year.  One of Dell’s rationales behind marketing the server at those conditions was to allow fresh air cooling in virtually the entire continental US.  Other research by Dell has indicated that their servers can operate 87% of the year in Washington, DC using fresh air cooling alone.

The energy saving potential of raising the inlet temperatures that high can be enormous.  Instead of running chillers or compressors 8,760 hours a year, they operate only about 1,138 hours per year (the 13% of hours in the Washington, DC example when fresh air alone is not enough).
To put that into numbers is difficult but let’s try a little example.

If the PowerEdge server power consumption is 300 watts, then the cooling system must remove 300 watts times 8,760 hours per year, or 2,628 kWh of heat (roughly 8.97 million Btu).  That can be accomplished using mechanical cooling, fresh air cooling, or a combination of the two.
A pretty efficient HVAC system will remove about 4.5 watts of heat per watt of electrical energy used.  So cooling that PowerEdge server with mechanical cooling alone requires 2,628 kWh of heat divided by 4.5, or 584 kWh of compressor energy.

Cooling that same server using fresh air for 87% of the year requires only 75.8 kWh of compressor energy.  Of course, the fan energy stays the same in both cases, but the compressor savings of 508.2 kWh PER SERVER can really start to add up.  Even at an aggressive (low) electric rate of 4.5 cents per kWh, that amounts to $22.87 PER SERVER PER YEAR.
At modest densities of, say, 40 servers per rack, the savings amount to about $915 PER RACK PER YEAR.  Now consider how many racks are in the typical server room or data center.  If the data center has a server load of 1 megawatt, then a density of forty 300-watt servers per rack translates into about 83 racks.  So the annual savings would be almost $76,000 in this example.

To make the savings even greater, the same HVAC unit that provides the fresh air could also provide indirect evaporative cooling and completely eliminate the compressor-based cooling…adding roughly another $3.41 PER SERVER PER YEAR (the remaining 75.8 kWh at 4.5 cents), or about another $11,300 PER YEAR across those 83 racks.
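For anyone who wants to check the arithmetic, here is a small Python sketch that reproduces the example above; the server wattage, rack density, cooling efficiency, and electric rate are the same assumptions used in the text, so adjust them for your own site:

    # Reproduces the fresh-air cooling savings example above.
    server_watts = 300              # per-server power draw
    hours_per_year = 8760
    heat_per_watt_removed = 4.5     # watts of heat removed per watt of compressor power
    fresh_air_fraction = 0.87       # Dell's Washington, DC estimate
    rate_per_kwh = 0.045            # dollars per kWh

    heat_kwh = server_watts * hours_per_year / 1000                   # 2,628 kWh of heat per server
    compressor_kwh_all_year = heat_kwh / heat_per_watt_removed        # ~584 kWh, mechanical cooling only
    compressor_kwh_fresh_air = compressor_kwh_all_year * (1 - fresh_air_fraction)  # ~76 kWh
    savings_per_server = (compressor_kwh_all_year - compressor_kwh_fresh_air) * rate_per_kwh

    servers_per_rack = 40
    racks = 1_000_000 // (server_watts * servers_per_rack)            # 83 racks at a 1 MW server load
    total = savings_per_server * servers_per_rack * racks
    print(f"${savings_per_server:.2f} per server, ${total:,.0f} per year for {racks} racks")

Running the sketch lands on roughly $22.86 per server and about $76,000 per year, matching the example.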

Too Hot to Handle? A Simple Reminder

Well, this is embarrassing.  I have been in the HVAC industry for over 40 years now and have helped design and manufacture some of the more sophisticated products that have been introduced.  But in spite of that, I have to admit that I messed up.  And the lesson I was reminded of can help you too if your residential, commercial, or mission critical system is struggling to keep up with the heat.

Over the past couple of weeks the temperature here in Texas has been over 100 degrees F every day...sometimes up around 105 to 110.  That is nothing unusual for Texas in the summer and not as bad as last year.  But I started to notice that my residential HVAC unit was no longer able to maintain my thermostat setpoint of 77 to 79 degrees F.  The system was consistently running 3 degrees behind and running non-stop...and was only installed a year ago.

Refrigerant leak?  Undersized?  Dog left the door open?

No...it was one of the most common problems in any HVAC system that is not running correctly...the condenser coil was coated with a fine film of dirt.  Let me repeat that...a FINE film of dirt.  Not clogged...not even very obvious at a quick glance...a FINE film.  In my case it was actually a fine film of dryer lint, since the clothes dryer outlet was located behind the condensing unit...but the point is that had a service tech not looked at the coil with a flashlight, I never would have noticed the dirt.  Running water over the coil from a garden hose to wash off the film dropped the system head pressure and restored the system's ability to maintain the thermostat setpoint without running non-stop.

Many years ago Louisiana State University conducted some tests on residential HVAC systems to determine the impact of dirty condenser coils.  The results were eye-opening.  A fine film of dirt, similar to what I had on my system, can reduce system capacity by up to 20%.  If your home, business, or server room is too hot, imagine what getting that 20% of capacity back could do...and it would only cost you a bit of water and time to wash off the coils...with no service tech assistance required.

Availability

In the data center world there are several metrics related to “up time”.  You hear terms like SLA (“Service Level Agreement”) that define how many hours out of every 100,000 the servers are guaranteed to be up and running.  Data center people like to talk about SLAs that are “4-9s” or “5-9s”.  “4-9s” would be 0.9999…or the servers will be up 99.99% of the time.
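As a quick sanity check on what those “nines” mean in real hours, here is a short sketch using the 100,000-hour window described above:

    # Converting "nines" of uptime into allowed downtime per 100,000 hours.
    for nines in (3, 4, 5):
        uptime = 1 - 10 ** -nines                       # e.g., 4 nines -> 0.9999
        downtime_hours = (1 - uptime) * 100_000
        print(f"{nines}-9s: {uptime:.5f} uptime, {downtime_hours:.0f} hours down per 100,000 hours")

So “4-9s” allows about 10 hours of downtime in 100,000 hours of operation, and “5-9s” allows only 1.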

There are a couple of other metrics that are directly related to the cooling equipment.  One is MTBF, or “Mean Time Between Failures”.  Another is MTTR, or “Mean Time to Repair”.  A third metric, and the most meaningful, is “Availability”.  “Availability” is a measure of how many hours out of 100,000 the system will be available when you consider MTBF, MTTR, and routine service.
The formula is:    Availability = MTBF / (MTBF + MTTR)

When evaluating the Aztec indirect evaporative cooling unit and its components for a recent data center project, using MTBF and MTTR values from the Aztec Engineering, Technical Service, and Production departments, the following “Availability” numbers can be derived:
  • For routine maintenance of evaporative media the “Availability” is 0.999871
  • For pump and motor failures (MTBF cases) the “Availability” is 0.9999333 to 0.9999555
Since routine maintenance can be planned in a way that will not disrupt the overall “Availability” of an N+1, or better, facility, it really doesn’t matter that it is only “3-9s”.  In the cases where a failure might occur (the MTBF cases) the typical Aztec product is “4-9s” across the board.
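To see how the formula behaves, here is a quick sketch; the 200,000-hour MTBF matches the component life discussed below, while the 12-hour repair time is only an illustrative guess, not actual Aztec engineering data:

    # Availability = MTBF / (MTBF + MTTR)
    # The MTTR below is an illustrative guess, not actual Aztec engineering data.
    def availability(mtbf_hours, mttr_hours):
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # A fan or pump selected for a 200,000-hour life, with roughly half a day to swap it out:
    print(f"{availability(200_000, 12):.6f}")   # 0.999940 -- comfortably in "4-9s" territory

Notice how forgiving the math is: even a repair that takes a full 20 hours against a 200,000-hour MTBF still leaves the unit at “4-9s”.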

The Aztec indirect evaporative cooling unit can achieve these high levels of "availability" due to the inherent simplicity of a typical evaporative cooling system.  Fans and pumps are considered to be the only significant components in an evaporative cooling system that can fail.  In the case of the Aztec product these components are selected for an expected life of 200,000 hours...probably far longer than the building itself will be used for its original purpose.

A final consideration that was reviewed during this analysis was the skill level required for each repair or maintenance task.  Although this cannot be captured in a typical metric such as MTTR, it is an important consideration for the building owner.  Since an evaporative cooling unit such as the Aztec unit contains no refrigerants, the vast majority of tasks can be accomplished by what would traditionally be called facilities maintenance personnel.  No special licensing would be required.  It actually turns out that some of the smallest elements of the system are the only ones that might require a licensed technician: replacing contactors and relays in the unit control and power circuits would most likely require a licensed electrical service technician.

In general the "availability" of an evaporative cooling system, such as the Aztec system, will be at least as high as any competing technology and, likely, higher.


Preaching to the Choir

Electrical Power Meters Keep Spinning
I have had a busy few weeks traveling to meetings and visiting with owners, operators, engineers, and researchers.  This has given me an interesting perspective on an issue that our industry needs to address.  My awareness of the issue was heightened by an editorial in Mission Critical Magazine that bemoaned the lack of progress in data center design due to secrecy regarding "best practices".

I came away from all of those meetings with the sense that there are many very smart people who know how to design more efficient solutions to energy use in mission critical applications.  "Best practices" can be described by experts from the largest server manufacturers, global data center developers/operators, and from academia.  The issue is that we are all sitting around a large table in a closed meeting room and sharing that knowledge with others who already have a pretty good idea what to do.  We are "preaching to the choir".

The result is that the vast majority of data centers, server rooms, and telecom facilities are operating in very inefficient ways.  While a Microsoft might be able to design a data center with a 1.2 PUE, the rest of the world is struggling to reach 2.0.
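To put those PUE numbers in perspective, here is a short sketch using a hypothetical 1 MW IT load (PUE is simply total facility power divided by IT equipment power):

    # PUE = total facility power / IT equipment power.
    # Hypothetical 1 MW IT load, shown at the two PUE levels mentioned above.
    it_load_kw = 1000
    hours_per_year = 8760
    for pue in (1.2, 2.0):
        overhead_kwh = it_load_kw * (pue - 1) * hours_per_year
        print(f"PUE {pue}: {overhead_kwh:,.0f} kWh per year of non-IT energy")

The gap between the two is roughly 7,000,000 kWh per year spent on cooling and other overhead for exactly the same IT work.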

This came out in a technical committee meeting at ASHRAE's mid-year meeting a few days ago.  A server cooling system manufacturer commented that he finds it very difficult to convince smaller users to adopt the latest operating standards, even though doing so could save those users tens of thousands of dollars a year in energy costs.  This sentiment was echoed by several around the room and pointed to how difficult it has been to educate the broader public on the reliability of modern equipment in warmer rooms.

And when I say "broader public" I mean just that.  The mechanical design director for a global retail data center operator told me that he knows his equipment will run just fine at 78 or 80 degrees F inlet temperatures, but his customers have not gotten the message and demand a "cold" room.  It seems that until corporate IT managers and executives understand all of this we will continue to see skyrocketing energy use by data centers.  Even small server rooms could benefit from elevated temperatures if key elements of "best practices" were implemented.  So-called "legacy" data centers might be difficult to retrofit, but they can certainly be upgraded with the basic elements of "best practices"...if only the occupants understood what is possible.

The industry has a massive educational challenge if it is to stem the rising cost and consumption of energy.  And the education cannot come soon enough, because the projections are that server power densities will continue to climb and data storage power densities will climb even faster.  Today we talk about densities of 300 watts per square foot, but systems are already being designed that push almost 10 times that.  It may seem that we have an endless supply of power from the grid, but there are only so many power plants around the world and building a new one takes a decade or longer...data center power consumption grows much faster and will eventually stress grids around the world if we cannot educate the "broader public" more effectively.