
The Texas Experience

Once again it has been many months since I have posted anything. Since then we have obviously been dealing with the pandemic as well as learning the impact of a weak cybersecurity policy. Although I could probably write a novel regarding the cybersecurity issue, there are many people far more expert on that topic.

Today I want to go back to the original purpose of this blog, which was to capture and share some random thoughts related to the industry.

One of the latest events that started the wheels turning in my head again was the partial shutdown of the Texas electrical grid. Again, there are experts in grid design and operation who can discuss, and have discussed, this event in great technical detail. There has also been the now-expected political finger-pointing and partisan debate. But I wanted to share some of those random thoughts about what this "localized" event might teach us.


Many engineers, especially those involved in critical HVAC infrastructure, are already aware of the fragile nature of the electrical grid in many parts of the US. At Mestex we provide systems for many different types of applications where a power outage could have a costly impact. Preparing for those potential failures was usually someone else's problem: the electrical engineers and suppliers of standby power systems were expected to handle the "short" periods without electrical power. Although we researched ways to integrate backup operation into our equipment, the results would have been too expensive to be marketable. What Mestex has continued to focus on is applying systems that use "site resources" efficiently in order to reduce the demands on the backup systems.

Looking at a broader picture, we have seen the global trend toward "electrification". The goal, it seems, is to reduce greenhouse gases and other atmospheric pollutants that contribute to climate change. While some are still skeptical about climate change, there is now a consensus among scientists around the world, and many countries and large corporations are on board with taking steps to mitigate their impact. Electrification is intended to move the source of pollutants away from the "site" where power is used to the "source" where power is generated. In theory this allows better control of contaminants at a single point instead of at hundreds or thousands of "site" points. It also facilitates the use of alternative energy sources, such as wind generators, that would be difficult to implement at the "site" level. So we have a relative rush to require electric vehicles, electric residential heating systems, and even electric commercial/industrial heating.

At the same time that electrification is moving forward in areas the average person can see, our ever-increasing digital life is converging with our daily power-consuming life. Data centers are being built and turned on almost daily around the world. To the average person this is great because it means they are always connected no matter where they go. It also helps the electrification effort by providing the opportunity for sophisticated remote traffic management, demand-controlled power distribution, and "smart" home appliances. But the electrical power consumed by data centers is almost mind-blowing to the average person. A single, moderately efficient, small data center can consume as much electrical power as five thousand homes. Clusters of large data centers (as is common due to scale and locale) can draw enough electrical power to support entire towns or even entire less populated countries.

When these data centers or data center clusters are inserted into an already fragile electrical grid they add a strain factor that was not anticipated when the power station was designed 20 or 30 years ago. Data centers can be designed, built, and activated in months versus power stations that require years to complete. It is inevitable that a mismatch of power supply and power demand will occur.

It seems to me that part of what the Texas experience showed us is, first, that electric power is critical to basic life-support facilities such as water and sanitation. My second thought is that as much thought and research should be put into the development of highly efficient, "clean", "site" energy systems as into the electrification idea. Off-loading the grid with effective "site" solutions could help balance supply and demand on the grid. Many large companies have already taken steps with solar arrays over their parking lots, small-scale wind generators on site, or private co-generation plants. In most cases, though, these are extremely expensive solutions, and their implementations have been driven as much by corporate "green" initiatives as anything.

Companies should also not lose sight of current technologies that are still viable "site" solutions and counterbalances to grid overloads. Although Mestex has transformed itself over the last few years into generating more revenue from cooling solutions than from its traditional natural gas heating solutions, most people still consider the company to be a gas heating company. In applications that require large amounts of outside air, or that simply move huge amounts of air that must be heated, a modern and efficient natural gas heating system is a much more "climate friendly" "site" solution than an equivalent electric heat "source" solution. Mestex can provide such systems based on decades of manufacturing experience and research into optimized digital control.

 Engineers and companies can meet their goals of responsible environmental stewardship by keeping in mind the contribution of "site" solutions as they also work to meet the transition to greater electrification.

Mestex Data Center Research Published

One of the initiatives within Mestex is our collaboration with the University of Texas at Arlington ("UTA") and the National Science Foundation ("NSF") Industry University Cooperative Research Centers ("I/UCRC").  Our area of particular interest and research support is related to reducing the energy consumption of data centers via the use of outside air and evaporative (adiabatic) cooling solutions.  Most of this research stays hidden behind closed doors, but when the research results in findings that can significantly aid the mission critical industry, those results are published.

Over the last several years, Mestex has hosted a small data pod at our facility in Dallas, Texas.  That data pod has been cooled by an "off the shelf" Mestex product with the only variations from the standard catalog item being the ability to add special filters (one of the research areas regarding particulates) and the control software designed by Mestex specifically for data centers.  We recently received the following email from Dr. Dereje Agonafer, Presidential Distinguished Professor at the University of Texas at Arlington.


Congratulations Dr. Shah and Dr. Awe and the Mestex team on getting this paper published in a journal.



We continue to be proud of our working relationship with Mestex – it has resulted in significant research implemented in product applications as well as archival publications. In 2016, our joint work was featured in "Breakthroughs from NSF I/UCRCs," the 6th edition of the Compendium, which is published in a printed book and online and is intended for Congressional and White House staffers, visitors to the NSF, and members of the general public, to help them realize the impacts of research taking place within I/UCRCs. Our joint work with Mestex was featured in the book – reference below:



2016 Compendium, Successful Industry-Nominated Technological Breakthroughs for NSF I/UCRCs in “More Efficient Data Centers: Maximizing Airside Cooling,” p. 111-112, http://faculty.washington.edu/scottcs/NSF/2016/NSF-book-2016-Final.pdf

The research documented in this paper helps to show data center industry designers and operators the benefits of air-side cooling for their centers.  Further research findings were just published in the ASME Journal of Electronic Packaging.  That research, a collaboration between Mestex, UTA, and IBM, focused on the reliability of cooling data center electronic equipment with outside air in a somewhat dirty environment.  Ongoing research between Mestex and UTA is now focusing on filter performance.

ASME Paper Documents Reliable Data Center Operation With Outside Air and Evaporative Cooling

One of the longest running and most debated topics regarding data center operation is whether or not you can reliably cool a modern data center using outside air alone or with supplemental evaporative cooling.  If you can successfully operate a data center without using any compressorized equipment there are obviously huge energy and money savings.

The two key factors that hold operators back from implementing an obvious saving strategy are fear of equipment failures due to temperature/humidity excursions and fear of airborne contaminants.  While server manufacturers publish data in their specification sheets that clearly indicate their equipment can tolerate a wide range of temperature and humidity, there is not much information regarding the impact of particulates and other contaminants.  ASHRAE has recognized the robustness of modern IT equipment by expanding the recommended and allowable temperature and humidity ranges in its widely followed data center design guidelines.  Very little is said regarding air quality other than a recommendation to use at least a MERV 8 filter system.

Over the last 5 years the Mestex division of Mestek has hosted a National Science Foundation research site at its manufacturing facility in Dallas.  This site is part of an Industry/University Cooperative Research Center, with principal research from the Mechanical and Aerospace Engineering Department at the University of Texas at Arlington.  A fully instrumented "data pod" has been operating using a commercially available indirect/direct evaporative cooling system from Mestex that can also operate in 100% fresh air mode.  In addition to the dozen sensors normally included with the Aztec brand IDEC system from Mestex, the "pod" includes an array of 64 sensors located on the front and rear of the four server racks.  Data has been streamed from all sensors every 15 seconds for the last 4 years.  In addition to this detailed tracking of temperature and humidity conditions, there have been a number of studies conducted using copper and silver coupons to evaluate the corrosion potential of operating with outside air and evaporative cooling.  Keep in mind that this application is in Dallas, Texas...a relatively hot/humid climate.  In addition, because the "pod" is installed between two manufacturing buildings in an industrial zone near downtown Dallas, the measured air quality around the "pod" is classified as G2, or moderately harmful to printed circuit boards.

The June 2017 edition (Volume 139) of the ASME Journal of Electronic Packaging includes a paper presenting the results of the last 4 years of research at this site.  The paper, entitled "Qualitative Study of Cumulative Corrosion Damage of Information Technology Equipment in a Data Center Utilizing Air-Side Economizer Operating in Recommended and Expanded ASHRAE Envelope", provides a comprehensive look at the impact of operating a data center in a "real world" application.

The most interesting point presented in the summary section of the paper is that, in spite of the servers installed in this test site already being several years old, there has not been a single server failure in the entire four years of operation.  The ability to dramatically reduce the cost of operating a data center...without unfounded concerns about reliability...is finally being proven true.

Relative Humidity – It’s all relative


A Guest Article by Jim Jagers

The other day I was conducting a training class, and we were discussing evaporative cooling. Someone said they didn’t think evaporative cooling would work very well in their area because the summer temperatures were 90°F plus with 90% RH. If you were to look at many psychrometric charts, you’d see this point is, dare I say it, “off the chart”. To get a feel for this, consider that a steam room generally runs about 104°F and 100% RH. At 90°F with 90% RH the heat index is 122°F. It’s doubtful the temperature and humidity are ever as bad at the same time as he imagined.

People generally associate high temperatures with high humidity percentages. It’s more likely that high temperatures will be associated with lower humidity percentages. At 80°F and 41% RH the heat index is 80°F. 80 degrees feels like 80 degrees. At this point there is approximately 0.009 pounds of moisture per pound of dry air in the atmosphere. If the moisture content remained constant and the air warmed to, say, 90°F, the relative humidity would actually drop to about 30%. Conversely, if the moisture content remained constant and the temperature dropped to 70°F, the relative humidity would increase to about 57%. This is because cooler air can hold less moisture than warmer air, and relative humidity is the ratio of the moisture in the air compared to the amount of moisture the air (at a specific temperature) can hold, expressed as a percentage.
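
For readers who like to check those numbers, here is a minimal Python sketch of the calculation. It uses the standard Magnus approximation for saturation vapor pressure and assumes sea-level atmospheric pressure; the function names are ours, purely for illustration.

```python
# Hold the moisture content at ~0.009 lb of water per lb of dry air and
# watch the relative humidity change with dry-bulb temperature.
import math

P_ATM_KPA = 101.325  # sea-level atmospheric pressure, kPa (assumed)

def sat_vapor_pressure_kpa(temp_f):
    """Saturation vapor pressure over water, kPa (Magnus approximation)."""
    t_c = (temp_f - 32.0) * 5.0 / 9.0
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(temp_f, humidity_ratio):
    """RH (%) for a dry-bulb temp (F) and humidity ratio (lb water / lb dry air)."""
    p_w = humidity_ratio * P_ATM_KPA / (0.622 + humidity_ratio)
    return 100.0 * p_w / sat_vapor_pressure_kpa(temp_f)

W = 0.009  # the constant moisture content from the example above
for t in (70, 80, 90):
    print(f"{t} F -> {relative_humidity(t, W):.0f}% RH")
# prints roughly 58%, 41%, and 30% RH, matching the paragraph above
```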

People usually think of their air conditioner as providing cool, dry air in the summer, and it does, because it does both sensible and latent cooling. Sensible cooling lowers the temperature we sense, and latent cooling removes the moisture. The air entering the coil may be 78°F and have 0.0101 lbs of moisture per pound of dry air. The coil temperature may be 45°F and thus the leaving air may be 60°F (it won’t be 45°F because the moisture in the air absorbs some of the cooling). At this point the leaving air may have a moisture content of 0.0062 lbs per pound of dry air. This is a significant reduction in moisture, and it is evidenced by the water dripping from the evaporator coil. The leaving air is much drier than the entering air.
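
To put some rough numbers on that sensible/latent split, here is a short sketch. The 1,000 CFM airflow, air density, and heat constants are illustrative textbook values, not figures from the example above.

```python
# Rough sensible/latent split for the coil example, under assumed conditions.
CFM = 1000.0               # assumed airflow
AIR_DENSITY = 0.075        # lb dry air per cubic foot (approx.)
CP_MOIST_AIR = 0.244       # Btu per lb-F (approx.)
H_FG = 1061.0              # Btu per lb of water condensed (approx.)

t_in, w_in = 78.0, 0.0101    # entering air from the example
t_out, w_out = 60.0, 0.0062  # leaving air from the example

lb_dry_air_per_hr = CFM * AIR_DENSITY * 60.0
sensible_btuh = lb_dry_air_per_hr * CP_MOIST_AIR * (t_in - t_out)
latent_btuh = lb_dry_air_per_hr * (w_in - w_out) * H_FG
condensate_gph = lb_dry_air_per_hr * (w_in - w_out) / 8.34  # lb of water -> gallons

print(f"sensible: {sensible_btuh:,.0f} Btu/h")    # ~19,800 Btu/h
print(f"latent:   {latent_btuh:,.0f} Btu/h")      # ~18,600 Btu/h
print(f"condensate: {condensate_gph:.1f} gal/h")  # ~2 gal/h dripping off the coil
```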

However, in relative terms the air coming off the evaporator coil in the air handler has a relative humidity of 100%, or close to it. Remember, cool air can’t hold as much water as warm air. When the air entering the coil contacts the cold fins, it cools rapidly. Condensation occurs when air can’t hold the moisture it contains. At this point the air is fully saturated, meaning its relative humidity is 100%.

The point to this brief essay is, as I said at the start, relative humidity is all relative - to the moisture in the air and the air temperature. Warm air isn’t necessarily humid; cool air isn’t necessarily dry, relatively speaking.

Why Do We Design Thermos Bottles?

Over the last couple of months since my last posting I have been very busy managing our movement into new markets and grasping at new opportunities.  One of the benefits of taking the deep dive into these markets is getting to look at some of the details of product design and application to the specific problem to be solved. 

This has raised a question in my mind.

Why does the mission critical industry design "thermos bottles" and then fret over the cost of and methods of getting rid of the heat that all those servers generate? 


There is something that strikes me as illogical about creating buildings or modular data centers with super insulated walls and ceilings that are guaranteed to trap the heat that is dumped into the hot aisle (assuming they have aisle separation).  Then the mechanical system is tasked with rejecting all of the pent up energy without costing the owner a fortune.  Is it any wonder that data centers are one of the largest consumers of electrical energy in the world?

Centuries ago architects and designers figured out that it is more efficient to cool a space if you simply dump the heat out to the atmosphere.  Buildings used to be designed to take advantage of stratification and stack effect to cause the hot air generated in the space to rise and leave the building.  No need to cool the air back down to a reasonable temperature and put it back into the space so that you can heat it all up again.  Lofted ceilings and roof lines came into the design world for a reason. 

So, why is the data center different?  Frankly, I don't know.  Why not take the hot aisle air and vent it out to the atmosphere?  Sure, you have to replace that exhausted air with new air from the outside but unless the data center is located in Death Valley the odds are that the air being brought into the building is at a lower temperature than the air that would be recycled from the hot aisle of a data center designed to operate under the latest ASHRAE TC 9.9 guidelines for best practices. 

My best guess why we continue to do what is intuitively illogical is inertia.  "We have always done it that way".  I think it is time to rethink the old ways and come up with creative solutions in the design of data centers.

Mestex and the National Science Foundation Advisors Meet in Dallas

Although it has been far too long since I have posted to this blog, due to my travel schedule, I have some news to share.

Over the last few days (Oct. 1 & 2) we have been participating in the Industry Advisory Board ("IAB") meeting of the NSF-I/UCRC ES2 ("National Science Foundation-Industry/University Cooperative Research Centers Energy Smart Electronic Systems") research consortium.  That mouthful of letters represents a group of universities and companies whose expressed goal is to reduce the energy consumption of data centers by 20-35%.

The consortium is currently working on fourteen research projects and Mestex serves as an advisor ("mentor") on three of those projects.  Two of the projects that Mestex is mentoring cover evaporative and fresh air cooling of data centers and contaminants in data centers that use fresh air cooling.  As you might guess, the project on evaporative and fresh air cooling offers the greatest opportunity for the consortium to reach the stated goals.  In order to support that research, Mestex has installed a small data pod at its facility in Dallas and is cooling that data pod with a commercially available Aztec ASC-5 unit.  The ASC-5 has built-in DDC controls that facilitate the use of multiple temperature and humidity sensors for control without any special modifications.  The controls also include a provision for pressure-sensing control, and that is also implemented in this case.

In addition to the data that is presented by the standard Aztec DDC controls there are additional thermocouples and sensors installed that are streaming data to researchers at the University of Texas at Arlington.

One of the most critical considerations that prevents many data center operators from reducing their energy consumption by huge amounts is the reluctance to introduce outside air into the facility.  The second Mestex project is focused on that research, and we were fortunate to have one of the world's experts on contamination control provide test coupons and laboratory analysis of the results.  Dr. Prabjit "PJ" Singh, of IBM, provides guidance and analysis to companies around the world and is a major source of information for the ASHRAE TC 9.9 committee on data center cooling.  Dr. Singh, Dr. Dereje Agonafer from the University of Texas at Arlington, and several members of the NSF IAB toured the Mestex facility at the conclusion of the meetings this week.

During the tour, Drs. Singh and Agonafer learned about the technology behind the patented "Digital High Turndown Burner" that was developed at Mestex.  Jim Jagers, Mestex Sales Manager, conducted the tour and provided a "deep dive" into how this unique technology works before the group proceeded to the research data pod for additional discussions.



An "Open Access Project" Update


The Mestex "Open Access Project" continues to move forward so I thought I would provide a brief update on the current research activity and the plans for the next few months.

The installation at the Mestex facilities in Dallas has been brought up to the expected final configuration with a total of 120 servers, intelligent PDUs, and switches distributed over 4 cabinets.  We have separated the hot and cold aisles with a combination of a hard wall and flexible "curtains"...this has turned out to be one of the more important features of the installation.  The indirect/direct evaporative cooling system is fully functional although we have also found the need to increase the hot aisle exhaust pressure relief in order to reduce the "back pressure" in the hot aisle. 

In addition to the combination temperature and humidity sensors that are part of the standard Aztec control system, and used by the DDC control system to manage the operation of the Aztec unit, we have also installed thirty-two 10K thermistors.  These sensors feed information to our data acquisition system, which runs in the background collecting more granular detail about system performance.  The sensors are located on the fronts and backs of the cabinets.

As I mentioned, we have spent some time resolving hot aisle/cold aisle separation issues.  Although the Aztec unit is monitoring cold aisle pressure and operating the supply fan to maintain a target positive pressure in the cold aisle we found that we still had hot aisle air migrating back into the cold aisle.  Over the last few days we have spent time filling small gaps and sealing around the cabinets more carefully and the results were immediately noticeable.  The cold aisle temperature was reduced by 5 to 6 degrees F. 

The other factor contributing to better separation was the reduction of the "back pressure" in the hot aisle.  We had addressed some of this earlier by removing the standard room exhaust grill and replacing it with a screen that had much greater free area.  While that made a measurable difference in server temperature rise, we had simply moved the pressure issue from inside the data pod to the return air ductwork on the Aztec unit.  That has now been resolved by doubling the size of the pressure relief openings in the return ductwork.  Supply fan operation is now improved, server temperature rise is on target, and supply fan motor power consumption has been reduced.  We monitor and report real-time PUE for the pod, and these changes have lowered the real-time PUE to between 1.08 and 1.35, depending upon the system operating mode.
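
As a reminder of what that metric means, here is a minimal sketch of the PUE arithmetic (total facility power divided by IT power). The power readings in the example are made up for illustration; they are not the actual metered values from the pod.

```python
# Minimal real-time PUE calculation under assumed power readings.
def real_time_pue(it_power_kw, cooling_power_kw, other_power_kw=0.0):
    """PUE = total facility power / IT equipment power."""
    total = it_power_kw + cooling_power_kw + other_power_kw
    return total / it_power_kw

# Hypothetical example: 20 kW of server load with the cooling unit drawing 1.8 kW
print(round(real_time_pue(it_power_kw=20.0, cooling_power_kw=1.8), 2))  # 1.09
```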

Now that we are beginning to see the kind of stable operation that we were anticipating we have started to plan the next phases of the research.

The Aztec unit is designed to operate in three modes, or some mixture of those modes, depending upon the sensor inputs.  The unit can operate in 100% fresh air cooling mode, in an indirect evaporative cooling mode, or in an indirect/direct evaporative cooling mode.  Each of those modes introduces characteristics that the data center industry wants to research. 

The next round of research will focus on two aspects of fresh air/evaporative cooling:

  • We will be installing coupons in the space to collect data on contaminants and their potential impact on the circuits in the servers.  This project is projected to run for at least 1 month and support is being provided by IBM.
  • Following the collection of this data (and possibly overlapping) we will be installing particle count measuring devices.  These devices will be installed upstream of the filters in the Aztec unit, downstream of the filters, within the cold aisle, and within the hot aisle.  The filter racks in the Aztec unit will allow us to evaluate filters of different MERV ratings and see how well they perform in a typical HVAC unit installation versus the controlled lab environment.

As you can tell, this site offers a unique opportunity for researchers to take their lab research findings and compare them to a real world application with real world equipment.  Mestex is pleased to be a part of this NSF sponsored research into data center cooling technologies.  We will be hosting a tour for the industry advisory board of the NSF-I/UCRC during their upcoming meeting at the University of Texas at Arlington.

Education and Training

Since I have been traveling extensively over the last few weeks I have not been able to give much thought to our blog.  However, the travels have also provided a little fuel for some comments.

First, I continue to be surprised/pleased to hear more and more presentations and discussions about evaporative cooling of data centers.  It seems that "the big guys" get it...cooling data centers costs a fortune using compressors/chillers and the servers can handle much higher temperatures than people realize.  If you run down the roster of large international web service or cloud service providers you will find that most of them have already implemented evaporative cooling or they have it in the construction plans. 

As great as this is there are still market forces that are conspiring against this highly efficient cooling solution.  One is the concern over humidity levels in the data center.  This concern is compounded by the common use of relative humidity as the conversation point when it is actually absolute humidity that should be considered.  This topic will likely be a point of debate for a long time to come since some of the larger companies have concluded that absolute humidity doesn't matter in their facilities...especially with 2 or 3 year server refresh rates...and other members of this progressive group are not sure and choose the "safe path" of limiting absolute humidity or dewpoint in their spaces.

The one area where it seems that all of the large players agree is with regard to temperature.  It is virtually universal that ASHRAE 9.9 recommended guidelines are acceptable and, for many of these users, ASHRAE 9.9 allowable temperatures are OK.

The challenge for the industry is still finding a way to filter this information and confidence down to smaller operators and owners.  I have heard it described as an education issue but is that truly the case?  It is hard to find a computer or data center related design publication these days that does not promote higher temperatures as a feasible solution for cutting operating costs.  Are we just too busy to read these articles or do we not believe the wealth of research and experience that backs up the statements?

At a recent conference on data center design I sat at a lunch table with a group of design engineers and a manager of 13 data centers.  When asked how he learned about managing those centers, the response was that he was self-taught by attending conferences and talking to "experienced" data center managers.  So his knowledge of the work by ASHRAE and others was not a major factor in deciding on appropriate operating temperatures.  What he was learning was what these other managers had been doing over the last decade...going back to the "old days" when electric costs were low, low data center temperatures were the norm, and research had not yet shown that to be unnecessary.

So, if education is the issue then how do we go about it?  What mechanism will get the message through the daily clutter of information and time demands?  I don't have the answer...if I did I would implement it immediately....but it seems to be a key to moving the industry forward.

NEWS RELEASE


New data center construction expected to boom as demand triples

 

Mestex Open Access Project helps data center operators plan for “build as you grow” expansion

 
DALLAS, April 28, 2014 – The digital revolution is sapping the power grid, but a new approach to data center construction may help reverse the trend of ever-increasing energy consumption for powering and cooling these facilities. To help data center operators better understand their options, Mestex, the industry leader in evaporative cooling systems, is providing a free tool to demonstrate how infrastructure can be better deployed to manage competing demands for more capacity and greater energy efficiency.

 “Data centers are the enablers of this digital revolution,” said Mike Kaler, president of Mestex. “The increase in global digital demand and cloud computing is exponential. As demand rises, data centers that house digital information consume more electricity, half of it being used to cool the facility. We wanted to help people see how energy is being consumed and ways for managing infrastructure and costs.”

The company believes intelligent technology combined with a flexible, scalable and energy-saving approach is the best way to “build as you grow.” Adding plug-and-play cooling units – such as Mestex’s own Aztec Evaporative Cooling Units – as capacity increases is the most economical strategy for data centers to manage expansion or new construction while reducing total cost of ownership. Aztec systems are proven to lower power usage by 70% when compared to traditional air conditioning; the system’s digital controls, when integrated with other building automation systems, can extend that savings even further.

To help data center operators get a realistic picture of how their own expansion might play out, the company recently launched the Mestex Open Access Project to provide information technologists, facility managers and financial executives the ability to evaluate energy-saving concepts in a real-world environment. 

“We’ve opened access to our equipment, controls and data, because we want to encourage energy savings and demonstrate to data center decision makers that there are smart, effective ways to increase efficiency and optimize operations,” Kaler said.

The web-based interface offers visibility into the physical plant and air conditioning system of an operating data center being tested as a part of a project spearheaded by the National Science Foundation. The “open access” gives anyone with Internet access an unembellished look at how a data center is operating, in real time, 24/7.

The Open Access Project harnesses the power of Mestex’s direct digital control (DDC) system, which comes standard on all of its HVAC products and can be easily integrated with other HVAC vendors’ products and building automation systems to create an intelligent network that controls cooling for optimal efficiency, performance and longevity, as well as provides web-based system monitoring and management.

 

Note:

Mestex President Mike Kaler will be hosting a presentation on mission-critical cooling systems on Wednesday, April 30, at 10:30 a.m. at AFCOM Data Center World at the Mirage Casino-Hotel, Las Vegas, Nevada. The company is exhibiting (booth #1227) at the conference April 28 – May 2.

Links:


Mestex Open Access Project Live View  (To access, Internet Explorer 10 or above is required. Enter “guest” as user name and password.)

 

# # #

 

Mestex (www.Mestex.com), a division of Mestek, Inc., is a group of HVAC manufacturers with a focus on air handling and a passion for innovation. Mestex is the only HVAC manufacturer offering industry-standard direct digital controls on virtually all of its products, which include Applied Air, Alton, Aztec, Koldwave, Temprite and LJ Wing HVAC systems.

 

Media Contact:

Christina Divigard

Divigard & Associates

413 341 6780 or Christina@Divigard.com

 

 

NSF/Mestex Research Project Update at SEMI-THERM Conference in San Jose, California

Students from the University of Texas at Arlington will be presenting an update on the progress of a research consortium, partially funded by the National Science Foundation, that is focused on improving the efficiency of data center cooling. This presentation will be made during the SEMI-THERM conference in San Jose, California from March 9-13.

The work presented in this exhibit covers updates on the project since the last industrial advisory board (IAB) meeting at Villanova University in September 2013. The updates include completion of construction of an Aztec ASC-15 cooling unit, attachment of the cooling unit to an IT pod, construction of internal details of the IT pod, construction of a duct for testing various cooling pads, and creation of a computational fluid dynamics (CFD) model for the IT pod and the ASC-15 unit.

The cooling unit, the ASC-15, is capable of operating in pure air-side economization, direct evaporative cooling, indirect evaporative cooling, and/or hybrid modes, and contains two blowers which can deliver up to 7,000 CFM to the IT pod. Various parameters of the cooling unit, such as blower rotational speed, inlet air temperature, supply air temperature, outside air humidity, etc., are available through an online portal. The ASC-15 is connected to the IT pod at the Mestex facility, which provides power and water to the modular research data center. Inside the IT pod, four cabinets, each containing thirty HP SE1102 servers, are placed in a hot/cold aisle configuration.

One of the HP SE1102 servers was tested in the UT-Arlington lab to determine its maximum power consumption. The maximum measured power consumption is used to calculate total dissipated heat per rack in the CFD model of the modular research data center. This CFD model will continue to be updated depending on changes to the IT pod or the cooling unit. For example, updates to the cooling pad model will be applied based on results from the various wet cooling pad tests that will be performed at UT-Arlington.

"Open Access Project" Update

Mestex is continuing to refine the systems and information stream as part of our "Open Access Project".  As a reminder, Mestex has established a research data pod as a member of an NSF project to improve the efficiency of data centers.  This pod is being cooled with an Aztec IDEC system managed by the Mestex DDC control system. 

The goals of the research project require frequent experiments.  Currently, Mestex is continuing to complete the basic infrastructure of the pod itself.  The video in the link below shows a quick walk-through update on the installation of a containment system.  As you can see, the installation is nearing completion.  In the meantime we are running the 51 servers, a 5 kw resistive load, and some ancillary equipment in the 4 cabinets that are in the pod.  https://onedrive.live.com/redir?resid=B173953F824409BC!7016&authkey=!AKcxEKWl7409R-8&ithint=video%2c.MP4

Since the IT equipment is operating while we finish the containment installation we are able to watch the performance of the IDEC cooling system in real time via  http://webctrl.aztec-server-cooling.com/.  This website is accessible to anyone and viewers can log on using the username "guest" and the password "guest".

One of the areas that concerns data center designers and operators is the performance of evaporative cooling systems "on the shoulder" when ambient dry bulb temperatures are relatively low and ambient RH is relatively high.  Last week we had an opportunity to observe just that situation.

The time was 7:55 in the morning.  The ambient conditions were what most people would consider to be the worst scenario for evaporative cooling…relatively low DB and relatively high RH…in this case, 59.6 DB and 89% RH.  Under those conditions most people would expect the cold aisle conditions to be either too warm or too humid.

However the cold aisle was operating at 78.9 DB and 56.7% RH.  The cold aisle setpoint is 80 DB.  The cooling tower integrated into the Aztec IDEC unit was on, the airflow dampers were positioned for 100% return from the hot aisle, and the system was providing enough cooling (even with this high RH) to maintain the cold aisle temp and an 11 to 12 degree rise across the servers.  Note also that these conditions are inside ASHRAE TC 9.9 A1 Allowable limits.
 
The "Open Access Project" will be continuing throughout 2014 and, most likely, into 2015.  This will provide ample opportunity to observe a wide range of operating environments for the IDEC system.  During that time there will also be research on filtration, fresh air cooling, and further refinement of control algorithms for fully integrated IDEC systems such as the Aztec product.
 

Indirect Evaporative Cooling Research Project Launched

Aztec ASC 3-D Model for CFD Research

ASME Paper Documents CFD Modeling of Aztec IDEC System

The 2013 ASME “International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems”, aka ”InterPACK 2013” is just concluding in San Francisco.  As the conference title implies there are papers and presentations from all over the planet that are focused on research into improving electronics, computers, and data centers.

One of those papers presents results from an on-going research project that Mestex has started with the College of Engineering at the University of Texas at Arlington.  This research project will likely go on for a couple of years and this paper presents some of the first findings that are being used to establish a “baseline” for the rest of the research.

The paper is listed in the proceedings as “InterPACK2013-73302”.  The human-readable title is “CFD MODELING OF INDIRECT/DIRECT EVAPORATIVE COOLING UNIT FOR MODULAR DATA CENTER APPLICATIONS” and the paper covers exactly what the title suggests.  The IDEC product that the paper covers is the Aztec ASC-20 and the goal is to establish that the factory data that we present in our literature can be validated against a detailed CFD model of the product.

By modeling the Aztec ASC-20 components and creating the 3-D CFD model using factory dimensional drawings the researcher was able to confirm that the published factory data is accurate and the ASC-20 will perform as predicted based upon the operating parameters.  This important result can be used to further our research into optimizing the performance of evaporative cooling and fresh air cooling solutions for mission critical/data center applications.  A full scale modular data center mock up is being installed at the Mestex facility and additional documentation and validation of the performance will be conducted over the next several months.  The CFD baseline model will be used to simulate filter performance and airflow changes prior to making the physical changes to the research module.

The Aztec and Alton DEC evaporative cooling products have been used in the industrial and large commercial markets since 1946.  Over those 66-plus years the products have been refined and optimized.  This research project will take the product lines to another level of thermal performance, water use optimization, and control software optimization, with a specific emphasis on the needs of the mission critical market.

Dusting Off Your Data

CONTAMINANTS IN THE DATA CENTER

Time to get back on my soapbox again…this time it is about “contaminants” in data centers as an excuse to avoid using fresh air cooling or having outside air enter the white space.  The bottom line is that unless your data center is located in “an emerging country” then the odds of a contaminant-created hardware failure in anything like a short time frame are about the same as winning the lottery…assuming you take some pretty basic steps in the design.

Contaminant control, or more correctly, concern over contaminant control has been around for decades.  I remember doing some research over 25 years ago on the impact of ozone on telecommunications equipment.  Bell Labs, as it was known long ago, had performed some pretty interesting tests to document what could be a very real problem under the right circumstances.  The results of those tests indicated that, with the exception of certain locations, the air in the equipment room was worse than the air outside so it made more sense to flush the room with outside air than to avoid bringing outside air into the space.

Particle and gaseous contaminants CAN be a problem if ignored.  However, the extent of the problem and how quickly it manifests itself needs to be considered. 

Phenomena like copper creep and circuit bridging do occur…but only when the conditions at the server are right to support those failure modes.  Two things generally need to be in place for the failure mode to even begin.  First, there needs to be a fine coating of dust particles on the circuit boards.  Second, the relative humidity at the board needs to be at the deliquescent RH…or the point where the dust starts to absorb moisture and become “wet”.  If the RH is too low then dust might affect localized temperatures on the board but the mechanism to cause bridging simply does not exist.  The converse is also true…no dust…then no mechanism even with a relatively high humidity level.

Dust can come from anywhere.  Every time someone enters the data center they bring in some amount of dust particles.  Every time a box is opened in the data center particles are created.  And, yes, every time outside air is brought into the data center it is possible that dust can enter.  In fact, a data center with no outside air is actually vulnerable to the worst kind of dust intrusion…uncontrolled infiltration through doors, cracks, pipe openings, or wind pressure.  Maintaining a positive pressure in the white space helps to prevent infiltration and keeps the worst dust (and gases) out of the data center. 

ASHRAE, through the TC 9.9 committee, has set a target for data center “cleanliness”.  It is ISO Class 8.  ASHRAE has also noted that ISO Class 8 conditions can be met with a MERV 8 filter…a common and inexpensive filter available at virtually any HVAC parts house.  If the air being filtered is coming from the outside then ASHRAE recommends a MERV 11 or MERV 13 filter.  These might not be quite as common as the MERV 8 but they are also readily available and can fit in a standard 2” filter rack.

The interesting side note about the ASHRAE recommendations is just how extremely conservative they are.  ASHRAE recommends no more than 15 µg/m3 of “fine” particles…defined as particles less than 2.5 µm in size.  However, IBM (who should know something about computers) has a limit of 150 µg/m3 and a “fine” particle definition of particles less than 5 µm in size.

Once again, owners are being led down a path to purchase cooling systems and equipment that fail to optimize their energy savings through an inflated fear of something that happens very rarely in the developed world and is easily controlled with proper filtration.  Products such as our Aztec ASC indirect evaporative cooling systems are designed with MERV 14 filters in mind and can actually accept MERV 16 filters…the highest MERV rating…which is suitable for operating rooms and can remove all bacteria and most tobacco smoke.  This allows the Aztec system to optimize the use of fresh air cooling and use a more efficient heat transfer system than air-to-air heat exchanger systems…and still exceed the extremely conservative ASHRAE recommendations for particulate control.

How to Save Almost $100,000 Per Year In Your 1 Megawatt Data Center


Over the last few weeks while I have been traveling there have been some interesting bits of information released in the mission critical world.
For example, Dell introduced their 12th generation PowerEdge servers.  This generation of servers is warranted to handle temperature excursions up to 45 degrees C, or 113 degrees F, for up to 90 hours per year.  One of Dell’s rationales behind marketing the server at those conditions was to allow fresh air cooling in virtually the entire continental US.  Other research by Dell has indicated that their servers can operate 87% of the year in Washington, DC using fresh air cooling alone.

The energy saving potential of raising the inlet temperatures that high can be enormous.  Instead of running chillers or compressors 8,760 hours a year they are only operating 1,138 hours per year. 
To put that into numbers is difficult but let’s try a little example.

If the PowerEdge server power consumption is 300 watts then the cooling system must remove 300 watts times 8,760 hours per year or 2,628 kwh of heat (8,961,480 btu).  That can either be accomplished using mechanical cooling or fresh air cooling or a combination of the two.
A pretty efficient HVAC system will remove about 4.5 watts of heat per watt of electrical power used.  So to cool that PowerEdge server using mechanical cooling will require 2,628,000 watt-hours of heat divided by 4.5, or 584 kwh of compressor power.

To cool that same server using fresh air for 87% of the year will only require 75.8 kwh of compressor power.  Of course, the fan energy stays the same in both cases but the compressor savings of 508.2 kwh PER SERVER can really start to add up.  At an aggressive electric rate of 4.5 cents/kwh that amounts to $22.87 PER SERVER PER YEAR.
At modest densities of, say, 40 servers per rack the savings amounts to $915 PER RACK PER YEAR.  Now consider how many racks are in the typical server room or data center.  If the data center has a server load of 1 megawatt then a density of forty, 300 watt, servers per rack will translate into 83 racks.  So the annual savings would be almost $76,000 in this example.

To make the savings even greater the same HVAC unit that provides the fresh air could also provide indirect evaporative cooling and completely eliminate the compressor-based cooling…adding another $3.50 PER SERVER PER YEAR of savings.  That would add another $11,620 PER YEAR in savings.
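
Here is the same arithmetic laid out as a short script so the assumptions are easy to adjust; the constants are the ones used in the example above, and the names are simply illustrative.

```python
# Per-server and per-facility savings from fresh air cooling, per the example above.
SERVER_WATTS = 300            # per-server load
HOURS_PER_YEAR = 8760
FRESH_AIR_FRACTION = 0.87     # fraction of the year fresh air alone can cool (Washington, DC example)
COOLING_COP = 4.5             # watts of heat removed per watt of compressor power
RATE_PER_KWH = 0.045          # aggressive electric rate, $/kWh
SERVERS_PER_RACK = 40
IT_LOAD_WATTS = 1_000_000     # 1 megawatt data center

heat_kwh = SERVER_WATTS * HOURS_PER_YEAR / 1000             # 2,628 kWh of heat per server
compressor_kwh = heat_kwh / COOLING_COP                     # 584 kWh if all mechanical cooling
fresh_air_kwh = compressor_kwh * (1 - FRESH_AIR_FRACTION)   # ~76 kWh with fresh air 87% of the year
savings_per_server = (compressor_kwh - fresh_air_kwh) * RATE_PER_KWH   # ~$22.87 per server

racks = IT_LOAD_WATTS / (SERVER_WATTS * SERVERS_PER_RACK)   # ~83 racks
annual_savings = savings_per_server * SERVERS_PER_RACK * racks
evap_bonus = fresh_air_kwh * RATE_PER_KWH * SERVERS_PER_RACK * racks

print(f"~${annual_savings:,.0f} per year from fresh air alone")            # roughly $76,000
print(f"~${evap_bonus:,.0f} more if evaporative cooling handles the rest") # roughly $11,000+
```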

Availability

In the data center world there are several metrics related to “up time”.  You hear terms like SLA (“Service Level Agreement”) that define how many hours out of 100,000 that the servers are guaranteed to be up and running.  Data center people like to talk about SLAs that are “4-9s” or “5-9s”.  “4-9s” would be 0.9999…or the servers will be up 99.99% of the time.

There are a couple of other metrics that are directly related to the cooling equipment.  One is MTBF or “Mean Time Between Failures”.  Another is MTTR or “Mean Time to Repair”.  A third metric is the most meaningful and it is “Availability”.  “Availability” is a measure of how many hours out of 100,000 that the system will be available when you consider MTBF, MTTR, and routine service.
The formula for Availability is:    MTBF/(MTBF+MTTR)

When evaluating the Aztec indirect evaporative cooling unit and its components for a recent data center project; using MTBF and MTTR values from the Aztec Engineering Department, Technical Service Department, and Production Departments; the following “Availability” numbers can be derived:
  • For routine maintenance of evaporative media the “Availability” is 0.999871
  • For the MTBF for the pumps and motors the “Availability” is 0.9999333 to 0.9999555
Since the routine maintenance “Availability” is one that can be planned in a way that will not disrupt the overall “Availability” of an N+1, or better, facility it really doesn’t matter that it is only “3-9s”.  In the cases where a failure might occur (MTBF cases) the typical Aztec product is “4-9s” across the board.
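
Here is a minimal sketch of that availability formula in code. The MTBF and MTTR hours shown are illustrative placeholders, not the actual figures from the Aztec Engineering, Technical Service, or Production departments.

```python
# Availability = MTBF / (MTBF + MTTR)
def availability(mtbf_hours, mttr_hours):
    """Fraction of time the equipment is available, given mean time between
    failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical example: a pump with a 60,000-hour MTBF and a 4-hour repair window
print(f"{availability(60_000, 4):.6f}")   # 0.999933 -> "4-9s"
```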

The Aztec indirect evaporative cooling unit can achieve these high levels of "availability" due to the inherent simplicity of a typical evaporative cooling system.  Fans and pumps are considered to be the only significant components in an evaporative cooling system that can fail.  In the case of the Aztec product these components are selected for an expected life of 200,000 hours...probably far longer than the building itself will be used for its original purpose.

A final consideration that was reviewed during this analysis was the skill level required for each repair or maintenance task.  Although this factor cannot be included in a typical metric such as MTTR it is an important factor for the building owner to consider.  Since an evaporative cooling unit such as the Aztec unit contains no refrigerants the vast majority of tasks can be accomplished by what would traditionally be called facilities maintenance personnel.  No special licensing would be required.  It actually turns out that some of the smallest elements of the system are the only ones that might require a licensed service technician.  Replacing contactors and relays in the unit control and power circuits would most likely require a licensed electrical service technician.

In general the "availability" of an evaporative cooling system, such as the Aztec system, will be at least as high as any competing technology and, likely, higher.


Preaching to the Choir

Electrical Power Meters Keep Spinning
I have had a busy few weeks traveling to meetings and visiting with owners, operators, engineers, and researchers.  This has given me an interesting perspective and awareness of an issue that our industry needs to address.  My awareness of this issue was increased by an editorial in Mission Critical Magazine that bemoaned the lack of progress in data center design due to secrecy regarding "best practices".

I came away from all of those meetings with the sense that there are many very smart people who know how to design more efficient solutions to energy use in mission critical applications.  "Best practices" can be described by experts from the largest server manufacturers, global data center developers/operators, and from academia.  The issue is that we are all sitting around a large table in a closed meeting room and sharing that knowledge with others who already have a pretty good idea what to do.  We are "preaching to the choir".

The result is that the vast majority of data centers, server rooms, and telecom facilities are operating in very inefficient ways.  While a Microsoft might be able to design a data center with a 1.2 PUE the rest of the world is struggling to reach a 2.0. 

This came out in a technical committee meeting at ASHRAE's mid-year meeting a few days ago.  A comment was made by a server cooling system manufacturer that he finds it very difficult to convince smaller users to adopt the latest operating standards that could save the user tens of thousands of dollars a year in energy costs.  This sentiment was echoed by several around the room and pointed to how difficult it has been to educate the broader public on the reliability of modern equipment in warmer rooms.

And when I say "broader public" I mean just that.  The mechanical design director for a global retail data center operator told me that he knows his equipment will run just fine at 78 or 80 degree F inlet temperatures but his customers have not gotten the message and demand a "cold" room.  It seems that until corporate IT managers and executives understand all of this we will continue to see skyrocketing energy use by data centers.  Even small server rooms could benefit from elevated temperatures if key elements of "best practices" were implemented.  So called "legacy" data centers might be difficult to retrofit but they can certainly be upgraded with the basic elements of "best practices"...if only the occupants understood what is possible.

The industry has a massive educational challenge if it is to stem the rising cost and consumption of energy.  And the education cannot come soon enough because the projections are that server power densities will continue to climb and data storage power densities will climb even faster.  Today we talk about 300 watt per square foot densities but systems are being designed already that push almost 10 times that density.  It may seem that we have an endless supply of power from the grid but there are only so many power plants around the world and building a new one takes a decade or longer...data power consumption grows at a much faster rate and will stress grids around the world eventually if we cannot educate the "broader public" more effectively.

"Make everything as simple as possible, but not simpler."

I have addressed this topic before but it bears discussing again.  I was reading an article in a high tech blog the other day and they repeated the oft quoted "rule" of good design from Albert Einstein..."make everything as simple as possible, but not simpler."...  A few months ago I also quoted an engineer who reminded me that a system is not "sustainable" if it is not "maintainable".

It seems that in spite of these two pieces of advice, and numerous studies that highlight efficiency degradation when equipment is not properly maintained, we continue to see elaborate custom cooling solutions when a simple "off the shelf" product will accomplish the same thing...and has a better chance of staying that way.

As an industry we bemoan the lack of qualified service technicians and then we turn around and send them to jobsites populated with unique, one of a kind, complicated HVAC solutions.  What are we thinking?

I will admit that there are some cases that are so difficult to solve that something special is truly needed.  Critical human medical care might apply.  Some very high tech product production might apply.  Production of pharmaceuticals might apply.  But most server rooms and data centers no longer seem to apply.  ASHRAE and the server manufacturers themselves have said that the old ways no longer apply.  IT equipment can stand much higher temperatures and humidities than previously thought and much broader swings of those measures than ever before.  So why design around complex custom equipment?

As a manufacturer we know, and can pretty accurately predict, how a standard piece of equipment will perform in any given situation.  As soon as we are asked to "change it just a little"...which normally actually means throwing out the original design and starting over...then all bets are off.  We can use the same standard of components that we would normally use with an expectation of similar performance but, in reality, we no longer know exactly what to expect.

And then there is the issue of compliance with the myriad of agency and code safety tests that all manufacturers must apply to their equipment.  Standard equipment is designed, tested, and certified to meet those standards...custom equipment is designed to the standards but is probably not tested and certified to the standards.

And finally we have the issue of maintainability.  Service technicians are trained to work on specific types of equipment.  Many types of standard equipment require licensed technicians for service.  Given the broad range of equipment types in the market today it would be extremely rare to find a service technician who could be proficient on all standard equipment....much less something he or she has never seen before.

The topic of "total cost of ownership" is starting to pop up again in some publications.  It is reassuring to see that some people are starting to go back to considering something beyond the initial capital expense...but operating expenses consist of more than just energy costs...remember the cost of maintaining the mechanical system in the long run so that the money spent up front for an efficient solution does not go out the window a couple of years down the road.

Aztec Evaporative Cooling Solutions and the University of Texas at Arlington

On April 12, 2012 management and engineering representatives of Aztec, part of the Mestex division of the Mestek family of products, met with engineering representatives from the University of Texas at Arlington to define how resources will be jointly applied to researching and developing an advanced technology indirect evaporative cooling solution for data centers. Research already started at both organizations will be shared in order to more quickly advance the development of a viable solution to the high rate of energy and water consumption by data centers.

The joint project will be part of the NSF-I/UCRC program. This program, initiated by the National Science Foundation, is a collaboration between 5 major universities and a select group of industry contributors. The stated purpose of the program is to develop commercially viable solutions that will improve energy efficiency in data centers. Research projects range from chip level solutions all the way to complete, large scale, data center solutions. Aztec will be contributing special knowledge of evaporative cooling and outside air cooling solutions that has been developed over 40 years of product development and marketing.

Aztec Evaporative Cooling Solutions attends Data Center World Expo

Aztec Evaporative Cooling Solutions, a Mestek brand produced in the Mestex (Dallas) facility, was present at the recent Data Center World Expo in Las Vegas.  Aztec was showing an example of the ASC indirect evaporative cooling solution for data centers, server rooms, telecom facilities, or IT product research labs.  The unit on display highlighted the system's integrated cooling tower technology, integrated DDC control system with multiple sensor options and BACnet or IP access, isolated direct drive plenum fan assembly designed for up to 200,000 hours of operation, and variable frequency drives for both the cooling tower and supply fans. 
 
The show was attended by IT facility professionals from all over North America, including Mexico.  Roughly 80 vendors were displaying an array of infrastructure products for data centers, ranging from power distribution systems to cooling equipment.  Also visible in many booths were thermal and power simulation tools and DCIM software. 

The Aztec ASC system was the only factory assembled and tested evaporative cooling option for data centers that was on display.  Designed for long life, and with a history of extremely low failure rates, the Aztec ASC was considered to be a viable solution for many of the attendees.

Green Grid Updates Free Cooling Maps for Data Centers

The Green Grid has released White Paper #46 as an update to their "free cooling" maps for data center design and operation.  The research was edited by Emerson Network Power, Intel, and Schneider Electric. 

The reason for this update to the "free cooling" maps was the latest changes to the ASHRAE TC 9.9 operating/design guidelines for data centers.  For those who have not yet seen those new guidelines they allow a much larger operating range for data centers and server rooms that use some of the latest equipment from companies like Dell and HP.

For those of us who are "metric challenged" 40 degrees C = 104 degrees F and 35 degrees C = 95 degrees F.

When you consider that many data center operators still seem to want their rooms at 70 degrees or lower it is clear that these new criteria are a massive change in operation and design concepts.  It is also clear that adopting the newest guidelines can result in enormous energy savings.

The Green Grid paper includes a couple of maps to quickly illustrate how extensive the potential for "free cooling" has become under the latest operating/design guidelines. In these maps the darker the blue color the more hours that "free cooling" could be employed.  The darkest color blue indicates that all 8760 hours are suitable for "free cooling".  The maps also consider the coincident dewpoint temperatures as that metric is important also.

This first map is for ASHRAE Class A3 environments and shows that virtually all of North America could have their data centers cooled without using chillers or compressors.  The second map is for ASHRAE Class A2 environments and shows that roughly 80% of North America could still be cooled most of the year with no chillers or compressors.

The question for data center operators and designers who want to implement these new temperatures is what to do about those 500 or 1,000 hours when the outside air conditions are not quite right.

It is still quite possible to operate the center with no compressors or chillers if the designer will incorporate an evaporative cooling system such as the Aztec indirect evaporative cooling system or even the Alton direct evaporative cooling system.

Since evaporative cooling systems operate using 100% outside air all the time they make an excellent "hybrid" approach.  During the many hours of the year when "free cooling" will satisfy the conditions either type of evaporative cooling system will provide cool, filtered, outside air.  The Aztec indirect evaporative cooling system has the added advantage of allowing recirculation of hot aisle air during the very coldest months when "free cooling" could actually over-cool the data center.

During those few hours of the year, however, when it is simply too warm for "free cooling" to work, the Aztec or Alton systems can automatically initiate their evaporative cooling cycles and trim the outside air temperatures down to levels that fall well within the new ASHRAE guidelines...again, with no compressor or chiller energy required.  The air leaving the evaporative cooling system will usually be about 3 degrees F higher than the wet bulb temperature.  The sketch below gives an idea of the potential supply air temperature that an evaporative cooling system can provide.
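
This is a minimal sketch of that "wet bulb plus approach" rule of thumb. The 3 degree F approach comes from the paragraph above; the sample design-day conditions are hypothetical.

```python
# Approximate supply air temperature from a direct evaporative stage,
# using the rule of thumb that leaving air runs ~3 F above the wet bulb.
def evap_supply_temp_f(outdoor_wet_bulb_f, approach_f=3.0):
    """Estimated supply air temperature (F) for a given outdoor wet bulb."""
    return outdoor_wet_bulb_f + approach_f

# Hypothetical design point: a 95 F dry-bulb day with a 68 F wet bulb
print(evap_supply_temp_f(68.0))  # ~71 F supply air, comfortably inside the expanded ASHRAE ranges
```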

The Green Grid whitepaper is just the latest in a growing number of research papers and documents that point operators and designers in a direction that can save tens of thousands of dollars and kwh if they are willing to make the investment in the latest technologies from both the IT equipment manufacturers and the HVAC equipment manufacturers.