
Mestex Data Center Research Published

One of the initiatives within Mestex is our collaboration with the University of Texas at Arlington ("UTA") and the National Science Foundation ("NSF") Industry University Cooperative Research Centers ("I/UCRC") program.  Our particular area of interest and research support is reducing the energy consumption of data centers via the use of outside air and evaporative (adiabatic) cooling solutions.  Most of this research stays behind closed doors, but when the research results in findings that can significantly aid the mission critical industry, those results are published.

Over the last several years, Mestex has hosted a small data pod at our facility in Dallas, Texas.  That data pod has been cooled by an "off the shelf" Mestex product with the only variations from the standard catalog item being the ability to add special filters (one of the research areas regarding particulates) and the control software designed by Mestex specifically for data centers.  We recently received the following email from Dr. Dereje Agonafer, Presidential Distinguished Professor at the University of Texas at Arlington.


Congratulations Dr. Shah and Dr. Awe and the Mestex team on getting this paper published in a journal.



We continue to be proud of our working relationship with Mestex, which has resulted in significant research implemented in product applications as well as archival publications. In 2016, our joint work was featured in the 6th edition of the Compendium of breakthroughs from NSF I/UCRCs, published as a printed book and online. The Compendium is intended for Congressional and White House staffers, visitors to the NSF, and members of the general public, to help them realize the impacts of research taking place within I/UCRCs. Our joint work with Mestex was featured in the book; reference below:



2016 Compendium, Successful Industry-Nominated Technological Breakthroughs for NSF I/UCRCs in “More Efficient Data Centers: Maximizing Airside Cooling,” p. 111-112, http://faculty.washington.edu/scottcs/NSF/2016/NSF-book-2016-Final.pdf

The research documented in this paper helps to show data center designers and operators the benefits of airside cooling for their facilities.  Further research findings were just published in the ASME Journal of Electronic Packaging.  That research, a collaboration between Mestex, UTA, and IBM, focused on the reliability of cooling data center electronic equipment with outside air in a somewhat dirty environment.  Ongoing research between Mestex and UTA is now focusing on filter performance.

ASME Paper Documents Reliable Data Center Operation With Outside Air and Evaporative Cooling

One of the longest running and most debated topics regarding data center operation is whether you can reliably cool a modern data center using outside air alone or with supplemental evaporative cooling.  If you can successfully operate a data center without using any compressorized equipment, there are obviously huge energy and money savings.

The two key factors that hold operators back from implementing this obvious savings strategy are fear of equipment failures due to temperature/humidity excursions and fear of failures due to airborne contaminants.  While server manufacturers publish data in their specification sheets clearly indicating that their equipment can tolerate a wide range of temperature and humidity, there is not much information regarding the impact of particulates and other contaminants.  ASHRAE has recognized the robustness of modern IT equipment by expanding the recommended and allowable temperature and humidity ranges in its widely followed data center design guidelines.  Very little is said regarding air quality other than a recommendation to use at least a MERV 8 filter system.

Over the last 5 years, the Mestex division of Mestek has hosted a National Science Foundation research site at their manufacturing facility in Dallas.  This site is part of an Industry/University Cooperative Research Center, with principal research from the Mechanical and Aerospace Engineering Department at the University of Texas at Arlington.  A fully instrumented "data pod" has been operating using a commercially available indirect/direct evaporative cooling system from Mestex that can also operate in 100% fresh air mode.  In addition to the dozen sensors normally included with the Aztec brand IDEC system from Mestex, the "pod" includes an array of 64 sensors located on the front and rear of the four server racks.  Data has been streamed from all sensors every 15 seconds for the last 4 years.  In addition to this detailed tracking of temperature and humidity conditions, there have been a number of studies conducted using copper and silver coupons to evaluate the corrosion potential of operating with outside air and evaporative cooling.  Keep in mind that this application is in Dallas, Texas...a relatively hot/humid climate.  In addition, because the "pod" is installed between two manufacturing buildings in an industrial zone near downtown Dallas, the measured air quality around the "pod" is classified as G2, or moderately harmful to PCBs.

The June 2017 edition (Volume 139) of the ASME Journal of Electronic Packaging includes a paper presenting the results of the last 4 years of research at this site.  The paper, entitled "Qualitative Study of Cumulative Corrosion Damage of Information Technology Equipment in a Data Center Utilizing Air-Side Economizer Operating in Recommended and Expanded ASHRAE Envelope," provides a comprehensive look at the impact of operating a data center in a "real world" application.

The most interesting point presented in the summary section of the paper is that, in spite of the servers installed in this test site already being several years old, there has not been a single server failure in the entire four years of operation.  The ability to dramatically reduce the cost of operating a data center...without unfounded concerns about reliability...is finally being proven true.

Cool New Technology for Mestex from AHR 2017

New Graphics Tool Allows Users to "See Inside" Mestex Products.


As I mentioned in my last blog entry, Mestex participated in the 2017 AHR Expo last week in Las Vegas.  As usual, we were part of the much larger Mestek corporate display, showing the industry just how broad our company's product offering can be.

One challenge with such a broad offering is that it can be difficult to explain to potential customers, and even our own reps and employees, how much of the equipment actually works.  This is true even within the Mestex division, as our products cover everything from air handlers to fully packaged DOAS units, and from advanced evaporative cooling systems to steam integral face and bypass coils.

In the photo to the right you can see a demonstration of the latest corporate sales tool that helps cut through some of the mystery.

The Mestek Technologies group has developed an interactive, graphical software app that reps can use to help illustrate some of the more complicated products from the Mestex division.  This app allows the user to select from the Aztec Indirect/Direct evap system, the IFL air handling system, or the FAP packaged DOAS-capable rooftop unit.  Once a product is selected, a screen opens showing the elements of the unit that can be configured to meet the user's needs.  Touching any of the components, or the complete unit illustration, opens that component, and by using the "pinch-zoom" function of touchscreen devices the user can "open up" the product and drill down to detailed images of the unit.  Buttons to the right allow the user to "turn on" heating, cooling, dampers, etc., and watch how the airflow in the unit changes.

Using the app at the show demonstrated how much clearer our configurable product concept became to viewers.  It was immediately obvious to them that the products shown had great flexibility and adaptability to suit their applications.

The app is now available on the Google Play store for download to compatible devices by searching for "Mestek" and then looking for the "Mestex" icon in that storefront.  A version for iOS devices will be coming from the Apple store in the near future.

While this app proved to be an exciting tool to the reps who saw it, there is much more to expect from the app going forward.  New products will be added.  Links to technical and sales literature will be added.  Embedded videos will be added.  Integrated CFD models will be added.

Basically, this will become the most powerful "catalog" available for Mestex products.

On The Road Again

To quote one of my favorite musicians, Willie Nelson, we are going ..."On the road again, just can't wait to get on the road again..." this time to Las Vegas for the 2017 AHR Expo


The Mestex division of Mestek will be sharing booth space with our sister companies in booth C1525 in the Las Vegas Convention Center.  Some people are predicting a record turnout of attendees and we expect a busy few days.

This year, Mestex will be using some new (for us) graphical display technology from the Mestek Technology group to help explain some of our newer product offerings.  Our division companies (Applied Air, Aztec, Alton, LJ Wing, Temprite, and King) provide solutions to temperature, pressure, airflow, and filtration problems that can be hard to explain using a static piece of equipment.  This graphical display technology will allow us to "walk you in" to three of our products and highlight how certain elements of the products can be used to address your building or process issues.

In addition to these graphics, the Mestex people in the booth can explain how our in-house CFD analysis services can help optimize a solution.  Projects ranging from large e-commerce warehouses and distribution centers to data centers to "indoor agriculture" grow rooms can be very difficult to design due to high internal thermal loads, humidity levels, stratification, or pressure gradients, and CFD allows Mestex to thoroughly analyze and sort out possible solutions.

So come on by the booth and, at least, say "hi".  We would love to discuss how we might help solve your application problems.

"We Have A Failure To Communicate"

For the last 15 years, the Mestex division of Mestek has been building direct digital controls ("DDC") into our equipment.  We started with some pretty simple control programs on some of our more basic units.  Even these simple programs allowed the equipment to operate more effectively...controlling temperature more closely, managing energy consumption better, and giving users more options for scheduling their system operation.

The core functions of our control systems have not changed much over the years but the features that have been added, and are continuing to be added, to improve the information available from our equipment are almost mind-boggling. 

Take the relatively simple technology of evaporative cooling.  The Mestex Aztec indirect-direct evaporative cooling unit comes standard with a DDC control package that constantly monitors outside air conditions, unit supply air conditions, unit water quality, and cooled space conditions in order to control temperature, pressure, and humidity in the space.  But that is only part of the story.

While collecting all of the data we just described and deciding how to control the unit functions, the DDC processor is also collecting, and making available, a wealth of other information.  The unit can provide real time electrical power consumption and demand, real time water consumption, and constantly updated information about the operating mode of the equipment (operating hours in full economizer mode, in full recirculation mode, and in mixed mode).  The unit also accumulates and can display daily, monthly, and annual power and water use data.
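As a rough sketch of how a controller might roll those interval readings up into daily, monthly, and annual totals (the structure and names here are illustrative assumptions, not the actual Aztec DDC implementation):

```python
from collections import defaultdict
from datetime import datetime

class UsageAccumulator:
    """Rolls interval meter readings (kWh, gallons) into period totals."""

    def __init__(self):
        # For each period length, map a key like "2017-06" to running
        # [energy_kwh, water_gallons] totals.
        self.totals = {
            "day": defaultdict(lambda: [0.0, 0.0]),
            "month": defaultdict(lambda: [0.0, 0.0]),
            "year": defaultdict(lambda: [0.0, 0.0]),
        }

    def record(self, ts, kwh, gallons):
        keys = {
            "day": ts.strftime("%Y-%m-%d"),
            "month": ts.strftime("%Y-%m"),
            "year": ts.strftime("%Y"),
        }
        for period, key in keys.items():
            entry = self.totals[period][key]
            entry[0] += kwh
            entry[1] += gallons

acc = UsageAccumulator()
acc.record(datetime(2017, 6, 1, 9, 0), 2.5, 12.0)   # one interval reading
acc.record(datetime(2017, 6, 1, 10, 0), 2.4, 11.5)  # the next interval
print(acc.totals["day"]["2017-06-01"])  # running daily [kWh, gallons] totals
```

The same pattern extends naturally to operating-hours-by-mode counters.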

This is obviously some pretty sophisticated information from a relatively simple machine.  The same algorithms used in this unit can be applied to most of the other Mestex products and provide a wealth of management information to end users.  The larger the end user organization and the more units deployed, the more valuable this information becomes. 

But...the information is only valuable if management can actually see it.  As more and more of our equipment installations are tied to building automation or building information systems, we are encountering more and more interface issues.  The issues are not matters of machine communication protocols, since our DDC packages are designed to speak virtually every communication language, but matters of human communication.

What we have found in many cases is that the information, and the interface to the equipment, is turned over to an IT person who is unfamiliar with HVAC equipment.  That person is probably also very concerned about network security and has probably created firewalls that make implementation challenging.  The HVAC equipment might be working perfectly, but the person on the other end, looking at data that he does not understand, will frequently conclude that the equipment is not performing as required.

So, as is often the case in life, training and communication become essential to success.  As an HVAC company we usually expect the IT person to "simply understand".  I think, however, that we should work hard to learn about networking and IT issues so that we can at least speak the same language as the person on the other end of our equipment.

Relative Humidity – It’s all relative


A Guest Article by Jim Jagers

The other day I was conducting a training class, and we were discussing evaporative cooling. Someone said they didn't think evaporative cooling would work very well in their area because the summer temperatures were 90°F-plus with 90% RH. If you were to look at many psychrometric charts, you'd see this point is, dare I say it, "off the chart". To get a feel for this, consider that a steam room generally runs at about 104°F and 100% RH. At 90°F with 90% RH, the heat index is 122°F. It's doubtful the temperature and humidity are ever as bad at the same time as he imagined.

People generally associate high temperatures with high humidity percentages. It's more likely that high temperatures will be associated with lower humidity percentages. At 80°F and 41% RH, the heat index is 80°F. 80 degrees feels like 80 degrees. At this point there is approximately 0.009 pounds of moisture per pound of dry air in the atmosphere. If the moisture content remained constant and the air warmed to, say, 90°F, the relative humidity would actually drop to about 30%. Conversely, if the moisture content remained constant and the temperature dropped to 70°F, the relative humidity would increase to about 57%. This is because cooler air can hold less moisture than warmer air, and relative humidity is the ratio of the moisture in the air to the amount of moisture the air (at a specific temperature) can hold, expressed as a percentage.
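These numbers are easy to check with a short psychrometric sketch. The Magnus saturation-pressure approximation used below is one common correlation (any standard psychrometric formula gives essentially the same results at these conditions):

```python
import math

def saturation_pressure_pa(temp_f):
    """Saturation vapor pressure (Pa) via the Magnus approximation."""
    t_c = (temp_f - 32.0) * 5.0 / 9.0
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def relative_humidity(temp_f, w):
    """RH (%) for a fixed humidity ratio w (lb moisture per lb dry air)
    at standard sea-level atmospheric pressure."""
    p_atm = 101325.0  # Pa
    p_vapor = w * p_atm / (0.622 + w)  # partial pressure of water vapor
    return 100.0 * p_vapor / saturation_pressure_pa(temp_f)

w = 0.009  # lb moisture per lb dry air, as in the example above
for t in (70, 80, 90):
    print(f"{t}F: {relative_humidity(t, w):.0f}% RH")
# Same moisture content reads roughly 58% RH at 70F, 41% at 80F, 30% at 90F.
```

The moisture content never changes in this loop; only the temperature does, which is the whole point of the paragraph above.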

People usually think of their air conditioner as providing cool, dry air in the summer, and it does, because it performs both sensible and latent cooling. Sensible cooling lowers the temperature we sense, and latent cooling removes moisture. The air entering the coil may be 78°F with 0.0101 lbs of moisture per pound of dry air. The coil temperature may be 45°F, and thus the leaving air may be 60°F (it won't be 45°F because the air doesn't stay in contact with the coil long enough to reach the coil's temperature). At this point the leaving air may have a moisture content of 0.0062 lbs per pound of dry air. This is a significant reduction in moisture, and it is evidenced by the water dripping from the evaporator coil. The leaving air is much dryer than the entering air.

However, in relative terms, the air coming off the evaporator coil in the air handler has a relative humidity of 100%, or close to it. Remember, cool air can't hold as much water as warm air. When the air entering the coil contacts the cold fins, it cools rapidly. Condensation occurs when air can't hold the moisture it contains. At this point the air is fully saturated, meaning its relative humidity is 100%.

The point to this brief essay is, as I said at the start, relative humidity is all relative - to the moisture in the air and the air temperature. Warm air isn’t necessarily humid; cool air isn’t necessarily dry, relatively speaking.

Mestex and the National Science Foundation Advisors Meet in Dallas

Although it has been far too long since I have posted to this blog due to my travel schedule I have some news to share.

Over the last few days (Oct. 1 & 2) we have been participating in the Industry Advisory Board ("IAB") meeting of the NSF-I/UCRC ES2 ("National Science Foundation-Industry/University Cooperative Research Centers Energy Smart Electronic Systems") research consortium.  That mouthful of letters represents a group of universities and companies whose expressed goal is to reduce the energy consumption of data centers by 20-35%.

The consortium is currently working on fourteen research projects, and Mestex serves as an advisor ("mentor") on three of them.  Two of the projects that Mestex is mentoring involve data center cooling: one covers evaporative and fresh air cooling of data centers, and a second covers contaminants in data centers that use fresh air cooling.  As you might guess, the project on evaporative and fresh air cooling offers the greatest opportunity for the consortium to reach the stated goals.  In order to support that research, Mestex has installed a small data pod at its facility in Dallas and is cooling that data pod with a commercially available Aztec ASC-5 unit.  The ASC-5 has built-in DDC controls that facilitate the use of multiple temperature and humidity sensors for control without any special modifications.  The controls also include a provision for pressure sensing control, and that is also implemented in this case.

In addition to the data that is presented by the standard Aztec DDC controls there are additional thermocouples and sensors installed that are streaming data to researchers at the University of Texas at Arlington.

One of the most critical considerations that prevents many data center operators from reducing their energy consumption by huge amounts is the reluctance to introduce outside air to the facility.  The second Mestex project is focused on that research and we were fortunate to have the input of one of the world's experts on contamination control provide test coupons and laboratory analysis of the results.  Dr. Prabjit "PJ" Singh, of IBM, provides guidance and analysis to companies around the world and is a major source of information for the ASHRAE TC 9.9 committee on data center cooling.  Dr. Singh, Dr. Dereje Agonafer from the University of Texas at Arlington, and several members of the NSF IAB toured the Mestex facility at the conclusion of the meetings this week. 

Drs. Singh and Agonafer are shown here learning about the technology behind the patented "Digital High Turndown Burner" that was developed at Mestex.  Jim Jagers, Mestex Sales Manager,  conducted the tour and provided a "deep dive" into how this unique technology works before the group proceeded to the research data pod for additional discussions.



An "Open Access Project" Update


The Mestex "Open Access Project" continues to move forward so I thought I would provide a brief update on the current research activity and the plans for the next few months.

The installation at the Mestex facilities in Dallas has been brought up to the expected final configuration, with a total of 120 servers, intelligent PDUs, and switches distributed over 4 cabinets.  We have separated the hot and cold aisles with a combination of a hard wall and flexible "curtains"...this has turned out to be one of the more important features of the installation.  The indirect/direct evaporative cooling system is fully functional, although we have also found it necessary to increase the hot aisle exhaust pressure relief in order to reduce the "back pressure" in the hot aisle.

In addition to the combination temperature and humidity sensors that are part of the standard Aztec control system, and used by the DDC control system to manage the operation of the Aztec unit, we have also installed 32 10K thermistors.  These sensors feed information to our data acquisition system, which runs in the background collecting more granular detail about system performance.  The sensors are located on the fronts and backs of the cabinets.
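For readers curious how a raw 10K thermistor resistance becomes a temperature, here is a sketch using the common beta-parameter model. The coefficients shown are generic assumptions for a 10K NTC part, not the actual sensors' datasheet values:

```python
import math

R0 = 10000.0   # ohms at the reference temperature (10K part)
T0_K = 298.15  # 25 C reference, in kelvin
BETA = 3950.0  # assumed beta coefficient; check the sensor datasheet

def thermistor_temp_f(resistance_ohms):
    """Convert an NTC thermistor resistance (ohms) to degrees F
    using the beta-parameter model: 1/T = 1/T0 + ln(R/R0)/beta."""
    inv_t = 1.0 / T0_K + math.log(resistance_ohms / R0) / BETA
    t_c = 1.0 / inv_t - 273.15
    return t_c * 9.0 / 5.0 + 32.0

print(thermistor_temp_f(10000.0))  # at R0 the model returns 77.0 F (25 C)
```

A data acquisition system would apply a conversion like this to each of the channels before logging.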

As I mentioned, we have spent some time resolving hot aisle/cold aisle separation issues.  Although the Aztec unit monitors cold aisle pressure and operates the supply fan to maintain a target positive pressure in the cold aisle, we found that we still had hot aisle air migrating back into the cold aisle.  Over the last few days we have spent time filling small gaps and sealing around the cabinets more carefully, and the results were immediately noticeable.  The cold aisle temperature was reduced by 5 to 6 degrees F.

The other factor contributing to better separation was the reduction of the "back pressure" in the hot aisle.  We had addressed some of this earlier by removing the standard room exhaust grill and replacing it with a screen that had much greater free area.  While that made a measurable difference in server temperature rise, we had simply moved the pressure issue from inside the data pod to the return air ductwork on the Aztec unit.  That has now been resolved by doubling the size of the pressure relief openings in the return ductwork.  Supply fan operation is now improved, server temperature rise is now on target, and supply fan motor power consumption has been reduced.  We monitor and report real time PUE for the pod, and these changes have lowered the real time PUE to between 1.08 and 1.35, depending upon the system operating mode.
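PUE itself is a simple ratio of total facility power to IT equipment power. A minimal sketch of the real-time calculation, with the pod's various metering points simplified into two aggregate readings (an assumption about the instrumentation, made for illustration):

```python
def real_time_pue(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power.
    A PUE of 1.0 would mean every watt goes to the IT load."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: 10 kW of IT load plus 0.8 kW of cooling/fan overhead
print(round(real_time_pue(10.8, 10.0), 2))  # 1.08
```

Lowering fan power at a fixed IT load is exactly what moves this ratio toward the 1.08 end of the range reported above.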

Now that we are beginning to see the kind of stable operation that we were anticipating we have started to plan the next phases of the research.

The Aztec unit is designed to operate in three modes, or some mixture of those modes, depending upon the sensor inputs.  The unit can operate in 100% fresh air cooling mode, in an indirect evaporative cooling mode, or in an indirect/direct evaporative cooling mode.  Each of those modes introduces characteristics that the data center industry wants to research. 

The next round of research will focus on two aspects of fresh air/evaporative cooling:

  • We will be installing coupons in the space to collect data on contaminants and their potential impact on the circuits in the servers.  This project is projected to run for at least 1 month and support is being provided by IBM.
  • Following the collection of this data (and possibly overlapping) we will be installing particle count measuring devices.  These devices will be installed upstream of the filters in the Aztec unit, downstream of the filters, within the cold aisle, and within the hot aisle.  The filter racks in the Aztec unit will allow us to evaluate filters of different MERV ratings and see how well they perform in a typical HVAC unit installation versus the controlled lab environment.

As you can tell, this site offers a unique opportunity for researchers to take their lab research findings and compare them to a real world application with real world equipment.  Mestex is pleased to be a part of this NSF sponsored research into data center cooling technologies.  We will be hosting a tour for the industry advisory board of the NSF-I/UCRC during their upcoming meeting at the University of Texas at Arlington.

NEWS RELEASE


New data center construction expected to boom as demand triples

 

Mestex Open Access Project helps data center operators plan for “build as you grow” expansion

 
DALLAS, April 28, 2014 – The digital revolution is sapping the power grid, but a new approach to data center construction may help reverse the trend of ever-increasing energy consumption for powering and cooling these facilities. To help data center operators better understand their options, Mestex, the industry leader in evaporative cooling systems, is providing a free tool to demonstrate how infrastructure can be better deployed to manage competing demands for more capacity and greater energy efficiency.

 “Data centers are the enablers of this digital revolution,” said Mike Kaler, president of Mestex. “The increase in global digital demand and cloud computing is exponential. As demand rises, data centers that house digital information consume more electricity, half of it being used to cool the facility. We wanted to help people see how energy is being consumed and ways for managing infrastructure and costs.”

The company believes intelligent technology combined with a flexible, scalable and energy-saving approach is the best way to “build as you grow.” Adding plug-and-play cooling units – such as Mestex’s own Aztec Evaporative Cooling Units – as capacity increases is the most economical strategy for data centers to manage expansion or new construction while reducing total cost of ownership. Aztec systems are proven to lower power usage by 70% when compared to traditional air conditioning; the system’s digital controls, when integrated with other building automation systems, can extend that savings even further.

To help data center operators get a realistic picture of how their own expansion might play out, the company recently launched the Mestex Open Access Project to provide information technologists, facility managers and financial executives the ability to evaluate energy-saving concepts in a real-world environment. 

“We’ve opened access to our equipment, controls and data, because we want to encourage energy savings and demonstrate to data center decision makers that there are smart, effective ways to increase efficiency and optimize operations,” Kaler said.

The web-based interface offers visibility into the physical plant and air conditioning system of an operating data center being tested as a part of a project spearheaded by the National Science Foundation. The “open access” gives anyone with Internet access an unembellished look at how a data center is operating, in real time, 24/7.

The Open Access Project harnesses the power of Mestex’s direct digital control (DDC) system, which comes standard on all of its HVAC products and can be easily integrated with other HVAC vendors’ products and building automation systems to create an intelligent network that controls cooling for optimal efficiency, performance and longevity, as well as provides web-based system monitoring and management.

 

Note:

Mestex President Mike Kaler will be hosting a presentation on mission-critical cooling systems on Wednesday, April 30, at 10:30 a.m. at AFCOM Data Center World at the Mirage Casino-Hotel, Las Vegas, Nevada. The company is exhibiting (booth #1227) at the conference April 28 – May 2.

Links:


Mestex Open Access Project Live View  (To access, Internet Explorer 10 or above is required. Enter “guest” as user name and password.)

 

# # #

 

Mestex (www.Mestex.com), a division of Mestek, Inc., is a group of HVAC manufacturers with a focus on air handling and a passion for innovation. Mestex is the only HVAC manufacturer offering industry-standard direct digital controls on virtually all of its products, which include Applied Air, Alton, Aztec, Koldwave, Temprite and LJ Wing HVAC systems.

 

Media Contact:

Christina Divigard

Divigard & Associates

413 341 6780 or Christina@Divigard.com

 

 

NSF/Mestex Research Project Update at SEMI-THERM Conference in San Jose, California

Students from the University of Texas at Arlington will be presenting an update on the progress of a research consortium, partially funded by the National Science Foundation, that is focused on improving the efficiency of data center cooling. This presentation will be made during the SEMI-THERM conference in San Jose, California from March 9-13.

The work presented in this exhibit provides updates on this project since the last industrial advisory board (IAB) meeting at Villanova University in September 2013. The updates include completion of construction of an Aztec ASC-15 cooling unit, attachment of the cooling unit to an IT pod, construction of internal details of the IT pod, construction of a duct for testing various cooling pads, and creation of a computational fluid dynamics (CFD) model of the IT pod and the ASC-15 unit.

The cooling unit, the ASC-15, is capable of operating in pure air-side economization, direct evaporative cooling, indirect evaporative cooling, and/or hybrid modes, and contains two blowers that can deliver up to 7000 CFM to the IT pod. Various parameters of the cooling unit, such as blower rotational speed, inlet air temperature, supply air temperature, and outside air humidity, are available through an online portal. The ASC-15 is connected to the IT pod at the Mestex facility, which provides power and water to the modular research data center. Inside the IT pod, four cabinets, each containing thirty HP SE1102 servers, are arranged in a hot/cold aisle configuration.

One of the HP SE1102 servers was tested in the UT-Arlington lab to determine its maximum power consumption. The maximum measured power consumption is used to calculate the total dissipated heat per rack in the CFD model of the modular research data center. This CFD model will continue to be updated as the IT pod or the cooling unit changes. For example, updates to the cooling pad model will be applied based on results from the various wet cooling pad tests to be performed at UT-Arlington.
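The rack-level heat load used as a CFD boundary condition follows directly from the per-server measurement. The 250 W figure below is an assumed placeholder for illustration, since the measured SE1102 value is not quoted here:

```python
SERVERS_PER_RACK = 30        # from the pod configuration described above
server_max_power_w = 250.0   # assumed per-server maximum draw (placeholder)

# Essentially all electrical power drawn by a server is dissipated as heat,
# so rack heat load = server count * per-server power.
rack_heat_w = SERVERS_PER_RACK * server_max_power_w
rack_heat_btu_hr = rack_heat_w * 3.412  # 1 W = 3.412 BTU/hr

print(rack_heat_w, round(rack_heat_btu_hr))
```

With the assumed figure, each rack would be modeled as a 7.5 kW heat source; the real model would substitute the lab-measured wattage.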

"Open Access Project" Update

Mestex is continuing to refine the systems and information stream as part of our "Open Access Project".  As a reminder, Mestex has established a research data pod as a member of an NSF project to improve the efficiency of data centers.  This pod is being cooled with an Aztec IDEC system managed by the Mestex DDC control system. 

The goals of the research project require frequent experiments.  Currently, Mestex is continuing to complete the basic infrastructure of the pod itself.  The video in the link below shows a quick walk-through update on the installation of a containment system.  As you can see, the installation is nearing completion.  In the meantime, we are running the 51 servers, a 5 kW resistive load, and some ancillary equipment in the 4 cabinets that are in the pod.  https://onedrive.live.com/redir?resid=B173953F824409BC!7016&authkey=!AKcxEKWl7409R-8&ithint=video%2c.MP4

Since the IT equipment is operating while we finish the containment installation, we are able to watch the performance of the IDEC cooling system in real time via http://webctrl.aztec-server-cooling.com/.  This website is accessible to anyone; viewers can log on using the username "guest" and the password "guest".

One of the areas that concerns data center designers and operators is the performance of evaporative cooling systems in the "shoulder seasons," when ambient dry bulb temperatures are relatively low and ambient RH is relatively high.  Last week we had an opportunity to observe just that situation.

The time was 7:55 in the morning.  The ambient conditions were what most people would consider to be the worst scenario for evaporative cooling…relatively low DB and relatively high RH…in this case, 59.6 DB and 89% RH.  Under those conditions most people would expect the cold aisle conditions to be either too warm or too humid.

However, the cold aisle was operating at 78.9°F DB and 56.7% RH.  The cold aisle setpoint is 80°F DB.  The cooling tower integrated into the Aztec IDEC unit was on, the airflow dampers were positioned for 100% return from the hot aisle, and the system was providing enough cooling (even with this high ambient RH) to maintain the cold aisle temperature and an 11 to 12 degree rise across the servers.  Note also that these conditions are inside the ASHRAE TC 9.9 A1 Allowable limits.
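The link between airflow, the temperature rise across the servers, and heat removed can be checked with the standard sensible-heat equation. The 7000 CFM figure below is the unit's published maximum rather than a measured operating point, so treat the result as an upper-bound estimate:

```python
def sensible_heat_btu_hr(cfm, delta_t_f):
    """Standard air sensible-heat relation: Q (BTU/hr) = 1.08 * CFM * dT(F).
    The 1.08 factor bundles standard air density and specific heat."""
    return 1.08 * cfm * delta_t_f

# 7000 CFM (unit maximum) at the midpoint of the observed 11-12 F rise
q = sensible_heat_btu_hr(7000, 11.5)
print(round(q), round(q / 3412, 1))  # BTU/hr removed, and the same in kW
```

At the unit's full airflow, an 11 to 12 degree rise corresponds to roughly 25 kW of heat being carried out of the hot aisle.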
 
The "Open Access Project" will be continuing throughout 2014 and, most likely, into 2015.  This will provide ample opportunity to observe a wide range of operating environments for the IDEC system.  During that time there will also be research on filtration, fresh air cooling, and further refinement of control algorithms for fully integrated IDEC systems such as the Aztec product.
 

SHOWTIME!!!

I guess it must be that time of the year because the trade shows are starting to pick up steam.  The Dallas division of Mestek, Mestex, has several upcoming trade show events.

First up will be the annual AHR exhibit in New York from January 21 to 23.  Dallas will be exhibiting a demonstration model of the industry's most advanced indirect evaporative cooling system, the Aztec ASC.  This small scale model illustrates the use of high heat transfer copper tube/aluminum fin cooling coils, redundant direct drive plenum fans, and the industry leading DDC control system. You can see this display in booth 1503.


Next will be the annual Rental Show in Orlando from February 8 to 13.  For this trade show Dallas will be exhibiting our new line of Koldwave air-cooled portable air conditioners.  These units are perfect for rental companies.  You can meet with Koldwave sales manager, Jeff Wilson, in booth 1864.

Indirect Evaporative Cooling Research Project Launched

Aztec ASC 3-D Model for CFD Research

ASME Paper Documents CFD Modeling of Aztec IDEC System

The 2013 ASME “International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems”, aka “InterPACK 2013”, is just concluding in San Francisco.  As the conference title implies, there are papers and presentations from all over the planet focused on research into improving electronics, computers, and data centers.

One of those papers presents results from an on-going research project that Mestex has started with the College of Engineering at the University of Texas at Arlington.  This research project will likely go on for a couple of years and this paper presents some of the first findings that are being used to establish a “baseline” for the rest of the research.

The paper is listed in the proceedings as “InterPACK2013-73302”.  The human-readable title is “CFD MODELING OF INDIRECT/DIRECT EVAPORATIVE COOLING UNIT FOR MODULAR DATA CENTER APPLICATIONS” and the paper covers exactly what the title suggests.  The IDEC product that the paper covers is the Aztec ASC-20 and the goal is to establish that the factory data that we present in our literature can be validated against a detailed CFD model of the product.

By modeling the Aztec ASC-20 components and creating the 3-D CFD model using factory dimensional drawings the researcher was able to confirm that the published factory data is accurate and the ASC-20 will perform as predicted based upon the operating parameters.  This important result can be used to further our research into optimizing the performance of evaporative cooling and fresh air cooling solutions for mission critical/data center applications.  A full scale modular data center mock up is being installed at the Mestex facility and additional documentation and validation of the performance will be conducted over the next several months.  The CFD baseline model will be used to simulate filter performance and airflow changes prior to making the physical changes to the research module.

The Aztec and Alton DEC evaporative cooling products have been used in the industrial and large commercial market since 1946.  Over those 66 years the products have been refined and optimized.  This research project will take the product lines to another level of thermal performance, water use optimization, and control software optimization, with a specific emphasis on the needs of the mission critical market.

Politics and the Building Industry

Climate Change Initiative


Coal Fired Power Plants in Danger
I am not sure how many folks listened to President Obama's speech this week regarding climate change initiatives.  I know that I was not one.  However, I have read the document that served as the background for the speech and there are some things in this document that folks in the building design community and mission critical world, in particular, should pay attention to.  Those things could have a significant impact on the types of systems that we can design and implement in the coming years.

The theme of the speech and the document is primarily reduction of carbon emissions and increases in "renewable" sources of energy.  There are some other things in the document that are focused on electric generation infrastructure.  However there is a potentially ominous element to that topic that is related to the overarching goal of reducing carbon emissions. 

By means of a "Presidential Memorandum" Mr. Obama has instructed the EPA to accelerate transitioning power plants to "clean" energy sources, i.e. anything but coal.  As we have seen in other cases in the HVAC industry, as soon as the EPA has a mandate of that sort they move quickly to implement regulations that may, or may not, be carefully thought out...raising the old "unintended consequences" issue.

In my opinion the danger lies in rapidly removing significant generating capacity from the grid at a pace that cannot be matched on the construction side.  Even though the document also outlines a directive to speed up permitting of power plants, it is still a fact that building a multi-megawatt power plant can take years.  With coal being the primary energy source for roughly 40% of US power plants, you can see how a too-rapid implementation of rules that curtail their use can lead to problems.  Many states already operate on the edge of rolling blackouts and brownouts each summer, so shutting down or limiting coal-fired plants could get ugly.

Exacerbating this problem is the rapid and continuing growth of the data center market.  When these things come on line they gobble up megawatts of generating capacity in a single site...and they can come on line in a matter of months, not years.  Even if they never reach full utilization the power companies must be prepared to provide that power.  ASHRAE and others have tried, somewhat in vain, to communicate that these centers can operate without the heavy energy use of compressors or chillers.  As long as the local electric utility still has generating capacity that can be allocated to the data center this is OK...although not a very "sustainable" approach if you believe in that concept.  But, if that same utility now has to shut down 10 or 20 percent of its generating capacity then there may simply not be enough power to allow the luxury of overly cold air in the data center.

The implications for other building types are similar, although not nearly as extreme.  Systems that optimize the use of outside air as their primary cooling source augmented by smaller compressor or chiller plants could become the basis of design.  Concepts such as chilled beams that utilize higher chilled water temperatures and minimal fan power might need to migrate to smaller buildings than you see them in today.  And building shells will need to make more extensive use of passive and active shading systems.

So, once again, the building industry is going to be impacted by external forces that may have the best of intentions but that will also require rethinking of how we design and operate those buildings.

Dusting Off Your Data

CONTAMINANTS IN THE DATA CENTER

Time to get back on my soapbox again…this time it is about “contaminants” in data centers as an excuse to avoid using fresh air cooling or having outside air enter the white space.  The bottom line is that unless your data center is located in “an emerging country” then the odds of a contaminant-created hardware failure in anything like a short time frame are about the same as winning the lottery…assuming you take some pretty basic steps in the design.

Contaminant control, or more correctly, concern over contaminant control has been around for decades.  I remember doing some research over 25 years ago on the impact of ozone on telecommunications equipment.  Bell Labs, as it was known long ago, had performed some pretty interesting tests to document what could be a very real problem under the right circumstances.  The results of those tests indicated that, with the exception of certain locations, the air in the equipment room was worse than the air outside so it made more sense to flush the room with outside air than to avoid bringing outside air into the space.

Particle and gaseous contaminants CAN be a problem if ignored.  However, the extent of the problem and how quickly it manifests itself needs to be considered. 

Phenomena like copper creep and circuit bridging do occur…but only when the conditions at the server are right to support those failure modes.  Two things generally need to be in place for the failure mode to even begin.  First, there needs to be a fine coating of dust particles on the circuit boards.  Second, the relative humidity at the board needs to be at the deliquescent RH…or the point where the dust starts to absorb moisture and become “wet”.  If the RH is too low then dust might affect localized temperatures on the board but the mechanism to cause bridging simply does not exist.  The converse is also true…no dust…then no mechanism even with a relatively high humidity level.

Dust can come from anywhere.  Every time someone enters the data center they bring in some amount of dust particles.  Every time a box is opened in the data center particles are created.  And, yes, every time outside air is brought into the data center it is possible that dust can enter.  In fact, a data center with no outside air is actually vulnerable to the worst kind of dust intrusion…uncontrolled infiltration through doors, cracks, pipe openings, or wind pressure.  Maintaining a positive pressure in the white space helps to prevent infiltration and keeps the worst dust (and gases) out of the data center. 

ASHRAE, through the TC 9.9 committee, has set a target for data center “cleanliness”.  It is ISO Class 8.  ASHRAE has also noted that ISO Class 8 conditions can be met with a MERV 8 filter…a common and inexpensive filter available at virtually any HVAC parts house.  If the air being filtered is coming from the outside then ASHRAE recommends a MERV 11 or MERV 13 filter.  These might not be quite as common as the MERV 8 but they are also readily available and can fit in a standard 2” filter rack.

The interesting side note about the ASHRAE recommendations is just how extremely conservative they are.  ASHRAE recommends no more than 15 µg/m³ of “fine” particles…defined as particles less than 2.5 µm in size.  However, IBM (who should know something about computers) has a limit of 150 µg/m³ and a “fine” particle definition of particles less than 5 µm in size.

Once again, owners are being led down a path to purchase cooling systems and equipment that fail to optimize their energy savings through an inflated fear of something that happens very rarely in the developed world and is easily controlled with proper filtration.  Products such as our Aztec ASC indirect evaporative cooling systems are designed with MERV 14 filters in mind and can actually accept MERV 16 filters…the highest MERV rating…suitable for operating rooms and capable of removing all bacteria and most tobacco smoke.  This allows the Aztec system to optimize the use of fresh air cooling and use a more efficient heat transfer system than air-to-air heat exchanger systems…and still exceed the extremely conservative ASHRAE recommendations for particulate control.

Mestex Hosts Independent Representatives at ASHRAE

Mestex Representatives Attend ASHRAE

With the annual ASHRAE/AHR meetings and exhibits in Dallas for the first time in 6 years, Mestex took advantage of the opportunity to host over 100 independent Mestex reps at the Mestex facility. We were also joined by a number of Mestek corporate employees including Stewart Reed, Mestek CEO.



Mestex DDC Dashboard
Test Area Demonstration
The reps were provided with guided factory tours that included presentations at four key areas in the plant...the gas-fired products test area, the hydronic products test area, the "Mestex Mall" show unit area, and the top secret Mestex R&D area. In addition to highlighting the extensive final test processes that every Mestex product endures the tour also highlighted the latest version of the Mestex DDC control system with full web-enabled interface and user "information dashboard".


"Dallas" Based Theme
Following the tours the reps gathered in the stage area that was set up in the plant for formal presentations on new software technologies that Mestex is introducing in 2013, a more detailed look at the DDC "dashboard", and a glimpse into a huge new sales opportunity. The presentations wrapped up with the introduction of the 2013 Sales Incentive program. The overall formal presentations were introduced by Mestex personnel who played the parts of characters from the TV series "Dallas".
Mestek Booth at ASHRAE/AHR Show

Over the following three days, Mestex personnel hosted a number of engineer and customer visits to the facility and also attended the ASHRAE/AHR show as part of the large Mestek contingent.

 

How I Spent My Summer Vacation

It has been a while since I have posted anything to this blog...no, I was not on sabbatical on some desert island...I have been traveling around North America talking to consulting engineers, contractors, and data center owners and operators.  This posting just provides a few insights that I garnered over the last 2 months on the road.

First, the data center/mission critical market continues to occupy the minds and the design resources of many, many companies in the design community.  It is clear that this is a market segment that is vibrant and all indications are that it will continue to be for quite some time to come.  The latest issue of Datacenter Dynamics FOCUS indicated that the world is now consuming over 300 TWh annually to drive data centers, with the US alone consuming over 25 TWh.  The consumption in the US is projected to grow over 9% in 2013.  While this information points to a growing market it also points to the urgent need for improved operating efficiency in data centers.

Second, and related to the first item, is the lack of knowledge about new "best practices" in data center design.  I have talked to dozens of engineers, contractors, and data center people who are not aware of the latest design guidelines from ASHRAE.  In fairness, those guidelines were only officially announced a few weeks ago...but they have been rumored and discussed for about a year now.  I mentioned in one of my earlier posts that education of the design community is an important, and ongoing, task.  This has been reinforced to me over the last 2 months.

Third, for those engineers and contractors who understand and embrace the new standards, is the challenge of convincing the data center people to adopt those standards.  This is less of a problem at the top levels of the data center company than it is on the floor of the data center.  The IT equipment operators who live in "the white space" seem not to understand the allowable operating temperatures of the equipment that they manage every day.  I have heard many different reasons for their reluctance to adopt the new best practices but I think it comes down to fear.  Because of stringent SLAs the operators worry about losing any equipment for any period of time...even though there is mounting research that this fear is unfounded.

Fourth, I have heard of several cases where the local electric utility has started to put limits on the available service capacity for planned centers.  In the US we are so comfortable with the idea that our electric grid can provide unlimited power that we forget that is not true.  We have a fixed number of power plants with only so much generating capacity.  With the tremendous growth of data centers, and data centers with 300 to 500 watt per square foot electrical demands, there is a limit to what a utility can do.  And timing is another element of the equation.  A data center can be built in a matter of months...a power plant takes years.  So even when a utility sees the demand coming they cannot add capacity as quickly as the demand can be added.

So, these are a few observations from the last couple of months.  Of course there is more to the story and feel free to comment on this post with any questions you might have.  I will try to respond as quickly as possible.

How to Save Almost $100,000 Per Year In Your 1 Megawatt Data Center


Over the last few weeks while I have been traveling there have been some interesting bits of information released in the mission critical world.
For example, Dell introduced their 12th generation PowerEdge servers.  This generation of servers is warranted to handle temperature excursions up to 45 degrees C, or 113 degrees F, for up to 90 hours per year.  One of Dell’s rationales behind marketing the server at those conditions was to allow fresh air cooling in virtually the entire continental US.  Other research by Dell has indicated that their servers can operate 87% of the year in Washington, DC using fresh air cooling alone.

The energy saving potential of raising the inlet temperatures that high can be enormous.  Instead of running chillers or compressors 8,760 hours a year they are only operating 1,138 hours per year. 
To put that into numbers is difficult but let’s try a little example.

If the PowerEdge server power consumption is 300 watts, then the cooling system must remove 300 watts times 8,760 hours per year, or 2,628 kWh of heat (8,961,480 Btu).  That heat can be removed by mechanical cooling, fresh air cooling, or a combination of the two.
A pretty efficient HVAC system will remove about 4.5 watts of heat per watt of electrical energy used.  So cooling that PowerEdge server with mechanical cooling alone will require 2,628 kWh of heat divided by 4.5, or 584 kWh of compressor energy.

To cool that same server using fresh air for 87% of the year will require only 75.8 kWh of compressor energy.  Of course, the fan energy stays the same in both cases, but the compressor savings of 508.2 kWh PER SERVER can really start to add up.  At an aggressive electric rate of 4.5 cents/kWh that amounts to $22.87 PER SERVER PER YEAR.
At modest densities of, say, 40 servers per rack, the savings amount to $915 PER RACK PER YEAR.  Now consider how many racks are in the typical server room or data center.  If the data center has a server load of 1 megawatt, then a density of forty 300-watt servers per rack translates into 83 racks.  So the annual savings would be almost $76,000 in this example.

To make the savings even greater the same HVAC unit that provides the fresh air could also provide indirect evaporative cooling and completely eliminate the compressor-based cooling…adding another $3.50 PER SERVER PER YEAR of savings.  That would add another $11,620 PER YEAR in savings.
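The arithmetic in this example is easy to reproduce.  The sketch below uses only the figures quoted above (300 W servers, about 4.5 W of heat removed per W of compressor power, fresh air cooling 87% of the year, 4.5 cents/kWh, and 40 servers per rack); it is an illustration of this post's math, not a sizing tool:

```python
SERVER_W = 300            # watts per server
HOURS_PER_YEAR = 8760
COP = 4.5                 # ~4.5 W of heat removed per W of electricity
FRESH_AIR_FRACTION = 0.87
RATE = 0.045              # $/kWh

heat_kwh = SERVER_W * HOURS_PER_YEAR / 1000         # 2,628 kWh/server/yr
mech_kwh = heat_kwh / COP                           # 584 kWh of compressor energy
fresh_kwh = mech_kwh * (1 - FRESH_AIR_FRACTION)     # ~76 kWh with fresh air cooling
savings_per_server = (mech_kwh - fresh_kwh) * RATE  # ~$22.87/server/yr

SERVERS_PER_RACK = 40
racks = 1_000_000 // (SERVER_W * SERVERS_PER_RACK)  # 83 racks at a 1 MW server load
annual_savings = savings_per_server * SERVERS_PER_RACK * racks
print(f"${annual_savings:,.0f} per year")           # ~$75,900, i.e. "almost $76,000"
```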

Availability

In the data center world there are several metrics related to “up time”.  You hear terms like SLA (“Service Level Agreement”) that define how many hours out of 100,000 that the servers are guaranteed to be up and running.  Data center people like to talk about SLAs that are “4-9s” or “5-9s”.  “4-9s” would be 0.9999…or the servers will be up 99.99% of the time.

There are a couple of other metrics that are directly related to the cooling equipment.  One is MTBF or “Mean Time Between Failures”.  Another is MTTR or “Mean Time to Repair”.  A third metric is the most meaningful and it is “Availability”.  “Availability” is a measure of how many hours out of 100,000 that the system will be available when you consider MTBF, MTTR, and routine service.
The formula for Availability is:    MTBF/(MTBF+MTTR)

When evaluating the Aztec indirect evaporative cooling unit and its components for a recent data center project; using MTBF and MTTR values from the Aztec Engineering Department, Technical Service Department, and Production Departments; the following “Availability” numbers can be derived:
  • For routine maintenance of evaporative media the “Availability” is 0.999871
  • For the MTBF for the pumps and motors the “Availability” is 0.9999333 to 0.9999555
Since the routine maintenance “Availability” is one that can be planned in a way that will not disrupt the overall “Availability” of an N+1, or better, facility, it really doesn’t matter that it is only “3-9s”.  In the cases where a failure might occur (MTBF cases) the typical Aztec product is “4-9s” across the board.
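The formula itself is simple to apply.  The MTBF and MTTR values below are hypothetical placeholders chosen only to illustrate the calculation; the actual figures above come from Aztec's internal engineering, service, and production data:

```python
def availability(mtbf_hours, mttr_hours):
    """Availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Assumed values for illustration only.
pump_mtbf = 60_000   # hours between failures
pump_mttr = 4        # hours to repair
print(f"{availability(pump_mtbf, pump_mttr):.6f}")  # 0.999933 -- "4-9s" territory
```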

The Aztec indirect evaporative cooling unit can achieve these high levels of "availability" due to the inherent simplicity of a typical evaporative cooling system.  Fans and pumps are considered to be the only significant components in an evaporative cooling system that can fail.  In the case of the Aztec product these components are selected for an expected life of 200,000 hours...probably far longer than the building itself will be used for its original purpose.

A final consideration that was reviewed during this analysis was the skill level required for each repair or maintenance task.  Although this factor cannot be included in a typical metric such as MTTR it is an important factor for the building owner to consider.  Since an evaporative cooling unit such as the Aztec unit contains no refrigerants the vast majority of tasks can be accomplished by what would traditionally be called facilities maintenance personnel.  No special licensing would be required.  It actually turns out that some of the smallest elements of the system are the only ones that might require a licensed service technician.  Replacing contactors and relays in the unit control and power circuits would most likely require a licensed electrical service technician.

In general the "availability" of an evaporative cooling system, such as the Aztec system, will be at least as high as any competing technology and, likely, higher.


Preaching to the Choir

Electrical Power Meters Keep Spinning
I have had a busy few weeks traveling to meetings and visiting with owners, operators, engineers, and researchers.  This has given me an interesting perspective and awareness of an issue that our industry needs to address.  My awareness of this issue was increased by an editorial in Mission Critical Magazine that bemoaned the lack of progress in data center design due to secrecy regarding "best practices".

I came away from all of those meetings with the sense that there are many very smart people who know how to design more efficient solutions to energy use in mission critical applications.  "Best practices" can be described by experts from the largest server manufacturers, global data center developers/operators, and from academia.  The issue is that we are all sitting around a large table in a closed meeting room and sharing that knowledge with others who already have a pretty good idea what to do.  We are "preaching to the choir".

The result is that the vast majority of data centers, server rooms, and telecom facilities are operating in very inefficient ways.  While a Microsoft might be able to design a data center with a 1.2 PUE, the rest of the world is struggling to reach a 2.0.
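PUE ("Power Usage Effectiveness") is total facility energy divided by IT equipment energy, so everything above 1.0 is cooling and electrical overhead.  A back-of-envelope comparison of 1.2 versus 2.0 for a 1 MW IT load shows what is at stake (the electric rate is an assumption for illustration):

```python
IT_LOAD_KW = 1000        # 1 MW of IT equipment
HOURS_PER_YEAR = 8760
RATE = 0.045             # $/kWh, assumed

def overhead_kwh(pue):
    # PUE = total energy / IT energy, so overhead = (PUE - 1) * IT energy
    return (pue - 1) * IT_LOAD_KW * HOURS_PER_YEAR

extra = overhead_kwh(2.0) - overhead_kwh(1.2)
print(f"{extra:,.0f} extra kWh, ${extra * RATE:,.0f} per year")  # ~7 million kWh, ~$315,000
```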

This came out in a technical committee meeting at ASHRAE's mid-year meeting a few days ago.  A comment was made by a server cooling system manufacturer that he finds it very difficult to convince smaller users to adopt the latest operating standards that could save the user tens of thousands of dollars a year in energy costs.  This sentiment was echoed by several around the room and pointed to how difficult it has been to educate the broader public on the reliability of modern equipment in warmer rooms.

And when I say "broader public" I mean just that.  The mechanical design director for a global retail data center operator told me that he knows his equipment will run just fine at 78 or 80 degrees F inlet temperatures but his customers have not gotten the message and demand a "cold" room.  It seems that until corporate IT managers and executives understand all of this we will continue to see skyrocketing energy use by data centers.  Even small server rooms could benefit from elevated temperatures if key elements of "best practices" were implemented.  So called "legacy" data centers might be difficult to retrofit but they can certainly be upgraded with the basic elements of "best practices"...if only the occupants understood what is possible.

The industry has a massive educational challenge if it is to stem the rising cost and consumption of energy.  And the education cannot come soon enough because the projections are that server power densities will continue to climb and data storage power densities will climb even faster.  Today we talk about 300 watt per square foot densities but systems are being designed already that push almost 10 times that density.  It may seem that we have an endless supply of power from the grid but there are only so many power plants around the world and building a new one takes a decade or longer...data power consumption grows at a much faster rate and will stress grids around the world eventually if we cannot educate the "broader public" more effectively.