
Cybersecurity

You have probably heard or read about the "Internet of Things," or "IoT" as it is called.  The number of devices being connected to the Internet is staggering, with some projections of over 26 billion devices connected by the end of this year.  Many of those devices are going to be HVAC products, either via a connected Building Management System or as "stand-alone" devices with remote monitoring and diagnostic capabilities.



AHRI recently sponsored a meeting to discuss the security implications of connected HVAC products. It has already been acknowledged that one of the major "hacks" in the last few years (Target stores) was made through the HVAC equipment.  One of the messages of the AHRI meeting was that HVAC equipment is becoming a key target for hackers (either domestic or foreign) due to the lack of rigorous "cybersecurity" protection.  In one study a building system was tested using four attack models and 54 "threat vectors" were discovered.

The need to increase HVAC cybersecurity mechanisms is obvious in the Target case but there are other scenarios that cause concern to the government and utilities.  Many products are now being connected to the electric grid for purposes of load management or to implement real time pricing strategies.  The fear is that lax security at the HVAC equipment level could allow a hacker to penetrate and disable parts, or all, of the electric grid through the same ports used for communication with the grid.  Hacking into a building system that is not isolated from the occupants' business network would obviously open the door to financial information, proprietary product information, and personnel information that could be extremely damaging to a business.  During the meeting it was noted that small businesses that have been hacked have a high probability of going through bankruptcy due to the cost of recovery.

But suppose the HVAC equipment is not connected to the Internet but only to the building management system, or is even a stand-alone piece of equipment?  Could such a system be hacked as well?  I would suggest that, although more difficult, it is entirely possible.  Most modern HVAC equipment operates with a digital control system.  That controller will have a port used for diagnostics or software updates.  A "bad actor" with a laptop and a cable could gain total control of the unit and disrupt a business operation through temperature or ventilation control settings.  Interestingly, in the AHRI meeting it was noted that the three most common attack pathways were WiFi, Bluetooth, and finally an Ethernet cable...so a physical connection as mentioned above is not even necessary.

The financial, legal, and reputational impact on an HVAC manufacturer whose equipment is used as the pathway for a hack can be substantial.  Unfortunately there are no current cybersecurity standards for HVAC equipment as there are for medical devices, vehicles, military applications, or financial institutions.  A key goal of the AHRI meeting was to identify which current standards might be adapted to the HVAC industry and what role AHRI would play in establishing an industry standard.  There was also discussion of whether or not this should lead to an industry certification process so that manufacturers certify their equipment and processes to serve as an affirmative defense in a case where their equipment was the doorway into a hack.

In the meantime, before an industry standard might be created, manufacturers are warned to establish their own cybersecurity policy...updated frequently...as a means of establishing that they are following "best practices" with regard to cybersecurity.  There are a number of cybersecurity policies from NIST, ASHRAE, UL, and others that could be modified or adapted by an individual company to create such a policy.  NIST SP 800-171 is one such document; it includes a comprehensive checklist of security steps that could be used as a model.

The bottom line is that no matter how an HVAC manufacturer chooses to respond to this growing concern some response is better than no response at all.

ASME Paper Documents Reliable Data Center Operation With Outside Air and Evaporative Cooling

One of the longest running and most debated topics regarding data center operation is whether or not you can reliably cool a modern data center using outside air alone or with supplemental evaporative cooling.  If you can successfully operate a data center without using any compressorized equipment there are obviously huge energy and money savings.

The two key factors that hold operators back from implementing an obvious savings strategy are fear of equipment failures due to temperature/humidity excursions and fear of failures due to airborne contaminants.  While server manufacturers publish data in their specification sheets that clearly indicate their equipment can tolerate a wide range of temperature and humidity, there is not much information regarding the impact of particulates and other contaminants.  ASHRAE has recognized the robustness of modern IT equipment by expanding the recommended and allowable temperature and humidity ranges in their widely followed data center design guidelines.  Very little is said regarding air quality other than a recommendation to use at least a MERV 8 filter system.

Over the last 5 years the Mestex division of Mestek has hosted a National Science Foundation research site at its manufacturing facility in Dallas.  This site is part of an Industry/University Cooperative Research Center with principal research from the Mechanical and Aerospace Engineering Department at the University of Texas at Arlington.  A fully instrumented "data pod" has been operating using a commercially available indirect/direct evaporative cooling system from Mestex that can also operate in 100% fresh air mode.  In addition to the dozen sensors normally included with the Aztec brand IDEC system from Mestex, the "pod" includes an array of 64 sensors located on the front and rear of the four server racks.  Data has been streamed from all sensors every 15 seconds for the last 4 years.  In addition to this detailed tracking of temperature and humidity conditions, there have been a number of studies conducted using copper and silver coupons to evaluate the corrosion potential of operating with outside air and evaporative cooling.  Keep in mind that this application is in Dallas, Texas...a relatively hot/humid climate area.  In addition, because the "pod" is installed between two manufacturing buildings in an industrial zone near downtown Dallas, the measured air quality around the "pod" is classified as G2, or moderately harmful to printed circuit boards.
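To give a sense of what that kind of data stream involves, here is a minimal, purely hypothetical sketch of a polling loop that samples an array of 64 rack sensors every 15 seconds and appends the readings to a CSV log.  The sensor naming and the read_sensor() function are placeholders of my own, not the actual instrumentation used at the Mestex/UTA site.

```python
import csv
import random  # stand-in for real data-acquisition I/O
import time
from datetime import datetime

# Hypothetical sensor layout: 4 racks x 2 faces x 8 heights = 64 measurement points.
SENSOR_IDS = [f"rack{r}_{face}_{pos}"
              for r in range(1, 5)
              for face in ("front", "rear")
              for pos in range(1, 9)]

def read_sensor(sensor_id):
    """Placeholder for the real sensor read; returns a temperature in degrees F."""
    return 75.0 + random.uniform(-3.0, 3.0)

def log_sweep(writer):
    """Record one timestamped reading from every sensor."""
    timestamp = datetime.now().isoformat(timespec="seconds")
    for sensor_id in SENSOR_IDS:
        writer.writerow([timestamp, sensor_id, round(read_sensor(sensor_id), 2)])

if __name__ == "__main__":
    with open("pod_sensor_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:               # one sweep every 15 seconds, indefinitely
            log_sweep(writer)
            f.flush()
            time.sleep(15)
```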

The June 2017 edition (Volume 139) of the ASME Journal of Electronic Packaging includes a paper presenting the results of the last 4 years of research at this site.  The paper, entitled "Qualitative Study of Cumulative Corrosion Damage of Information Technology Equipment in a Data Center Utilizing Air-Side Economizer Operating in Recommended and Expanded ASHRAE Envelope," provides a comprehensive look at the impact of operating a data center in a "real world" application.

The most interesting point presented in the summary section of the paper is that, in spite of the servers installed in this test site already being several years old, there has not been a single server failure in the entire four years of operation.  The ability to dramatically reduce the cost of operating a data center...without unfounded concerns about reliability...is finally being proven true.

Why Do We Design Thermos Bottles?

Over the last couple of months since my last posting I have been very busy managing our movement into new markets and pursuing new opportunities.  One of the benefits of taking the deep dive into these markets is getting to look at some of the details of product design and application to the specific problem to be solved.

This has raised a question in my mind.

Why does the mission critical industry design "thermos bottles" and then fret over the cost of and methods of getting rid of the heat that all those servers generate? 


There is something that strikes me as illogical about creating buildings or modular data centers with super-insulated walls and ceilings that are guaranteed to trap the heat that is dumped into the hot aisle (assuming they have aisle separation).  Then the mechanical system is tasked with rejecting all of that pent-up energy without costing the owner a fortune.  Is it any wonder that data centers are one of the largest consumers of electrical energy in the world?

Centuries ago architects and designers figured out that it is more efficient to cool a space if you simply dump the heat out to the atmosphere.  Buildings used to be designed to take advantage of stratification and stack effect to cause the hot air generated in the space to rise and leave the building.  No need to cool the air back down to a reasonable temperature and put it back into the space so that you can heat it all up again.  Lofted ceilings and roof lines came into the design world for a reason. 

So, why is the data center different?  Frankly, I don't know.  Why not take the hot aisle air and vent it out to the atmosphere?  Sure, you have to replace that exhausted air with new air from the outside but unless the data center is located in Death Valley the odds are that the air being brought into the building is at a lower temperature than the air that would be recycled from the hot aisle of a data center designed to operate under the latest ASHRAE TC 9.9 guidelines for best practices. 

My best guess as to why we continue to do what is intuitively illogical is inertia: "We have always done it that way."  I think it is time to rethink the old ways and come up with creative solutions in the design of data centers.

Education and Training

Since I have been traveling extensively over the last few weeks I have not been able to give much thought to our blog.  However, the travels have also provided a little fuel for some comments.

First, I continue to be surprised/pleased to hear more and more presentations and discussions about evaporative cooling of data centers.  It seems that "the big guys" get it...cooling data centers costs a fortune using compressors/chillers and the servers can handle much higher temperatures than people realize.  If you run down the roster of large international web service or cloud service providers you will find that most of them have already implemented evaporative cooling or they have it in the construction plans. 

As great as this is there are still market forces that are conspiring against this highly efficient cooling solution.  One is the concern over humidity levels in the data center.  This concern is compounded by the common use of relative humidity as the conversation point when it is actually absolute humidity that should be considered.  This topic will likely be a point of debate for a long time to come since some of the larger companies have concluded that absolute humidity doesn't matter in their facilities...especially with 2 or 3 year server refresh rates...and other members of this progressive group are not sure and choose the "safe path" of limiting absolute humidity or dewpoint in their spaces.
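To see why the distinction matters, here is a small sketch of my own (using the common Magnus approximation, not anything from the companies mentioned above) that converts dry-bulb temperature and relative humidity into absolute humidity and dewpoint.  The same 50% RH carries very different amounts of moisture at different temperatures, which is exactly why relative humidity alone is a poor conversation point.

```python
import math

def absolute_humidity_g_m3(temp_c, rh_percent):
    """Approximate water vapor density (g/m^3) from dry-bulb temperature and RH."""
    svp_hpa = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))  # saturation vapor pressure
    vp_hpa = svp_hpa * rh_percent / 100.0                          # actual vapor pressure
    return 216.7 * vp_hpa / (temp_c + 273.15)

def dewpoint_c(temp_c, rh_percent):
    """Approximate dewpoint (deg C) via the Magnus formula."""
    gamma = math.log(rh_percent / 100.0) + 17.67 * temp_c / (temp_c + 243.5)
    return 243.5 * gamma / (17.67 - gamma)

# The same relative humidity at roughly 65 F and 90 F dry-bulb:
for temp_c in (18.3, 32.2):
    print(f"{temp_c:.1f} C at 50% RH -> "
          f"{absolute_humidity_g_m3(temp_c, 50):.1f} g/m^3, "
          f"dewpoint {dewpoint_c(temp_c, 50):.1f} C")
```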

The one area where it seems that all of the large players agree is with regard to temperature.  It is virtually universal that ASHRAE 9.9 recommended guidelines are acceptable and, for many of these users, ASHRAE 9.9 allowable temperatures are OK.

The challenge for the industry is still finding a way to filter this information and confidence down to smaller operators and owners.  I have heard it described as an education issue but is that truly the case?  It is hard to find a computer or data center related design publication these days that does not promote higher temperatures as a feasible solution for cutting operating costs.  Are we just too busy to read these articles or do we not believe the wealth of research and experience that backs up the statements?

At a recent conference on data center design I sat at a lunch table with a group of design engineers and a manager of 13 data centers.  When asked how he learned about managing those centers, his response was that he was self-taught by attending conferences and talking to "experienced" data center managers.  So the work by ASHRAE and others was not a major factor in deciding on appropriate operating temperatures.  What he was learning was what those other managers had been doing over the last decade...practices going back to the "old days," when electric costs were low, low data center temperatures were the norm, and research had not yet shown that to be unnecessary.

So, if education is the issue then how do we go about it?  What mechanism will get the message through the daily clutter of information and time demands?  I don't have the answer...if I did I would implement it immediately....but it seems to be a key to moving the industry forward.

SHOWTIME!!!

I guess it must be that time of the year because the trade shows are starting to pick up steam.  The Dallas division of Mestek, Mestex, has several upcoming trade show events.

First up will be the annual AHR exhibit in New York from January 21 to 23.  Dallas will be exhibiting a demonstration model of the industry's most advanced indirect evaporative cooling system, the Aztec ASC.  This small scale model illustrates the use of high heat transfer copper tube/aluminum fin cooling coils, redundant direct drive plenum fans, and the industry leading DDC control system. You can see this display in booth 1503.


Next will be the annual Rental Show in Orlando from February 8 to 13.  For this trade show Dallas will be exhibiting our new line of Koldwave air-cooled portable air conditioners.  These units are perfect for rental companies.  You can meet with Koldwave sales manager, Jeff Wilson, in booth 1864.

Politics and the Building Industry

Climate Change Initiative


Coal Fired Power Plants in Danger
I am not sure how many folks listened to President Obama's speech this week regarding climate change initiatives.  I admit that I was not one of them.  However, I have read the document that served as the background for the speech and there are some things in this document that folks in the building design community, and the mission critical world in particular, should pay attention to.  Those things could have a significant impact on the types of systems that we can design and implement in the coming years.

The theme of the speech and the document is primarily the reduction of carbon emissions and increases in "renewable" sources of energy.  There are some other things in the document that are focused on electric generation infrastructure.  However, there is a potentially ominous element to that topic that is related to the overarching goal of reducing carbon emissions.

By means of a "Presidential Memorandum" Mr. Obama has instructed the EPA to accelerate transitioning power plants to "clean" energy sources, i.e. anything but coal.  As we have seen in some other cases in the HVAC industry, as soon as the EPA has a mandate of that sort they move quickly to implement regulations that may, or may not, be carefully thought through...the old "unintended consequences" issue.

In my opinion the danger is rapidly removing significant generating capacity from the grid at a pace that cannot be matched on the construction side.  Even though the document also outlines a directive to speed up permitting of power plants it is still a fact that building a multi-megawatt power plant can take years.  With coal being the primary energy source for roughly 40% of US power plants you can see how a too quick implementation of rules that curtail their use can lead to problems.  Many states already operate on the edge of rolling blackouts and brownouts each summer so shutting down or limiting coal fired plants could get ugly.

Exacerbating this problem is the rapid and continuing growth of the data center market.  When these things come on line they gobble up megawatts of generating capacity in a single site...and they can come on line in a matter of months, not years.  Even if they never reach full utilization the power companies must be prepared to provide that power.  ASHRAE and others have tried, somewhat in vain, to communicate that these centers can operate without the heavy energy use of compressors or chillers.  As long as the local electric utility still has generating capacity that can be allocated to the data center this is OK...although not a very "sustainable" approach if you believe in that concept.  But, if that same utility now has to shut down 10 or 20 percent of its generating capacity then there may simply not be enough power to allow the luxury of overly cold air in the data center.

The implications for other building types are similar, although not nearly as extreme.  Systems that optimize the use of outside air as their primary cooling source, augmented by smaller compressor or chiller plants, could become the basis of design.  Concepts such as chilled beams, which utilize higher chilled water temperatures and minimal fan power, might need to migrate to smaller buildings than those where you see them today.  And building shells will need to make more extensive use of passive and active shading systems.

So, once again, the building industry is going to be impacted by external forces that may have the best of intentions but that will also require rethinking of how we design and operate those buildings.

Trusting The Weatherman

Designing to a Standard


It is an interesting fact that many projects are "over-designed".  This is nothing especially new but it seems that we are seeing more of it lately.  As an example we are currently working on a project that will be located north of Detroit but is being designed to operate at temperatures that exceed the ASHRAE 0.4% design criteria for Phoenix.  On the surface this seems to be overkill in the extreme.  The increase in capital costs for equipment that will probably never have to perform to that level could easily drive the project over budget.

The psychology behind making design decisions of that type basically indicates a lack of confidence.  The end-user chooses to ignore the ASHRAE climatic weather data and recommended design points because he or she lacks confidence in the data.  Personal experience of temperatures that exceed the published design conditions adds to the lack of confidence in the recognized standard.  ASHRAE has tried to address this by also publishing the 10, 20, and 50 year maximum (or minimum) recorded temperatures.
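For readers unfamiliar with the terminology, the ASHRAE 0.4% design condition is simply the dry-bulb temperature exceeded for about 0.4% of the 8,760 hours in a year (roughly 35 hours).  A minimal sketch of that idea, using made-up hourly data rather than the actual ASHRAE tabulation, shows how far the design point can sit below the absolute recorded maximum:

```python
import random

# Stand-in for a year of hourly dry-bulb readings (deg F); real work would use station data.
hourly_drybulb_f = [random.gauss(65, 18) for _ in range(8760)]

def design_drybulb(hourly_temps, exceedance_fraction=0.004):
    """Dry-bulb temperature exceeded for the given fraction of annual hours (0.4% by default)."""
    ranked = sorted(hourly_temps, reverse=True)
    hours_exceeded = int(round(exceedance_fraction * len(ranked)))  # ~35 hours for 0.4%
    return ranked[hours_exceeded]

print(f"0.4% design dry-bulb:     {design_drybulb(hourly_drybulb_f):.1f} F")
print(f"Absolute annual maximum:  {max(hourly_drybulb_f):.1f} F")
```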

The ASHRAE design criterion is similar to the "100 year flood" criteria that civil and site planning engineers use.  Many of us have seen, or experienced, times when the "100 year flood" line has not only been crossed but crossed multiple times.  At a recent meeting that I chaired we had a presentation by a well regarded environmental and site planning engineer.  The presentation showed how the locations of various coastal high water design lines have changed over the last few years...moving further inland and changing the flood insurance status of existing structures that were originally well outside of the potential flood area.

Could it be that the climate is actually changing as many people suggest?  Do we need to revisit our temperature design criteria more often?  The alternative is to ignore the standard and add an arbitrary "risk premium" to the design criteria...adding costs that might not be necessary.

When I was a young consulting engineer many years ago I was told to design to the ASHRAE design points.  One reason was that by using a recognized standard I could always fall back on that point as evidence that I had used proper engineering practices in my design.  The nature of mechanical equipment was such that most systems ended up over-sized anyway and could throttle their performance to meet the criteria.  If owners today add a "risk premium" to their design criteria...and then the mechanical equipment also ends up over-sized...then the capital costs and system capacities are doubly overstated.  As we move towards a market where building operating characteristics are posted by the front door, much like an automobile's gas mileage rating, the practice of arbitrarily over-sizing systems will put some owners at a disadvantage when it comes time to lease the space.


Mestex Hosts Independent Representatives at ASHRAE

Mestex Representatives Attend ASHRAE

With the annual ASHRAE/AHR meetings and exhibits in Dallas for the first time in 6 years, Mestex took advantage of the opportunity to host over 100 independent Mestex reps at the Mestex facility. We were also joined by a number of Mestek corporate employees including Stewart Reed, Mestek CEO.



Mestex DDC Dashboard
Test Area Demonstration
The reps were provided with guided factory tours that included presentations at four key areas in the plant...the gas-fired products test area, the hydronic products test area, the "Mestex Mall" show unit area, and the top secret Mestex R&D area. In addition to showcasing the extensive final test process that every Mestex product endures, the tour also highlighted the latest version of the Mestex DDC control system with full web-enabled interface and user "information dashboard".


"Dallas" Based Theme
Following the tours the reps gathered in the stage area that was set up in the plant for formal presentations on new software technologies that Mestex is introducing in 2013, a more detailed look at the DDC "dashboard", and a glimpse into a huge new sales opportunity. The presentations wrapped up with the introduction of the 2013 Sales Incentive program. The overall formal presentations were introduced by Mestex personnel who played the parts of characters from the TV series "Dallas".
Mestek Booth at ASHRAE/AHR Show

Over the following three days, Mestex personnel hosted a number of engineer and customer visits to the facility and also attended the ASHRAE/AHR show as part of the large Mestek contingent.

 

The MEP World Collides



Well, this weekend the MEP world (Mechanical, Electrical, and Plumbing) will converge on Dallas for the AHRI/ASHRAE annual meeting and trade show.  This is the one place where many engineers and contractors can view the vast assortment of products and services that are used in making the buildings we work and live in functional. 

The exhibit company says there are 3,500 booths set up in the Dallas Convention Center.  That is a lot of different companies that are invested in the construction industry...and many companies in the industry are not even showing!

Of course, our parent company Mestek will be on the show floor with a wide variety of products ranging from residential baseboard to machinery for producing ductwork for commercial buildings.  Look for the most diverse company in the HVAC industry in booths 2632 and 2845.

As for Mestex itself, since we are based in Dallas we have a unique opportunity.  On Sunday, the 27th, we have over 100 sales representatives visiting our factory to learn more about what Mestex has planned for 2013 and to tour the facilities and see some of the hardware.  Then Monday through Wednesday we will be hosting several engineer and contractor tours of our operations.

This is an exciting way to kick off the new year for Mestex and for the entire industry.  It also shows just how diverse our industry is and just how important to building occupants our industry can be.

How I Spent My Summer Vacation

It has been a while since I have posted anything to this blog...no, I was not on sabbatical on some desert island...I have been traveling around North America talking to consulting engineers, contractors, and data center owners and operators.  This posting just provides a few insights that I garnered over the last 2 months on the road.

First, the data center/mission critical market continues to occupy the minds and the design resources of many, many companies in the design community.  It is clear that this is a market segment that is vibrant and all indications are that it will continue to be for quite some time to come.  The latest issue of Datacenter Dynamics FOCUS indicated that the world is now consuming over 300 TWh annually to drive data centers, with the US consuming over 25 TWh alone.  The consumption in the US is projected to grow over 9% in 2013.  While this information points to a growing market it also points to the urgent need for improved operating efficiency in data centers.

Second, and related to the first item, is the lack of knowledge about new "best practices" in data center design.  I have talked to dozens of engineers, contractors, and data center people who are not aware of the latest design guidelines from ASHRAE.  In fairness, those guidelines were only officially announced a few weeks ago...but they have been rumored and discussed for about a year now.  I mentioned in one of my earlier posts that education of the design community is an important, and ongoing, task.  This has been reinforced to me over the last 2 months.

Third, for those engineers and contractors who understand and embrace the new standards, is the challenge of convincing the data center people to adopt those standards.  This is less of a problem at the top levels of the data center company than it is on the floor of the data center.  The IT equipment operators who live in "the white space" seem not to understand the allowable operating temperatures of the equipment that they manage every day.  I have heard many different reasons for their reluctance to adopt the new best practices but I think it comes down to fear.  Because of stringent SLAs the operators worry about losing any equipment for any period of time...even though there is mounting research that this fear is unfounded.

Fourth, I have heard of several cases where the local electric utility has started to put limits on the available service capacity for planned centers.  In the US we are so comfortable with the idea that our electric grid can provide unlimited power that we forget that is not true.  We have a fixed number of power plants with only so much generating capacity.  With the tremendous growth of data centers, and data centers with 300 to 500 watt per square foot electrical demands, there is a limit to what a utility can do.  And timing is another element of the equation.  A data center can be built in a matter of months...a power plant takes years.  So even when a utility sees the demand coming they cannot add capacity as quickly as the demand can be added.

So, these are a few observations from the last couple of months.  Of course there is more to the story and feel free to comment on this post with any questions you might have.  I will try to respond as quickly as possible.

Preaching to the Choir

Electrical Power Meters Keep Spinning
I have had a busy few weeks traveling to meetings and visiting with owners, operators, engineers, and researchers.  This has given me an interesting perspective and awareness of an issue that our industry needs to address.  My awareness of this issue was increased by an editorial in Mission Critical Magazine that bemoaned the lack of progress in data center design due to secrecy regarding "best practices".

I came away from all of those meetings with the sense that there are many very smart people who know how to design more efficient solutions to energy use in mission critical applications.  "Best practices" can be described by experts from the largest server manufacturers, global data center developers/operators, and from academia.  The issue is that we are all sitting around a large table in a closed meeting room and sharing that knowledge with others who already have a pretty good idea what to do.  We are "preaching to the choir".

The result is that the vast majority of data centers, server rooms, and telecom facilities are operating in very inefficient ways.  While a Microsoft might be able to design a data center with a 1.2 PUE the rest of the world is struggling to reach a 2.0. 
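PUE (power usage effectiveness) is just total facility power divided by the power delivered to the IT equipment, so the gap between a 1.2 facility and a 2.0 facility is easy to translate into dollars.  A rough back-of-the-envelope sketch, with an illustrative IT load and electricity rate of my own choosing:

```python
def annual_energy_cost(it_load_kw, pue, rate_per_kwh=0.10):
    """Annual electricity cost for a facility at a given PUE, assuming a flat $/kWh rate."""
    total_kw = it_load_kw * pue            # PUE = total facility power / IT power
    return total_kw * 8760 * rate_per_kwh  # 8,760 hours in a year

it_load_kw = 1000  # 1 MW of IT load, purely illustrative
for pue in (2.0, 1.5, 1.2):
    print(f"PUE {pue}: ${annual_energy_cost(it_load_kw, pue):,.0f} per year")
# At this size the spread between PUE 2.0 and PUE 1.2 is roughly $700,000 a year.
```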

This gap came out in a technical committee meeting at ASHRAE's mid-year meeting a few days ago.  A comment was made by a server cooling system manufacturer that he finds it very difficult to convince smaller users to adopt the latest operating standards that could save the user tens of thousands of dollars a year in energy costs.  This sentiment was echoed by several around the room and pointed to how difficult it has been to educate the broader public on the reliability of modern equipment in warmer rooms.

And when I say "broader public" I mean just that.  The mechanical design director for a global retail data center operator told me that he knows his equipment will run just fine at 78 or 80 degree F inlet temperatures but his customers have not gotten the message and demand a "cold" room.  It seems that until corporate IT managers and executives understand all of this we will continue to see skyrocketing energy use by data centers.  Even small server rooms could benefit from elevated temperatures if key elements of "best practices" were implemented.  So called "legacy" data centers might be difficult to retrofit but they can certainly be upgraded with the basic elements of "best practices"...if only the occupants understood what is possible.

The industry has a massive educational challenge if it is to stem the rising cost and consumption of energy.  And the education cannot come soon enough, because the projections are that server power densities will continue to climb and data storage power densities will climb even faster.  Today we talk about 300 watt per square foot densities, but systems are already being designed that push almost 10 times that density.  It may seem that we have an endless supply of power from the grid, but there are only so many power plants around the world and building a new one takes a decade or longer.  Data center power consumption grows at a much faster rate and will eventually stress grids around the world if we cannot educate the "broader public" more effectively.

Green Grid Updates Free Cooling Maps for Data Centers

The Green Grid has released White Paper #46 as an update to their "free cooling" maps for data center design and operation.  The research was edited by Emerson Network Power, Intel, and Schneider Electric. 

The reason for this update to the "free cooling" maps was the latest changes to the ASHRAE TC 9.9 operating/design guidelines for data centers.  For those who have not yet seen those new guidelines they allow a much larger operating range for data centers and server rooms that use some of the latest equipment from companies like Dell and HP.

For those of us who are "metric challenged" 40 degrees C = 104 degrees F and 35 degrees C = 95 degrees F.

When you consider that many data center operators still seem to want their rooms at 70 degrees or lower it is clear that these new criteria are a massive change in operation and design concepts.  It is also clear that adopting the newest guidelines can result in enormous energy savings.

The Green Grid paper includes a couple of maps to quickly illustrate how extensive the potential for "free cooling" has become under the latest operating/design guidelines. In these maps the darker the blue color the more hours that "free cooling" could be employed.  The darkest color blue indicates that all 8760 hours are suitable for "free cooling".  The maps also consider the coincident dewpoint temperatures as that metric is important also.

This first map is for ASHRAE Class A3 environments and shows that virtually all of North America could have their data centers cooled without using chillers or compressors.  The second map is for ASHRAE Class A2 environments and shows that roughly 80% of North America could still be cooled most of the year with no chillers or compressors.
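The idea behind the maps reduces to a simple count: for every hour of the year, check whether the outdoor conditions fall inside the allowable envelope for the equipment class and tally the hours that qualify.  Here is a minimal sketch of that count; the dry-bulb limits match the allowable upper limits cited elsewhere in this blog, while the dewpoint limit and the weather data are placeholders of my own:

```python
# Allowable upper dry-bulb limits (deg F) for two ASHRAE classes, matching figures cited
# elsewhere in this blog.  The dewpoint limit below is a placeholder, not the published value.
CLASS_DRYBULB_LIMIT_F = {"A2": 95.0, "A3": 104.0}
DEWPOINT_LIMIT_F = 70.0

def free_cooling_hours(hourly_weather, equipment_class):
    """Count hours where outdoor dry-bulb and dewpoint both fall inside the allowable envelope.

    hourly_weather: iterable of (dry_bulb_f, dewpoint_f) pairs, one per hour of the year.
    """
    limit = CLASS_DRYBULB_LIMIT_F[equipment_class]
    return sum(1 for db, dp in hourly_weather
               if db <= limit and dp <= DEWPOINT_LIMIT_F)

# Made-up weather data: 8,000 mild hours and 760 hot, humid hours.
sample_year = [(80.0, 60.0)] * 8000 + [(100.0, 72.0)] * 760
print(free_cooling_hours(sample_year, "A2"))  # 8000 (hot hours exceed the 95 F limit)
print(free_cooling_hours(sample_year, "A3"))  # 8000 (hot hours fail on the dewpoint placeholder)
```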

The question for data center operators and designers who want to implement these new temperatures is what to do about those 500 or 1,000 hours when the outside air conditions are not quite right.

It is still quite possible to operate the center with no compressors or chillers if the designer will incorporate an evaporative cooling system such as the Aztec indirect evaporative cooling system or even the Alton direct evaporative cooling system.

Since evaporative cooling systems operate using 100% outside air all the time they make an excellent "hybrid" approach.  During the many hours of the year when "free cooling" will satisfy the conditions either type of evaporative cooling system will provide cool, filtered, outside air.  The Aztec indirect evaporative cooling system has the added advantage of allowing recirculation of hot aisle air during the very coldest months when "free cooling" could actually over-cool the data center.

During those few hours of the year, however, when it is simply too warm for "free cooling" to work, the Aztec or Alton systems can automatically initiate their evaporative cooling cycles and trim the outside air temperatures down to levels that fall well within the new ASHRAE guidelines...again, with no compressor or chiller energy required.  The air leaving the evaporative cooling system will usually be about 3 degrees F higher than the wet bulb temperature.  This chart should give you an idea of the potential air temperature that an evaporative cooling system can provide.
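Since the chart itself is not reproduced here, the relationship is easy to approximate in a couple of lines: an evaporative cooling section pulls the supply air down toward the outdoor wet-bulb temperature, stopping a few degrees short (the "approach").  The sketch below uses the roughly 3 degree F approach mentioned above, alongside the more general saturation-effectiveness form; the effectiveness value and example conditions are illustrative only:

```python
def leaving_air_temp_approach(wet_bulb_f, approach_f=3.0):
    """Supply air temperature approximated as wet-bulb plus a fixed approach (about 3 F)."""
    return wet_bulb_f + approach_f

def leaving_air_temp_effectiveness(dry_bulb_f, wet_bulb_f, effectiveness=0.90):
    """Same idea expressed as saturation effectiveness: LAT = DB - eff * (DB - WB)."""
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# A hot afternoon with a 100 F dry-bulb and 75 F wet-bulb:
print(leaving_air_temp_approach(75.0))              # 78.0 F
print(leaving_air_temp_effectiveness(100.0, 75.0))  # 77.5 F
```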

The Green Grid whitepaper is just the latest in a growing number of research papers and documents that point operators and designers in a direction that can save tens of thousands of dollars and kwh if they are willing to make the investment in the latest technologies from both the IT equipment manufacturers and the HVAC equipment manufacturers.


Recently there was an interesting article published in Mission Critical magazine that addressed cooling in data centers.  More specifically the article addressed the waste that is currently happening in many, many data centers by operating the center at too low a temperature.

ASHRAE TC 9.9, at the urging of IT equipment manufacturers, has been raising the recommended and allowable temperature and humidity ranges for all types of IT equipment.  There are now certain classes of equipment that have allowable operating temperatures of 113 degrees F and 80% RH...but we still see data center designs that call for 60 to 70 degree air entering the servers.  Even the most critical classes allow temperatures of 80 degrees F and 60% RH.

One of the reasons that is often presented for operating the center at such low temperatures is reliability.  There is now research that suggests that this is not a valid concern.

The article in Mission Critical magazine, authored by Mark Monroe, cites a few interesting bits of information. 

Using the Arrhenius model for predicting MTBF (mean time between failures), raising the server inlet temperature from 77 degrees F to 104 degrees F reduced the MTBF from 15 years to 13 years...both probably well beyond the replacement cycle for the servers.  Given that prediction, why run servers at even 80 degrees?
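For those unfamiliar with it, the Arrhenius model treats failure rate as accelerating exponentially with absolute temperature, so the MTBF at a warmer inlet temperature is just the baseline MTBF divided by an acceleration factor.  The sketch below shows the general form; the activation energy is a placeholder of my own, picked only so that the output lands near the 15-to-13-year change cited above, and it is not the value used in the article:

```python
import math

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(t_base_c, t_new_c, activation_energy_ev):
    """Arrhenius acceleration factor between two operating temperatures.

    A factor above 1.0 means failures arrive faster (lower MTBF) at the warmer temperature.
    """
    t_base_k = t_base_c + 273.15
    t_new_k = t_new_c + 273.15
    return math.exp((activation_energy_ev / BOLTZMANN_EV_PER_K)
                    * (1.0 / t_base_k - 1.0 / t_new_k))

# 77 F -> 104 F is 25 C -> 40 C.  The 0.08 eV activation energy is a hypothetical placeholder.
factor = arrhenius_acceleration(25.0, 40.0, 0.08)
print(f"Acceleration factor: {factor:.2f}")
print(f"MTBF scaled from a 15 year baseline: {15.0 / factor:.1f} years")
```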

A second study, by E. Pinheiro, W.-D. Weber, and L. A. Barroso (2007), "Failure Trends in a Large Disk Drive Population," determined that there was no discernible relationship between disk drive failures and operating temperature.

Finally, Intel provided information to ASHRAE that allowed creation of a reliability factor calculation that was time and temperature based.  This "X-Factor" could be used to estimate changes in reliability from a baseline temperature of 68 degrees F.  The interesting thing that comes from this is that using an indirect evaporative cooling system, such as the Aztec ASC product line that can provide server inlet temperatures lower than 68 degrees F for the vast majority of the year, could actually increase reliability according to the algorithm.

The potential operating cost savings are huge.  Switching to the Aztec system that offers 100% outside air cooling most of the year, with supplemental evaporative cooling during the extreme highs, could save $67,000 per 1,000 kW of IT load for the average data center in the US...according to the information in the article.
Prineville Server Farm with "free cooling"
Data Center Dynamics is an international organization with a single mission of sharing best practices among data center designers and operators around the world.  The organization publishes a trade magazine called "Focus" and they have just released their January, 2012 edition.  This edition is a retrospective look at 2011.

One of the articles included comments from some of the industry's leading players in response to two questions:  "What was the most important data center development of 2011?" and "What single advancement will most positively impact the data center sector in 2012?"

Some of the responses were:

Bill Kosik; Principal data center energy technologist, HP Enterprise Business Technology Services:

"For the first time in 2011, many of our clients wanted to implement a design temperature of 75 degrees F for the inlet air to the IT equipment."  "When you couple increased supply air temperatures with ultra-efficient air-conditioning equipment (indirect evaporative cooling as an example), you start to see PUEs drop into the low 1.2s/upper 1.1s..."

Andrew Donoghue; Analyst, The 451 Group:

"ASHRAE released a white paper....redefined and reclassified new allowable ranges up to 113 degrees F.  Higher operating temperatures could mean that new facilities can be built without the need for expensive cooling technology, such as mechanical chillers."

Dileep Bhandarkar; Distinguished engineer, Global Foundation Services, Microsoft:

"Broad recognition across the industry that free air cooling technology is now considered mainstream."

Jim Hearnden; Product technologist, data center power and cooling, Dell Services:

"Newer technology will permit higher server intake temperatures, which will be a great step forward in 2012."

The common thread through all of the comments is the drive to lower energy costs by raising server inlet temperatures.  Most of the more advanced companies are even going to the point of using 100% outside air with no tempering at all.  Aztec indirect evaporative cooling systems from Mestex, a division of Mestek, offer an alternative that filters and cools the air down to within 2 degrees of the wet bulb temperature (usually in the 70 to 80 degree range).  This allows the designer and operator to have acceptable server inlet temperatures and still have a very low PUE.  For installations that still need some degree of control over the air temperature and desire filtered, clean, air this might be the best solution.

Ten Reasons to Tone Down on Climate Control

Sometimes the best thing to do is to acknowledge when someone else does something right.  In this case Nicholas Greene posted an article on TechAxcess that sums up the ten reasons why data center designers and operators should adopt modern cooling design criteria for their centers.  Nicholas clearly articulates the reasons and I cannot improve on what he says.

I can only reinforce the message that there are alternative cooling methods that can provide cool...not cold...filtered, clean, air for data centers.  Aztec Indirect Evaporative Cooling Systems from the Mestex division of Mestek can send 65 to 80 degree air to the cold aisle and allow the designer to exhaust 100% of the heat from the hot aisle without using any refrigerants or compressors.  The result is cold aisle conditions that meet the latest ASHRAE TC 9.9 criteria and that address the issues that Nicholas covered in his article.

As Nicholas says...it is time for operators and designers outside of the "big names" to get on board and start implementing these energy saving technologies.

Agency Anarchy

The AHRI annual meeting always presents interesting bits of information regarding things that not only impact the HVAC industry but, ultimately, the consumer.  This year is no different and the unfortunate truth is that few people outside of the meeting attendees ever hear about what is about to happen to them.

So what is the story this year?  The story is legislative gridlock leading to "agency anarchy".  AHRI and environmentalists have reached a consensus agreement that both industry and the public could live with regarding energy efficiency for residential and commercial HVAC equipment.  Before that consensus agreement can become the law of the land it must be ratified by the US Congress...and that is where the problems began.  The consensus is hopelessly stuck in Congress and going nowhere for the foreseeable future.  So into the void steps the Department of Energy.  The DOE has the authority to create requirements and that is what they are doing.

DOE is setting efficiency standards and starting to require verification tests independently of the long-established AHRI standards and verification procedures.  Manufacturers are faced with duplicate standards and duplicate testing.  All of that testing costs money, and ultimately those costs are passed through in the cost of the equipment.

Further complicating the issues and adding costs are states that also see the void and step in with their own requirements.  Some of these states are looking to Europe where standards are moving even faster to increase efficiency and decrease environmental impacts.  While these are worthy goals the lack of consistency creates uncertainty for manufacturers.  Manufacturers will often simply design to the worst case scenario and leave all customers facing higher costs.

More standards are coming from other sources as well.  ASHRAE is proposing revisions to their Standard 90.1 that would require a 50% improvement in building efficiency and all but outlaw certain types of equipment.  The Canadian province of British Columbia is proposing codes that would require manufacturers to provide a means of recycling HVAC equipment...again raising costs.

The bottom line is that costs are under all kinds of hidden pressures that will ultimately land on the consumer.

Dell Talks High Temperature Equipment


Although the new ASHRAE TC 9.9 Temperature Guidelines for Data Centers have not yet been published some IT equipment manufacturers are already jumping on the new criteria.

Dell Computers of Austin, Texas have stated that much of their product line is already suitable for the new A3 and A4 temperature ranges being published by ASHRAE later this year. Dell has gone further to state that they will warranty those products up to A4 levels (113 degrees F and up to 90% RH) even though the products were purchased within the last few months...prior to public exposure of the new guidelines.

Although many in the IT industry have predicted that class A3 and A4 equipment will be more expensive it appears that Dell is challenging that thought with this action on some of their legacy equipment.

As always there are the caveats regarding proper spacing of equipment and installations in hot aisle/cold aisle environments with proper airflow but it appears that the future of class A4 equipment is closer than we thought.

Server Reliability and Outside Air Cooling


As ASHRAE and the IT industry have been pushing the temperature and humidity boundaries for servers and IT equipment higher and higher one of the inevitable questions from data center operators is “what about failures?” There is an assumption that these higher temperatures will lead to much greater downtimes…a situation that data centers cannot afford.

So as part of their work prior to publication of the new standards later this year, ASHRAE TC 9.9 developed a simple tool for estimating the increase in failure rates from various inlet temperature strategies. This tool was developed by Intel based upon their history with server applications of their chips.

The methodology will be quite familiar to most HVAC engineers as it relies upon ASHRAE Bin Hour weather data. Factors were developed called “x-factors” (not related to the TV program) that reflect the relative reliability in a particular temperature bin compared to the baseline server inlet temperature of 20 degrees C, or 68 degrees F. If the x-factor is less than 1.0, then reliability is considered to be better. If the x-factor is greater than 1.0 then reliability is considered to be worse.

However the committee recognized that the latest data center design practice uses outside air economizers or evaporative cooling instead of mechanical cooling. The result of this practice is that the server inlet temperature will vary with outside air temperature. X-factors were developed for six temperature bins from 15 degrees C (59 degrees F) up to 45 degrees C (113 degrees F). Each bin is 5 degrees C wide (9 degrees F) to account for the temperature rise through the outside air handler or inefficiencies in the air distribution system.

The research behind the methodology indicated that the effect of operating at various temperatures is additive. In other words, operating for 100 hours above 20 degrees C (x-factor over 1.0) and then operating at 100 hours below 20 degrees C (x-factor below 1.0) can yield the same reliability as operating for 200 hours at a steady 20 degrees C.

So the approach to calculating the reliability of the servers is:

  1. determine the total number of hours above 15 degrees C for a particular geographical location
  2. divide the number of hours within each of the six bins by the total hours to get the percentage of operating hours in each bin
  3. multiply the percentage of hours times the x-factor for that bin
  4. add the results to get the composite x-factor for the location
  5. multiply the current failure rate by the composite x-factor to get the new failure rate

There are several examples in the Appendix of the ASHRAE TC 9.9 white paper that show that even very hot, or hot and humid, cities end up with reliability figures that are better than one might expect. For example, the composite x-factor for Phoenix in their examples ranged from 1.2 to 1.4 (depending upon the cooling method). If the normal failure rate for data centers in Phoenix is 0.2% then the outside air cooled data centers would have failure rates of 0.24% to 0.28%. Looking at this another way, if a normal data center in Phoenix had 1,000 servers then they would normally lose 2 servers in a year. By switching to outside air or an evaporative cooling system such as Aztec the algorithm would predict losing 2.4 to 2.8 servers per year…less than one additional server per year.
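As a concrete illustration of the five steps, here is a minimal sketch with hypothetical bin hours and x-factors (the real x-factors come from the TC 9.9 white paper and the real bin hours from local weather data), finishing with the same final step used in the Phoenix example: multiplying a baseline failure rate by the composite x-factor.

```python
# Hypothetical bin hours (hours per year with outdoor air in each 5 C bin) and x-factors.
# Real x-factors come from the ASHRAE TC 9.9 white paper; real bin hours from local weather data.
bins = [
    # (bin label,            hours, x-factor)
    ("15-20 C (59-68 F)",     2500, 0.90),
    ("20-25 C (68-77 F)",     2000, 1.00),
    ("25-30 C (77-86 F)",     1500, 1.10),
    ("30-35 C (86-95 F)",     1000, 1.25),
    ("35-40 C (95-104 F)",     700, 1.40),
    ("40-45 C (104-113 F)",    300, 1.55),
]

total_hours = sum(hours for _, hours, _ in bins)                      # step 1
composite_x = sum((hours / total_hours) * x for _, hours, x in bins)  # steps 2-4

baseline_failure_rate = 0.002                         # 0.2%, as in the Phoenix example above
new_failure_rate = baseline_failure_rate * composite_x                # step 5

print(f"Composite x-factor:     {composite_x:.2f}")
print(f"Adjusted failure rate:  {new_failure_rate:.2%}")
```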

More detail can be found in the TC 9.9 white paper. The paper can be downloaded from http://tc99.ashraetcs.org/ .

New ASHRAE Temperature and Humidity Guidelines


At the recent ASHRAE Annual Meeting in Montreal there were a number of sessions related to data centers. One of the sessions included a presentation by Robin Steinbrecher of Intel Corporation. As a member of the ASHRAE TC 9.9, Robin was selected to present the upcoming new temperature and humidity guidelines that are due for publication this fall.

The 2011 expanded range is split into six types of IT equipment applications. Four of the six are directly related to data centers and larger server rooms. Those four classifications are listed as A1 through A4. Class B is for office desktop, home computer, and laptop environments. Class C is for Point-Of-Sale or industrial, ruggedized, computers.

The A1 class is defined as enterprise level, mission critical, servers. These might be used for financial institutions or some government installations. Class A2 is a typical IT space for office, lab, or typical corporate installations. Classes A3 and A4 are similar to A2 but with more robust servers that are designed to operate at higher temperatures.

The new temperature and humidity guidelines are quite broad:

Class C has an allowable upper limit of 104 degrees F and 80% RH.
Class B has an allowable upper limit of 95 degrees F and 80% RH.
Class A4 has an allowable upper limit of 113 degrees F and 90% RH.
Class A3 has an allowable upper limit of 104 degrees F and 85% RH.
Class A2 has an allowable upper limit of 95 degrees F and 80% RH.
Class A1 has an allowable upper limit of 89.6 degrees F and 80% RH.


There are a few caveats attached to these new, higher, limits such as proper hot aisle containment in data center type applications or purchasing A3 or A4 rated equipment where appropriate. But the bottom line is that the old way of designing data centers and cooling data centers is quickly being abandoned by even the most conservative of designers.
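Those upper limits are easy to capture in a small lookup table for anyone writing monitoring or alarm logic.  The sketch below simply encodes the allowable upper limits listed above and flags whether a measured inlet condition is inside a given class envelope; it is illustrative only, since the full envelopes also include lower limits and dewpoint criteria not shown here.

```python
# Allowable upper limits as listed above: (dry-bulb deg F, % relative humidity).
ALLOWABLE_UPPER_LIMITS = {
    "A1": (89.6, 80),
    "A2": (95.0, 80),
    "A3": (104.0, 85),
    "A4": (113.0, 90),
    "B":  (95.0, 80),
    "C":  (104.0, 80),
}

def within_allowable(equipment_class, inlet_temp_f, rh_percent):
    """True if the measured inlet condition is at or below the class's allowable upper limits."""
    max_temp_f, max_rh = ALLOWABLE_UPPER_LIMITS[equipment_class]
    return inlet_temp_f <= max_temp_f and rh_percent <= max_rh

print(within_allowable("A2", 92.0, 60))  # True
print(within_allowable("A1", 92.0, 60))  # False: above the 89.6 F class A1 upper limit
```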

There will be additional articles based on the ASHRAE Annual Meeting data center presentations.

Have you ever wondered where all that power goes in a data center?

Data Center Power Use
This diagram illustrates how the various components in a typical data center, designed according to old practice, contribute to overall data center energy use.

The interesting, and somewhat disheartening, thing for managers and operators of old-school data centers is that, of the power available from the local utility, the center can use only 30% of it for the servers. This is disheartening because the IT equipment is the only part of this diagram that generates revenue or provides a service to the center's customers. The remaining 70% is nothing but an expense...and a big one at that.

That is why modern data center designs have started to use evaporative cooling or economizer cooling whenever possible. ASHRAE raised their temperature guidelines in order to keep up with the server industry that is working hard to let data center operators shift more of that wasted 70% over to the revenue producing part of the diagram. With economizer cooling almost the entire top 45% of the diagram goes away. Considering that you must still run fans to move the air into the space it is probably still safe to say that the revenue producing portion of the diagram can grow from 30% to 66%.

In many cases, though, the available outside air temperatures are just a bit too high to allow full economizer cooling. In those cases, the most energy efficient alternative is evaporative cooling. By applying evaporative cooling, rack inlet temperatures can be well within industry guidelines almost anywhere in the world...and the energy consumption is similar to the pure economizer solution.

Evaporative cooling gives the data center operator the best of both worlds...rack temperatures closer to old-school temperatures...and energy consumption closer to pure economizer solutions.

When using a fully configured evaporative cooling system from a company like Aztec the operator also gains a factory-assembled and pre-tested system that is basically "plug and play". Connecting water and power is all that is needed to get the system up and running...allowing the center to start producing revenue faster. If web-enabled communication with the Aztec unit is desired then adding the unit to a local area network can be as straightforward as adding a printer. Each unit is configured with its own IP address and the entire installation can be password protected.