
The Texas Experience

Once again it has been many months since I have posted anything. Since my last post we have obviously been dealing with the pandemic, as well as learning the impact of weak cybersecurity policy. Although I could probably write a novel about the cybersecurity issue, there are far more expert people on that topic.

Today I want to go back to the original purpose of this blog, which was to capture and share some random thoughts related to the industry.

One of the latest events that started the wheels turning in my head again was the partial shutdown of the Texas electrical grid. Again, there are experts in grid design and operation who can, and have, discussed this event in great technical detail. There has also been the now-expected political finger pointing and partisan debate. But I wanted to share some random thoughts about what this "localized" event might teach us.


Many engineers, especially those involved in critical HVAC infrastructure, are already aware of the fragile nature of the electrical grid in many parts of the US. At Mestex we provide systems to many different types of applications where a power outage could have a costly impact. Preparing for those potential failures was usually someone else's problem: the electrical engineers and suppliers of standby power systems were expected to handle the "short" periods without electrical power. Although we researched ways to integrate backup operation into our equipment, the results would have been too expensive to be marketable. What Mestex has continued to focus on is applying systems that use "site resources" efficiently in order to reduce the demands on the backup systems.

Looking at the broader picture, we have seen a global trend toward "electrification". The goal, it seems, is to reduce greenhouse gases and other atmospheric pollutants that contribute to climate change. While some are still skeptical about climate change, it has reached the point of consensus among scientists around the world, and many countries and large corporations are on board with taking steps to mitigate their impact. Electrification is intended to move the source of pollutants away from the "site" where power is used to the "source" where power is generated. In theory this allows better control of contaminants at a single point instead of at hundreds or thousands of "site" points. It also facilitates the use of alternative energy sources, such as wind generation, that would be difficult to implement at the "site" level. So we have a relative rush to require electric vehicles, electric residential heating systems, and even electric commercial/industrial heating.

At the same time that electrification is moving forward in areas the average person can see, our ever-increasing digital life is converging with our daily power-consuming life. Data centers are being built and turned on almost daily around the world. To the average person this is great: it means they are always connected no matter where they go. It also helps the electrification effort by enabling sophisticated remote traffic management, demand-controlled power distribution, and "smart" home appliances. But the electrical power consumed by data centers is almost mind-blowing to the average person. A single, moderately efficient, small data center can consume as much electrical power as five thousand homes. Clusters of large data centers (common due to scale and locale) can draw enough electrical power to support entire towns, or even entire less-populated countries.

When these data centers or data center clusters are inserted into an already fragile electrical grid, they add a strain that was not anticipated when the power station was designed 20 or 30 years ago. Data centers can be designed, built, and activated in months, versus power stations that require years to complete. A mismatch of power supply and power demand is inevitable.

It seems to me that part of what the Texas experience showed us is, first, that electric power is critical to basic life-support facilities such as water and sanitation. My second thought is that as much thought and research should go into the development of highly efficient, "clean" "site" energy systems as into the electrification idea. Off-loading the grid with effective "site" solutions could help balance supply and demand on the grid. Many large companies have already taken steps with solar arrays over their parking lots, small-scale wind generators on site, or private co-generation plants. In most cases, though, these are extremely expensive solutions, and their implementation has been driven as much by corporate "green" initiatives as anything.

Companies should also not lose sight of current technologies that are still viable "site" solutions and counterbalances to grid overloads. Although Mestex has transformed itself over the last few years, now generating more revenue from cooling solutions than from its traditional natural gas heating solutions, most people still consider the company to be a gas heating company. In applications that require large amounts of outside air, or that simply move huge volumes of air that must be heated, a modern and efficient natural gas heating system is a much more "climate friendly" "site" solution than an equivalent electric-heat "source" solution. Mestex can provide such systems based on decades of manufacturing experience and research into optimized digital control.

Engineers and companies can meet their goals of responsible environmental stewardship by keeping in mind the contribution of "site" solutions as they work through the transition to greater electrification.

Disrupting Disruptive Technologies

I recently had a chance to investigate an interesting product concept that has the potential to significantly change the way some systems are cooled.  The term "disruptive" came to mind and that caused me to think about "disruptive technologies" and what it sometimes takes to bring them to fruition.

It seems that there are factors that stand in the way of rapid commercialization of products that have the potential to truly change an industry.

Fear....a technology or product that can disrupt an industry will tend to be resisted by the incumbents in that industry.  The greater their investment in their current technology, the harder they will push back against the new one.  The fear of losing their market to the "upstart" leads either to trying to ignore it and hoping it goes away, or to actively looking for weaknesses and promoting heavily against it.  In the HVAC industry, and many others, the "upstart" is usually a small company without the staying power to eventually convince an incumbent company to take a chance on its technology.  Eventually, the "upstart" does go away from lack of resources to sustain the fight.

Liability...in order for a new product to see the light of day in the construction industry, in particular, it must be specified or recommended by consultants or owner influencers.  Consultants, especially, pay large premiums for professional "errors and omissions" insurance.  The definition of "errors" extends beyond simply making a mistake in a calculation to failure to use "industry standard" practices.  Since most disruptive technologies do not come to the market with vast installed bases that would begin to qualify them as an "industry standard" practice, consultants will shy away from specifying them.

Collateral consequences....sometimes the technology can be proven in laboratory or field trial cases successfully but widespread adoption requires the participation of other industries and their decision makers.  If the new technology requires an unrelated industry to make significant changes to how they do business, or design their products, the disruptive technology could end up stillborn due to the actions...or inactions...of companies outside of their own industry.

Finally...communication...by their nature, disruptive technologies are ones that others have not considered or visualized.  If the inventor of the new thing and the team behind it cannot clearly and demonstrably communicate the value of the product, then it will either go nowhere or take an extremely long time to find acceptance.  In Mestex's participation as a sponsor of a National Science Foundation research consortium, we see this almost every month.  Teams of extremely bright young engineers and scientists work diligently to come up with the "next big thing" but fail to communicate the value to companies that could commercialize the idea...if they only understood it clearly.

It is often said that the construction industry has been the slowest to adopt new techniques and is among the least efficient industries.  Perhaps the ideas to change that are already incubating or being tested...but fear, liability, collateral consequences, and communication are standing in the way.

Quality

Over the years that I have been at Mestex I have marveled at how well we have been able to control our warranty expenses.  Having come from one of the large HVAC product manufacturers, where it was not uncommon to have warranty expense at 3 to 4 percent of sales, our average of less than 1 percent of sales is extraordinary.

Of course, this level of product quality does not happen by accident.  Selecting components that are designed for long life, using material gauges that are one grade heavier than most competitors', constructing many products with welded steel frameworks... all contribute to products that are designed to last.  But a great design can fall down at the production level, so we have also implemented laser alignment systems, rotating component balancing systems, multi-point functional testing of every product that leaves the building, and compliance with all industry standards for safety.  We are confident that when a product leaves our building it has been built to a high standard for longevity, service, and efficiency.

But there are standards outside of our normal industry standards that can improve our quality beyond even the product itself.  One of those internationally recognized standards is, of course, ISO 9001.  Reaching into all elements of our business and documenting how we do things, in an effort to make all phases of the business better, is an expected result of attaining an ISO certification.  Mestex is proud to announce that we have received such a certification and are now an ISO 9001:2008 certificate holder. (Certificate No.: TRC 00937 issued to Mestex, Dallas)

We don't intend to stop there however and we have already started work on attaining an ISO 14001 certification.  This process is targeted at our environmental practices and policies.  Although Mestex has taken major steps over the last few years to reduce our environmental impact we believe that there is more that we can do and that is our next target.

Kit Car or Ferrari?



Sticking with my motorsports analogies for a while...do you want to buy a "kit car" or a Ferrari even if the Ferrari costs more?


I am a big fan of Top Gear and watch most every episode at least once.  From time to time they will air an episode that features some sort of "kit" or one-off vehicle from a small manufacturer or garage.  Oftentimes these vehicles have breathtaking performance and attractive costs when compared to a Ferrari.  But they usually end the broadcast with the conclusion that, great as the "kit" car might be, they would not buy one.

Time after time these "one-off" vehicles break during Top Gear's testing.  When that happens the crew has to wait for unique parts to be delivered or sort out the problem without the benefit of an owner's manual or factory service department to call.  Even in cases where the vehicle performs well they find that it cannot be licensed for the street or has no DMV certifications.  So they conclude that a buyer would be better off to spend a bit more and purchase a vehicle that is tested, certified, and supported by a company that is large enough to stand behind their products for the long haul.  Both solutions can provide exhilarating transportation but only one can be counted on to provide that transportation for as long as the owner keeps it.

The same question should be asked regarding HVAC equipment.  There are many small businesses that can purchase components, sub-contract the sheet metal, and hire temporary help to assemble units.  But the end user is basically getting the HVAC version of a "kit car".  The first cost might be attractive, but certifications, IOMs, application and testing expertise, technical support, and supplier financial strength are all typically important missing ingredients.

For some speculative buildings the "kit car" solution might be chosen because the developer knows he will soon pass the potential problems off to the new owner.  But for larger corporate owners and developer/owners is the "kit car" solution really the right answer?  Those end-users are investing in a building that will last years...they deserve an HVAC solution that matches that time frame from a company that will also be around to provide support for the duration.


Politics and the Building Industry

Climate Change Initiative


Coal Fired Power Plants in Danger
I am not sure how many folks listened to President Obama's speech this week regarding climate change initiatives.  I know that I was not one of them.  However, I have read the document that served as the background for the speech, and there are some things in it that folks in the building design community, and the mission critical world in particular, should pay attention to.  Those things could have a significant impact on the types of systems that we can design and implement in the coming years.

The theme of the speech and the document is primarily reduction of carbon emissions and increases in "renewable" sources of energy.  There are some other things in the document that are focused on electric generation infrastructure.  However there is a potentially ominous element to that topic that is related to the overarching goal of reducing carbon emissions. 

By means of a "Presidential Memorandum", Mr. Obama has instructed the EPA to accelerate transitioning power plants to "clean" energy sources, i.e., anything but coal.  As we have seen in other cases in the HVAC industry, as soon as the EPA has a mandate of that sort they move quickly to implement regulations that may, or may not, be carefully thought out, raising the old "unintended consequences" issue.

In my opinion the danger is removing significant generating capacity from the grid at a pace that cannot be matched on the construction side.  Even though the document also outlines a directive to speed up permitting of power plants, it is still a fact that building a multi-megawatt power plant can take years.  With coal being the primary energy source for roughly 40% of US power generation, you can see how a too-rapid implementation of rules that curtail its use can lead to problems.  Many states already operate on the edge of rolling blackouts and brownouts each summer, so shutting down or limiting coal-fired plants could get ugly.

Exacerbating this problem is the rapid and continuing growth of the data center market.  When these facilities come on line they gobble up megawatts of generating capacity at a single site...and they can come on line in a matter of months, not years.  Even if they never reach full utilization, the power companies must be prepared to provide that power.  ASHRAE and others have tried, somewhat in vain, to communicate that these centers can operate without the heavy energy use of compressors or chillers.  As long as the local electric utility still has generating capacity that can be allocated to the data center this is OK...although not a very "sustainable" approach if you believe in that concept.  But if that same utility now has to shut down 10 or 20 percent of its generating capacity, then there may simply not be enough power to allow the luxury of overly cold air in the data center.

The implications for other building types are similar, although not nearly as extreme.  Systems that optimize the use of outside air as their primary cooling source augmented by smaller compressor or chiller plants could become the basis of design.  Concepts such as chilled beams that utilize higher chilled water temperatures and minimal fan power might need to migrate to smaller buildings than you see them in today.  And building shells will need to make more extensive use of passive and active shading systems.

So, once again, the building industry is going to be impacted by external forces that may have the best of intentions but that will also require rethinking of how we design and operate those buildings.

Trusting The Weatherman

Designing to a Standard


It is an interesting fact that many projects are "over-designed".  This is nothing especially new, but it seems that we are seeing more of it lately.  As an example, we are currently working on a project that will be located north of Detroit but is being designed to operate at temperatures that exceed the ASHRAE 0.4% design criteria for Phoenix.  On the surface this seems to be overkill in the extreme.  The increase in capital costs for equipment that will probably never have to perform to that level could easily drive the project over budget.

The psychology behind design decisions of that type basically indicates a lack of confidence.  The end-user chooses to ignore the ASHRAE climatic weather data and recommended design points because he or she lacks confidence in the data.  Personal experience of temperatures that exceed the published design conditions adds to the lack of confidence in the recognized standard.  ASHRAE has tried to address this by also publishing the 10-, 20-, and 50-year maximum (or minimum) recorded temperatures.

These criteria are similar to the "100 year flood" criteria that civil and site planning engineers use.  Many of us have seen, or experienced, times when the "100 year flood" line has not only been crossed but crossed multiple times.  At a recent meeting that I chaired we had a presentation by a well regarded environmental and site planning engineer.  The presentation showed how the locations of various coastal high-water design lines have changed over the last few years...moving further inland and changing the flood insurance status of existing structures that were originally well outside the potential flood area.

Could it be that the climate is actually changing as many people suggest?  Do we need to revisit our temperature design criteria more often?  The alternative is to ignore the standard and add an arbitrary "risk premium" to the design criteria...adding costs that might not be necessary.

When I was a young consulting engineer many years ago I was told to design to the ASHRAE design points.  One reason was that by using a recognized standard I could always fall back on that point as evidence that I had used proper engineering practices in my design.  The nature of mechanical equipment was such that most systems ended up over-sized anyway and could throttle their performance to meet the criteria.  If owners today add a "risk premium" to their design criteria...and then the mechanical equipment also ends up over-sized...then the capital costs and system capacities are doubly overstated.  As we move toward a market where building operating characteristics are posted by the front door, much like an automobile's gas-mileage rating, the practice of arbitrarily over-sizing systems will put some owners at a disadvantage when it comes time to lease the space.


So You Think Your Critical Cooling System Is Reliable?

An HVAC system with 3 components in series

System Reliability Versus System Complexity

 
I just read an article in Engineered Systems magazine that reminded me of one of my own blog postings from a few months ago.  The difference is that the author in the ES magazine article went all mathematical on us and showed the formulas for calculating HVAC system reliability given the reliability of the individual components in the system.  Although it was a long time ago I remember going through the mathematical exercise in one of my engineering courses back at the University of Texas...so we have all known about this procedure for a very long time.

During this same week I have been asked to do a competitive analysis on a "new" system concept compared to one of our systems.  While I could name names, that is not the important point of this posting.  What struck me about the competitor's "new" system concept was just how many parts were required to accomplish the task of providing "free cooling".  Many of those components had dependencies that meant the proper reliability analysis was the "series" analysis.  You can refer to the latest issue of Engineered Systems if you don't remember what that means, but in keeping with the simple approach of my postings, the bottom line is that the reliability of a "series" of components is the product of the individual components' reliabilities multiplied together.  In other words, if the "new" system concept required 4 compressors staged in series, a direct drive exhaust fan array, a direct drive supply fan array, a sensible heat wheel motor, sensible heat wheel belt, sensible heat wheel bearings, digital control module, etc...and we gave each of those items a reliability of 98% (which sounds pretty good and is generous for some of the items in the chain)...then the "system" reliability would be:
 
.98 x .98 x .98 x .98 (compressor section) x .98 (exhaust fan) x .98 (supply fan) x .98 (sensible heat wheel motor) x .98 (sensible heat wheel belt) x .98 (sensible heat wheel bearings) x .98 (digital control module) = .817

So, the "new" system concept under this scenario would actually have a reliability of only 81.7%...not quite so good I think you would agree. 

The information from this example comes directly from the competitor's product literature...and I left out some components for simplicity's sake.  The .98 component reliability was an estimate, and you can plug in whatever numbers you think are accurate.  The important thing is to recognize that the more complex the system, the lower its reliability will be.
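The series calculation above is simple enough to script so you can plug in your own component estimates.  Here is a minimal sketch; the component list and the 0.98 figures are the illustrative estimates from the example, not measured values.

```python
from math import prod

def series_reliability(component_reliabilities):
    """Reliability of components in series: the product of the
    individual reliabilities (each a value between 0 and 1)."""
    return prod(component_reliabilities)

# Ten components at an assumed 98% reliability each, as in the
# example: 4 compressors, exhaust fan, supply fan, heat wheel
# motor, heat wheel belt, heat wheel bearings, control module.
system = series_reliability([0.98] * 10)
print(f"{system:.3f}")  # → 0.817
```

Replace the list with your own per-component estimates to see how quickly added complexity erodes system reliability.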

In a previous blog posting I quoted Albert Einstein, who said "make everything as simple as possible, but not simpler."  I still think Albert was a pretty smart guy, and when I look at some of the design solutions being proposed today for data centers, pharmaceutical warehouses, or cooling in general, I just have to wonder why we sometimes design such complex solutions.

Remember..."it is not sustainable if it is not maintainable"...and, as a corollary to that statement, "it is not maintainable if it is too complex and has too many parts".

Shading and Make-Up for Building Designers

I just returned from a meeting in Florida and I was reminded of a couple of basic concepts that apply to virtually all building designs.

Our meeting room faced an outside wall with a couple of French doors to a nice patio area.  The weather was unusually cool for Florida and everyone sitting on that side of the table was able to experience that coolness first hand...even though the doors were closed.  During breaks the smokers in the group would gather on the patio and, again, in spite of closed doors the meeting room started to smell of cigarette smoke.

The problem, of course, was a lower pressure in the meeting room compared to the outdoors.  Somewhere in the conference center there was an exhaust system churning away without a counterbalancing make-up air system.  The only way that the exhaust system could satisfy its demand for air was to pull that air from outside the building, through the conference room, to its final point of exit.  Cold, smoke-laden air was drawn into the meeting room and occupant comfort was compromised.  Simply adding a make-up air system similar to the Applied Air DFL-series would have improved the indoor environment and cost very little extra to operate.  Remember that all of that cold air that was being sucked into the building caused the occupants to raise the thermostat set-point in order to compensate for being cold and forced the large main air handlers to operate for more hours than necessary.  Maintaining a positive pressure in buildings controls infiltration of smoke, dust, and un-tempered air and it is relatively simple to achieve.

The other basic concept that popped into my head is how important the building envelope is to controlling operating costs.  This particular resort was built many years ago but employed some pretty effective passive shading for the guest rooms.  My room had a wall of windows for natural light and a view but had a deep setback that prevented direct solar radiation.  This deep setback meant that the air conditioning system would see far fewer operating hours than an unprotected glass exposure would allow.  Since solar radiation is also a significant portion of the building cooling load the setbacks allowed a reduction in HVAC equipment size as well.

Building design has changed since the days when this hotel was built and deep setbacks are much less common.  But effective solar shading is still feasible through the use of external shades and louvers.  External shade technology has advanced to the point where it is possible for the shades to track the location of the sun and automatically provide continuous reduction in solar radiation.  Some external shading systems such as those developed by Colt Group actually contain photovoltaic cells that can reduce the building electrical demand by more than providing shading alone.

So two basic concepts for sustainable building design:  maintaining a positive indoor pressure to eliminate unwanted and untreated outside air from entering the occupied areas; and using modern external shading technology to reduce the solar load in the occupied areas which, in turn, reduces operating and capital costs.

The Changing Face of Real Estate

One of my more enjoyable activities that I have is to act as chairman for a developers forum as part of the NAIOP organization.  This activity provides insights into the thinking, planning, and expectations of commercial and industrial property developers and owners across North America.  At our annual meeting a couple of months ago there were many presentations and discussions that focused on 2013 and beyond.  I thought I would share just a few of the points from that meeting.

The NAIOP Research directors provided some interesting factors to consider going forward that tended to revolve around the way technology is changing the office and industrial markets.  E-commerce is projected to have a negative impact on mom-and-pop retail and small start-up retailers until the housing market makes a big recovery, according to Cassidy Turley-Terranomics.  They went on to say that while middle market retailers will continue to struggle, the luxury and discount retailers will continue to expand and open new retail and distribution facilities. 

Speaking of distribution facilities, Jones Lang LaSalle indicated that they believe distribution center users will continue to push for higher bays...up to at least 36 clear feet...in order to increase efficiencies in handling e-commerce transactions.  Another interesting impact of e-commerce, highlighted by IMS Worldwide and by Liberty Property Trust, is the changing real estate requirements for an e-commerce focused distribution center.  The number of transactions per day in an e-commerce site can be 10 times greater than for a traditional distribution center.  Each of those transactions must be touched by someone, so the number of employees in an e-commerce center is much higher.  Parking for up to 1,000 cars in addition to trucks means the land required for these centers can be 40 or 50 acres greater than for a comparable "traditional" distribution center.  Implied in this scenario is also the need for a temperature controlled work environment for those 1,000 workers instead of the old "just keep the pipes from freezing" distribution or cross-dock environment.

Another impact of technology and e-commerce is that a DC ("distribution center") for e-commerce has an element of "mission critical" to it in order to process all of the transactions.  Developers and users of these new types of distribution centers look for locations that have reliable fiber optic and cable network access, as well as dual primary power substations in order to minimize downtime in the event of a power disruption.  Other location related decision criteria include being in a right-to-work state and in a state that does not charge sales tax on e-commerce transactions.

Shifting back to the office market, CBRE-Canada noted that employees are changing how they work and the traditional office with walls is going away.  They also noted that employees, especially younger ones, communicate with each other by text message rather than phone, reducing the "noise level" in the office to the clicking of small touchscreens...and reducing the need for walls to control cross conversations.

PPR/CoStar commented that the average lease that they see in the office market has decreased from 5,000 square feet to 3,600 square feet.  This statistic is reinforced by the results of a CoreNet survey of 500 corporate real estate executives who have changed their office plan metric from 225 square feet per employee down to 175 square feet in 2012 with a projection of only 150 square feet by 2017.  This change means that development of new buildings will continue to be pressured as it will take longer to absorb space in overbuilt markets. 

The final point from the annual meeting is that while there is abundant capital available for the right deal all of these other factors are driving developers to spend that capital on remodeling and repurposing of existing space. 

How I Spent My Summer Vacation

It has been a while since I have posted anything to this blog...no, I was not on sabbatical on some desert island...I have been traveling around North America talking to consulting engineers, contractors, and data center owners and operators.  This posting just provides a few insights that I garnered over the last 2 months on the road.

First, the data center/mission critical market continues to occupy the minds and the design resources of many, many companies in the design community.  It is clear that this is a market segment that is vibrant, and all indications are that it will continue to be for quite some time to come.  The latest issue of Datacenter Dynamics FOCUS indicated that the world now consumes over 300 TWh annually to drive data centers, with the US alone consuming over 25 TWh.  Consumption in the US is projected to grow over 9% in 2013.  While this information points to a growing market, it also points to the urgent need for improved operating efficiency in data centers.

Second, and related to the first item, is the lack of knowledge about new "best practices" in data center design.  I have talked to dozens of engineers, contractors, and data center people who are not aware of the latest design guidelines from ASHRAE.  In fairness, those guidelines were only officially announced a few weeks ago...but they have been rumored and discussed for about a year now.  I mentioned in one of my earlier posts that education of the design community is an important, and ongoing, task.  This has been reinforced to me over the last 2 months.

Third, for those engineers and contractors who understand and embrace the new standards, is the challenge of convincing the data center people to adopt those standards.  This is less of a problem at the top levels of the data center company than it is on the floor of the data center.  The IT equipment operators who live in "the white space" seem not to understand the allowable operating temperatures of the equipment that they manage every day.  I have heard many different reasons for their reluctance to adopt the new best practices but I think it comes down to fear.  Because of stringent SLAs the operators worry about losing any equipment for any period of time...even though there is mounting research that this fear is unfounded.

Fourth, I have heard of several cases where the local electric utility has started to put limits on the available service capacity for planned centers.  In the US we are so comfortable with the idea that our electric grid can provide unlimited power that we forget that is not true.  We have a fixed number of power plants with only so much generating capacity.  With the tremendous growth of data centers, and data centers with 300 to 500 watts per square foot of electrical demand, there is a limit to what a utility can do.  And timing is another element of the equation.  A data center can be built in a matter of months...a power plant takes years.  So even when a utility sees the demand coming it cannot add capacity as quickly as the demand can be added.

So, these are a few observations from the last couple of months.  Of course there is more to the story and feel free to comment on this post with any questions you might have.  I will try to respond as quickly as possible.

Too Hot to Handle? A Simple Reminder

Well, this is embarrassing.  I have been in the HVAC industry for over 40 years now and have helped design and manufacture some of the more sophisticated products that have been introduced.  But, in spite of that, I have to admit that I messed up.  And the lesson I was reminded of can help you too if your residential, commercial, or mission critical system is struggling to keep up with the heat.

Over the past couple of weeks the temperature here in Texas has been over 100 degrees F every day...sometimes up around 105 to 110.  That is nothing unusual for Texas in the summer and not as bad as last year.  But I started to notice that my residential HVAC unit was no longer able to maintain my thermostat setpoint of 77 to 79 degrees F.  The system was consistently running 3 degrees behind and running non-stop...and was only installed a year ago.

Refrigerant leak?  Undersized?  Dog left the door open?

No...it was one of the most common problems in any HVAC system that is not running correctly...the condenser coil was coated with a fine film of dirt.  Let me repeat that...a FINE film of dirt.  Not clogged...not even very obvious at a quick glance...a FINE film.  In my case it was actually a fine film of dryer lint since the clothes dryer outlet was located behind the condensing unit...but the point is that had a service tech not looked at the coil with a flashlight I never would have noticed the dirt.  Running water over the coil from a garden hose to wash off the film dropped the system head pressure and restored the system's ability to maintain the thermostat setpoint without running non-stop.

Many years ago Louisiana State University conducted some tests on residential HVAC systems to determine the impact of dirty condensing coils.  The results were eye-opening.  A fine film of dirt, similar to what I had on my system, would reduce system capacity by up to 20%.  If your home, business, or server room is too hot then imagine what giving it an extra 20% of capacity could do...and it would only cost you a bit of water and time to wash off the coils...with no service tech assistance required.
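To put the LSU finding in concrete terms, here is a minimal back-of-the-envelope sketch in Python.  The 20% loss figure comes from the study cited above; the 3-ton unit size is a hypothetical example, not from the study.

```python
# Illustrative effect of a fouled condenser coil on capacity and runtime.
# The 20% loss fraction is the LSU figure cited above; the 36,000 Btu/h
# (3-ton) rated capacity is a hypothetical residential unit.

def derated_capacity(rated_btuh, loss_fraction):
    """Cooling capacity remaining after coil fouling reduces heat rejection."""
    return rated_btuh * (1.0 - loss_fraction)

def runtime_factor(loss_fraction):
    """Extra runtime needed to move the same heat with reduced capacity."""
    return 1.0 / (1.0 - loss_fraction)

rated = 36_000                          # Btu/h, hypothetical 3-ton unit
print(round(derated_capacity(rated, 0.20)))   # 28800 Btu/h effective
print(runtime_factor(0.20))                   # 1.25 -> 25% longer run times
```

In other words, a fine film of dirt can quietly turn a 3-ton unit into a 2.4-ton unit that has to run a quarter longer to do the same job.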

Availability

In the data center world there are several metrics related to “up time”.  You hear terms like SLA (“Service Level Agreement”) that define how many hours out of 100,000 that the servers are guaranteed to be up and running.  Data center people like to talk about SLAs that are “4-9s” or “5-9s”.  “4-9s” would be 0.9999…or the servers will be up 99.99% of the time.

There are a couple of other metrics that are directly related to the cooling equipment.  One is MTBF or “Mean Time Between Failures”.  Another is MTTR or “Mean Time to Repair”.  A third metric is the most meaningful and it is “Availability”.  “Availability” is a measure of how many hours out of 100,000 that the system will be available when you consider MTBF, MTTR, and routine service.
The formula for Availability is:    MTBF/(MTBF+MTTR)

When evaluating the Aztec indirect evaporative cooling unit and its components for a recent data center project, using MTBF and MTTR values from the Aztec Engineering, Technical Service, and Production Departments, the following “Availability” numbers can be derived:
  • For routine maintenance of evaporative media the “Availability” is 0.999871
  • For the MTBF for the pumps and motors the “Availability” is 0.9999333 to 0.9999555
Since the routine maintenance “Availability” is one that can be planned in a way that will not disrupt the overall “Availability” of an N+1, or better, facility it really doesn’t matter that it is only “3-9s”.  In the cases where a failure might occur (MTBF cases) the typical Aztec product is “4-9s” across the board.
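The availability formula is simple enough to sketch in a few lines of Python.  The MTBF and MTTR values below are hypothetical round numbers for illustration, not the actual Aztec department data:

```python
def availability(mtbf_hours, mttr_hours):
    """Availability = MTBF / (MTBF + MTTR), per the formula above."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical example: a component with a 100,000-hour MTBF and a
# 10-hour mean time to repair works out to "4-9s".
print(round(availability(100_000, 10), 6))   # 0.9999
```

This also makes the intuition visible: availability is driven up either by making failures rarer (larger MTBF) or by making repairs faster (smaller MTTR).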

The Aztec indirect evaporative cooling unit can achieve these high levels of "availability" due to the inherent simplicity of a typical evaporative cooling system.  Fans and pumps are considered to be the only significant components in an evaporative cooling system that can fail.  In the case of the Aztec product these components are selected for an expected life of 200,000 hours...probably far longer than the building itself will be used for its original purpose.

A final consideration that was reviewed during this analysis was the skill level required for each repair or maintenance task.  Although this factor cannot be included in a typical metric such as MTTR it is an important factor for the building owner to consider.  Since an evaporative cooling unit such as the Aztec unit contains no refrigerants the vast majority of tasks can be accomplished by what would traditionally be called facilities maintenance personnel.  No special licensing would be required.  It actually turns out that some of the smallest elements of the system are the only ones that might require a licensed service technician.  Replacing contactors and relays in the unit control and power circuits would most likely require a licensed electrical service technician.

In general the "availability" of an evaporative cooling system, such as the Aztec system, will be at least as high as any competing technology and, likely, higher.


Direct Evaporative Cooling Analysis for Two Diverse Climates

One of the common concerns expressed about the use of evaporative cooling for data centers, server rooms, telecom facilities, or other facilities housing heat producing electronics is the ability of evaporative cooling to achieve the target inlet conditions for the electronics.

These two psychrometric charts show the results of an actual analysis in two distinctly different climates.  The target server inlet temperatures were between 65 and 85 degrees F and between 20% and 80% RH.  The mechanical system criteria mandated that direct evaporative cooling be used.

The proposed mechanical system consisted of a direct evaporative cooling system with 12" cellulose media, a steam humidifier, DDC controls, and a hot aisle/outside air mixing section.  The controls would be configured to modulate the outside air dampers, hot aisle dampers, evap media water flow, and the humidifier to maintain the target conditions.
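A simple model shows why 12" media can reach the target band.  A direct evaporative cooler moves air toward its wet-bulb temperature, and how far it gets is the media's saturation effectiveness.  The 90% effectiveness figure is a typical value for 12" cellulose media and the design-hour temperatures are hypothetical, not taken from the study itself:

```python
def direct_evap_supply_temp(dry_bulb_f, wet_bulb_f, effectiveness=0.90):
    """Leaving dry-bulb temperature of a direct evaporative cooler.

    The air is cooled toward its wet-bulb temperature; effectiveness
    (assumed ~90% for 12-inch cellulose media) sets how close it gets.
    """
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# Hypothetical hot design hour: 95 F dry bulb / 75 F wet bulb
print(direct_evap_supply_temp(95, 75))   # 77.0 F -> inside the 65-85 F target
```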

Server Inlet Temperatures from Evaporative Cooling System in Pacific Northwest US

Server Inlet Temperatures from Evaporative Cooling System in Southeastern US


As you can see from the charts the proposed system would easily achieve the desired results.  In fact, it was found that outside air cooling alone could achieve the targets during roughly 15% of the year and direct evaporative cooling could achieve them during roughly 60% of the year.  During the remaining hours, when the air was too cold to properly operate the evaporative cooling without fear of freezing, a combination of hot aisle and outside air supplemented with the humidifier would hit the target.  The only excursions of temperature over the maximum target of 85 degrees would occur for no more than 5 hours a year based on the NOAA TMY2 weather history.

While not every location would achieve these results, the diversity of climate between these two studies implies that direct evaporative cooling, with creative use of mixing and controls, will work in many more climates than most people expect.  To further expand the capabilities of evaporative cooling an indirect evaporative cooling element could be added to produce even broader temperature control.  Although it was not analyzed for this case we would expect that adding an indirect evaporative cooling element to these systems would have eliminated the few hours of excursion above the maximum dry bulb temperature.

As a means to dramatically reduce energy consumption for these data modules the evaporative cooling solutions such as those manufactured by the Aztec division of Mestek would prove to be extremely effective.  Since evaporative cooling systems are also relatively simple mechanical systems with no refrigerants maintenance of the systems does not require licensed refrigeration technicians and there are very few elements that could fail...increasing uptime and providing much faster recovery time should a repair be necessary.

"Make everything as simple as possible, but not simpler."

I have addressed this topic before but it bears discussing again.  I was reading an article in a high tech blog the other day and it repeated the oft-quoted "rule" of good design from Albert Einstein..."make everything as simple as possible, but not simpler."  A few months ago I also quoted an engineer who reminded me that a system is not "sustainable" if it is not "maintainable".

It seems that in spite of these two pieces of advice, and numerous studies that highlight efficiency degradation when equipment is not properly maintained, we continue to see elaborate custom cooling solutions when a simple "off the shelf" product will accomplish the same thing...and has a better chance of staying that way.

As an industry we bemoan the lack of qualified service technicians and then we turn around and send them to jobsites populated with unique, one of a kind, complicated HVAC solutions.  What are we thinking?

I will admit that there are some cases that are so difficult to solve that something special is truly needed.  Critical human medical care might apply.  Some very high tech product production might apply.  Production of pharmaceuticals might apply.  But most server rooms and data centers no longer seem to apply.  ASHRAE and the server manufacturers themselves have said that the old ways no longer apply.  IT equipment can stand much higher temperatures and humidities than previously thought and much broader swings of those measures than ever before.  So why design around complex custom equipment?

As a manufacturer we know, and can pretty accurately predict, how a standard piece of equipment will perform in any given situation.  As soon as we are asked to "change it just a little"...which normally means throwing out the original design and starting over...then all bets are off.  We can use the same standard components that we would normally use with an expectation of similar performance but, in reality, we no longer know exactly what to expect.

And then there is the issue of compliance with the myriad of agency and code safety tests that all manufacturers must apply to their equipment.  Standard equipment is designed, tested, and certified to meet those standards...custom equipment is designed to the standards but is probably not tested and certified to the standards.

And finally we have the issue of maintainability.  Service technicians are trained to work on specific types of equipment.  Many types of standard equipment require licensed technicians for service.  Given the broad range of equipment types in the market today it would be extremely rare to find a service technician who could be proficient on all standard equipment...much less something he or she has never seen before.

The topic of "total cost of ownership" is starting to pop up again in some publications.  It is reassuring to see that some people are starting to go back to considering something beyond the initial capital expense...but operating expenses consist of more than just energy costs.  Remember the cost of maintaining the mechanical system in the long run so that the money spent up front for an efficient solution does not go out the window a couple of years down the road.

Green Grid Updates Free Cooling Maps for Data Centers

The Green Grid has released White Paper #46 as an update to their "free cooling" maps for data center design and operation.  The research was edited by Emerson Network Power, Intel, and Schneider Electric. 

The reason for this update to the "free cooling" maps was the latest changes to the ASHRAE TC 9.9 operating/design guidelines for data centers.  For those who have not yet seen those new guidelines they allow a much larger operating range for data centers and server rooms that use some of the latest equipment from companies like Dell and HP.

For those of us who are "metric challenged" 40 degrees C = 104 degrees F and 35 degrees C = 95 degrees F.
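The conversions above follow directly from the standard formula, sketched here for anyone who wants to check other guideline temperatures:

```python
def c_to_f(celsius):
    """Convert Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

print(c_to_f(40))   # 104.0
print(c_to_f(35))   # 95.0
```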

When you consider that many data center operators still seem to want their rooms at 70 degrees or lower it is clear that these new criteria are a massive change in operation and design concepts.  It is also clear that adopting the newest guidelines can result in enormous energy savings.

The Green Grid paper includes a couple of maps to quickly illustrate how extensive the potential for "free cooling" has become under the latest operating/design guidelines. In these maps the darker the blue color the more hours that "free cooling" could be employed.  The darkest color blue indicates that all 8760 hours are suitable for "free cooling".  The maps also consider the coincident dewpoint temperatures as that metric is important also.

This first map is for ASHRAE Class A3 environments and shows that virtually all of North America could have their data centers cooled without using chillers or compressors.  The second map is for ASHRAE Class A2 environments and shows that roughly 80% of North America could still be cooled most of the year with no chillers or compressors.

The question for data center operators and designers who want to implement these new temperatures is what to do about those 500 or 1,000 hours when the outside air conditions are not quite right.

It is still quite possible to operate the center with no compressors or chillers if the designer will incorporate an evaporative cooling system such as the Aztec indirect evaporative cooling system or even the Alton direct evaporative cooling system.

Since evaporative cooling systems operate using 100% outside air all the time they make an excellent "hybrid" approach.  During the many hours of the year when "free cooling" will satisfy the conditions either type of evaporative cooling system will provide cool, filtered, outside air.  The Aztec indirect evaporative cooling system has the added advantage of allowing recirculation of hot aisle air during the very coldest months when "free cooling" could actually over-cool the data center.

During those few hours of the year, however, when it is simply too warm for "free cooling" to work, the Aztec or Alton systems can automatically initiate their evaporative cooling cycles and trim the outside air temperatures down to levels that fall well within the new ASHRAE guidelines...again, with no compressor or chiller energy required.  The air leaving the evaporative cooling system will usually be about 3 degrees F higher than the wet bulb temperature.  This chart should give you an idea of the potential air temperature that an evaporative cooling system can provide.
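The rule of thumb above (leaving air runs about 3 degrees F above the wet bulb) can be sketched as a one-liner.  The 75 degree F wet bulb in the example is a hypothetical hot-day value, not from the Green Grid paper:

```python
def evap_supply_temp(wet_bulb_f, approach_f=3.0):
    """Rule of thumb: evaporative cooler leaving air is about
    3 F above the outdoor wet-bulb temperature."""
    return wet_bulb_f + approach_f

# Hypothetical hot-day wet bulb of 75 F:
print(evap_supply_temp(75))   # 78.0 F supply air, no compressor energy
```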

The Green Grid whitepaper is just the latest in a growing number of research papers and documents that point operators and designers in a direction that can save tens of thousands of dollars and kwh if they are willing to make the investment in the latest technologies from both the IT equipment manufacturers and the HVAC equipment manufacturers.


How We Used To Do It

I was recently reading an engineering magazine article (I know, I need to get a life) and came across a question that set me to thinking..."how did people stay cool before we had chillers?".  After all, in the grand scheme of life we have only had chillers and air conditioning systems for a very short time.  So what did people do before those things existed and what can we learn from that?

One of the first lessons from the past is that hot air rises.  Seems obvious doesn't it?  Believe it or not there is actually a company that is successfully convincing people that by making their air even hotter than everyone else they can do a better job of keeping people comfortable from 20 or 30 feet above them.  But that is a different story for another time.

Stack Effect
Because people realized that hot air rises, many early structures in very warm climates would be built with very high roof lines.  This would allow the hottest air to stay above the people and increase their comfort.  Many of those structures would also have vents or openings at the highest point of the roof so that the hot air could escape.  As that hot air left the structure it would be replaced by cooler outside air near the floor level.  A continuous circulation pattern would develop that kept the "cooling cycle" going.  The taller the structure, and the hotter the air, the faster this cycle would operate.  Today, we call that phenomenon "stack effect" and you see it in every tall building elevator shaft in the world.  You also see it in chimneys for residences.
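The strength of that natural circulation can be estimated with the standard stack-effect approximation, dP = 3460 * h * (1/T_out - 1/T_in), with h in meters, temperatures in kelvin, and dP in pascals.  The building height and temperatures below are hypothetical, chosen only to illustrate the "taller and hotter means faster" point:

```python
def stack_draft_pa(height_m, t_inside_c, t_outside_c):
    """Approximate stack-effect draft pressure in pascals.

    Standard approximation: dP = 3460 * h * (1/T_out - 1/T_in),
    h in meters, temperatures in kelvin.  A positive dP means the
    warm inside air is buoyant and rises out the top vents.
    """
    t_in_k = t_inside_c + 273.15
    t_out_k = t_outside_c + 273.15
    return 3460.0 * height_m * (1.0 / t_out_k - 1.0 / t_in_k)

# Hypothetical tall hall: 10 m high, 32 C inside near the roof, 22 C outside
print(round(stack_draft_pa(10, 32, 22), 1))
```

Doubling either the height or the inside-to-outside temperature difference roughly doubles the driving pressure, which is exactly why those old high-roofed structures worked.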

After the invention of air conditioning though we seem to have forgotten one of the key elements of this natural cooling cycle...venting the hot air out of the building.  In most modern air conditioned buildings we keep the hottest air inside the building and just keep cooling it back down in a constant cycle that requires compressor or chiller energy.  In many cases the hot air inside the building is still cooler than the hot air outside the building so this might make sense during the hottest months of the year.  However, in the case of a data center or server room, the hot aisle air is usually much hotter than the air outside...but most data centers use cooling equipment that constantly tries to cool down that hot aisle air resulting in huge energy consumption.

Some systems also take advantage of the "stack effect" in a shorter building by recognizing that any heat source in the space will create its own "mini stack effect".  Cooler air will be drawn towards the heat source and the hot air above the heat source can be exhausted.  This creates some natural circulation in the space and is one of the key principles behind "displacement ventilation".

Another lesson from the past is that evaporating water will make air cooler.  We actually use that very same principle in modern chiller systems that include a cooling tower.  The cooling tower is nothing more than a very large evaporative cooler.  In the old days people would use wet cloths or reeds in a window opening and when air entered the building through those wet items (probably accelerated by the building "stack effect") the entering air would get cooler and the people would be more comfortable.  Today there are many types and sizes of evaporative coolers available, such as those from Alton and Aztec divisions of Mestek, and they work even better than those primitive early methods.  But no compressor or chiller energy is required.

Of course there are building construction techniques that are also based on lessons from the past.  Positioning a building so that the smallest outside wall area is the one that sees the most sun will help keep the occupants cooler.  Using "thermal mass"...thick, heavy walls...can also keep occupants cooler by storing cool night air energy in the wall and releasing it slowly during the hottest part of the day.  Again, we often build very lightweight buildings today and try to compensate by adding insulation but nothing beats two feet of solid rock.  Some architects are working to revive this technique and research is continuing on chemical treatments for walls and ceilings that allow them to store energy longer.

One case where creating a lot of "thermal mass" might not be such a good idea is in the data center world.  Depending upon how the hot aisle air is handled it might actually be a good idea to make the walls very thin so that the heat can escape to the outside through the walls.

Finally, the use of shades and window coverings is also a key lesson from the past.  Some companies, such as the American Warming division of Mestek, offer exterior solar shades that actually track the position of the sun and change angle in order to maximize the shading effect.

There are many other lessons from the past that could be discussed but the key is to stop and think about how we used to do things.  Sometimes adapting ideas from the past to ideas from today can result in the best overall solution.

Prineville Server Farm with "free cooling"

Data Center Dynamics is an international organization with a single mission of sharing best practices among data center designers and operators around the world.  The organization publishes a trade magazine called "Focus" and they have just released their January, 2012 edition.  This edition is a retrospective look at 2011.

One of the articles included comments from some of the industry's leading players in response to two questions:  "What was the most important data center development of 2011?" and "What single advancement will most positively impact the data center sector in 2012?"

Some of the responses were:

Bill Kosik; Principal data center energy technologist, HP Enterprise Business Technology Services:

"For the first time in 2011, many of our clients wanted to implement a design temperature of 75 degrees F for the inlet air to the IT equipment."  "When you couple increased supply air temperatures with ultra-efficient air-conditioning equipment (indirect evaporative cooling as an example), you start to see PUEs drop into the low 1.2s/upper 1.1s..."

Andrew Donoghue; Analyst, The 451 Group:

"ASHRAE released a white paper....redefined and reclassified new allowable ranges up to 113 degrees F.  Higher operating temperatures could mean that new facilities can be built without the need for expensive cooling technology, such as mechanical chillers."

Dileep Bhandarkar; distinguished engineer, Global Foundation Services, Microsoft:

"Broad recognition across the industry that free air cooling technology is now considered mainstream."

Jim Hearnden; Product technologist, data center power and cooling, Dell Services:

"Newer technology will permit higher server intake temperatures, which will be a great step forward in 2012."

The common thread through all of the comments is the drive to lower energy costs by raising server inlet temperatures.  Most of the more advanced companies are even going to the point of using 100% outside air with no tempering at all.  Aztec indirect evaporative cooling systems from Mestex, a division of Mestek, offer an alternative that filters and cools the air down to within 2 degrees of the wet bulb temperature (usually in the 70 to 80 degree range).  This allows the designer and operator to have acceptable server inlet temperatures and still have a very low PUE.  For installations that still need some degree of control over the air temperature and desire filtered, clean, air this might be the best solution.
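For readers new to the metric mentioned in the quotes above, PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so a lower number means less overhead.  The load values below are hypothetical:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power over IT power.
    1.0 would mean every watt goes to the IT equipment."""
    return total_facility_kw / it_equipment_kw

# Hypothetical 500 kW IT load with 100 kW of cooling/lighting/etc. overhead:
print(pue(600, 500))   # 1.2 -> the "low 1.2s" range quoted above
```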

Ten Reasons to Tone Down on Climate Control

Sometimes the best thing to do is to acknowledge when someone else does something right.  In this case Nicholas Greene posted an article on TechAxcess that sums up the ten reasons why data center designers and operators should adopt modern cooling design criteria for their centers.  Nicholas clearly articulates the reasons and I cannot improve on what he says.

I can only reinforce the message that there are alternative cooling methods that can provide cool...not cold...filtered, clean, air for data centers.  Aztec Indirect Evaporative Cooling Systems from the Mestex division of Mestek can send 65 to 80 degree air to the cold aisle and allow the designer to exhaust 100% of the heat from the hot aisle without using any refrigerants or compressors.  The result is cold aisle conditions that meet the latest ASHRAE TC 9.9 criteria and that address the issues that Nicholas covered in his article.

As Nicholas says...it is time for operators and designers outside of the "big names" to get on board and start implementing these energy saving technologies.

Air Pollution and HVAC

Over the Christmas/New Years holiday break I was able to spend some time on the road crossing Texas, New Mexico, and parts of Arizona.  While I saw plenty of interesting sights it is not the goal of this blog to create a travel channel.  The goal is to highlight technologies and subjects of interest in the HVAC arena.

The subject that came to mind as I drove across these states was that of indoor air quality.  Two areas in particular brought this topic to my attention.  Phoenix, Arizona and El Paso, Texas were both covered with a thick layer of smog as I passed through those towns.  The climatic reasons are not all that relevant to this discussion but the "temperature inversions" that are common in those areas at certain times of the year mean that smog will develop and stay trapped for hours, if not days.  But those two cities are not alone.  Los Angeles, California has been well known for poor air quality for years.  New York City leaders have become concerned enough about outside air quality to include provisions in their new "Green Codes" that are intended to address the issue.  Finally, attention has been brought by the folks at NOAA to the fact that pollution in China eventually makes its way to the US on the jet stream.

HVAC products can either help mitigate this problem or simply move it from the outdoors to the indoors.  All buildings with occupants are required by building codes to have some amount of "ventilation air".  It has been common practice to introduce that ventilation air through conventional air handlers or packaged rooftop equipment.  In the vast majority of cases that equipment was designed, and is applied, with only the minimum level of air filtration included.  The primary goal of the filtration has been to protect the components of the equipment from dust fouling and to provide a nominal level of indoor air quality improvement.  New requirements and guidelines that specify MERV 11 and higher filtration are intended to let the equipment begin to mitigate poor outdoor air quality before it enters the space.  But how effective is this?

In a conventional HVAC system design there will be dozens of these filters, if not hundreds, scattered all over the building in numerous air handlers or packaged units.  Maintaining all of these filters properly becomes an ongoing task.  In addition, if even better filtration is required, or desired, the average piece of HVAC equipment simply lacks the space to provide more filtration.

Dedicated Outdoor Air Systems, or "DOAS", equipment helps address this.  By isolating all of the ventilation air requirements into a single point, maintenance of the filtration system becomes much easier.  In addition, some "DOAS" equipment, such as the Applied Air FAP, is designed to allow multiple stages of filtration.  When combined with low airflow systems such as chilled beams the result can be very clean ventilation air even in areas such as those I drove through over the holidays.

As a final consideration for indoor air quality I would suggest that the old, ancient actually, technology of adiabatic or evaporative cooling might be considered.  Although adiabatic or evaporative cooling can provide effective temperature control in vast parts of the United States it can also provide an extremely effective filtering system as well.  Air is literally "washed" as it passes through the unit.  As part of an overall system where the adiabatic or evaporative cooling system, such as the Alton or Aztec products, only provides the ventilation air and other equipment handles sensible and latent cooling the improvement in indoor air quality could be dramatic.

Using CFD in the HVAC Industry

One of the most useful design analysis tools now being used in the HVAC industry is something called "CFD".  "CFD" stands for "Computational Fluid Dynamics" and it provides insights into potential building performance that no other analysis tool can provide.  While CFD software has been in the marketplace for over 20 years it is still fairly rare in the HVAC environment.  CFD was originally used in the aerospace and fluid process industries.  It is now the number one analysis tool used in Formula One and automotive design in general.  CFD is also now widely used in the data center design community.

The reason that CFD is so widely used in those high technology industries is that CFD allows the designer to "see" air in a space...its temperature, its direction, its velocity, and even its density and moisture content in some cases.  Mestex has been using CFD for years to help engineers and building designers understand the best product for their building and the best place to locate that product.



By creating a 3D model of a building and its contents, and then adding a model of the proposed cooling or heating product, the CFD user can evaluate just how well the proposed system will satisfy the design requirements.  More sophisticated versions of CFD software, such as the software used by Mestex, can actually help the designer optimize the application by letting the computer modify the equipment location and operating conditions to get a result that is as close to the design target as possible.

CFD studies can be very complicated, and might take several days to complete, but the end result is so valuable that it is worth the time.  Mestex recently completed a study for a pharmaceutical company that assured the company that their multi-million dollar inventory would be kept at the proper temperature anywhere in their warehouse.  No other analysis tool can provide that type of owner security.