Attention, Data Center Managers: Copy These Innovations

From cooling tips to advice on where to locate your data center, tech giants share the lessons learned during recent builds

Data center decisions are never easy, no matter the size of your company. When it comes to making the most of your facility, why not follow the lead of the big players? Computerworld talked to executives at some of the tech industry's largest companies to find out how they are innovating in brand-new data centers, including one that Google built in Belgium and Cisco's new state-of-the-art facility in Texas. Intel and Yahoo also weighed in with their best practices.

Google operates "dozens" of data centers all over the world. The firm's primary focus is on making its data centers more efficient than industry averages, says Bill Weihl, green energy czar at Google. According to EPA estimates, many data centers run at a PUE (power usage effectiveness) of around 2.0, meaning the facility draws twice as much power as its IT equipment actually uses. PUE is a data center's total energy consumption divided by the energy consumed by its IT equipment alone.

Google, for its part, runs at around 1.18 PUE across all its data centers, Weihl says. One of the ways Google has become more efficient is by using so-called "free cooling" for its data centers.
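
For readers who want to benchmark their own facility, the PUE arithmetic is straightforward. The sketch below is a minimal illustration in Python, using the article's figures (an industry average near 2.0 and Google's roughly 1.18) purely as example inputs:

    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        """Power usage effectiveness: total facility energy over IT equipment energy."""
        return total_facility_kwh / it_equipment_kwh

    # Example inputs only: a facility whose IT gear consumes 1,000 kWh.
    print(pue(2000.0, 1000.0))  # 2.0  -- roughly the EPA-cited industry average
    print(pue(1180.0, 1000.0))  # 1.18 -- roughly Google's reported fleet-wide average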

"We manage airflow in our facilities to avoid any mixing of hot and cold air. We have reduced the overall costs of cooling a typical data center by 85%," Weihl says, adding that the reduction comes from a combination of new cooling techniques and power back-up methods described below. The average cold-aisle temperature in Google's data centers is 80 degrees instead of the typical 70 or below. (Hot-aisle temperature varies based on the equipment used; Google would not elaborate on specifics about hot aisle temperatures or name specific equipment.)

Further, Google uses evaporative cooling towers in every data center, including its new facility in Belgium, according to Weihl. The towers push hot water to the top of the tower through a material that speeds evaporation. While evaporative cooling is doing the work, the chillers that would otherwise cool the data center are not needed, or are used far less often.

"We have data centers all around the world, in Oregon where the climate is cool and dry and in the southwestern and midwest part of the U.S. Climates are all different -- some are warmer and wetter, but we rely on evaporative cooling almost all of the time," he says.


Weihl says the data center in Belgium, which opened in early 2010, does not even have back-up chillers, relying instead on evaporative cooling. Weather hot enough to overwhelm the evaporative system would be a "100-year event," he says, so Google chose to forgo back-up chillers to reduce the facility's electrical load. The data center runs at maximum load most of the time, he says; on the infrequent hot days, administrators put a few servers on idle or turn them off.
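
That hot-day fallback can be pictured as a simple load-shedding policy. The sketch below is purely illustrative: the 85-degree threshold, the server objects, and the idle() call are invented for the example and are not Google's operational tooling:

    def shed_load(servers, cold_aisle_temp_f, limit_f=85.0):
        """Idle the least-utilized servers whenever the cold aisle runs too hot.

        All thresholds and server attributes here are hypothetical examples.
        """
        if cold_aisle_temp_f <= limit_f:
            return []
        idled = []
        # Idle roughly 5% of the fleet at a time, starting with the least-busy machines.
        for server in sorted(servers, key=lambda s: s.utilization)[:max(1, len(servers) // 20)]:
            server.idle()
            idled.append(server)
        return idled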

He advises companies to look seriously at "free cooling" technologies in the data center, such as the evaporative cooling towers described above. Another option is to use towers to redirect outside air to the servers, let server temperatures rise within acceptable ranges, and apply less direct cooling to the racks.
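
To make that advice concrete, the mode selection can be sketched as a simple control decision. This is a hypothetical simplification: the temperature margins and mode names below are assumptions for illustration, not Google's actual control logic:

    def choose_cooling_mode(outside_temp_f, wet_bulb_temp_f, cold_aisle_setpoint_f=80.0):
        """Pick the least energy-intensive cooling mode that can hold the cold-aisle setpoint.

        The margins (10 and 15 degrees F) are illustrative placeholders, not published values.
        """
        if outside_temp_f <= cold_aisle_setpoint_f - 10:
            return "air-side free cooling"      # outside air alone is cool enough
        if wet_bulb_temp_f <= cold_aisle_setpoint_f - 15:
            return "evaporative cooling tower"  # evaporation can still reject the heat
        return "mechanical chillers"            # fall back to compressor-based cooling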

In terms of power management, Google uses voltage-transformation techniques that convert incoming AC power down to the DC voltages its servers need. Google also uses local back-up power -- essentially, a battery on each server -- instead of a traditional centralized UPS, largely to cut the losses of the extra AC-to-DC-and-back conversions a centralized UPS requires.
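
To see why the conversion chain matters, here is a back-of-the-envelope comparison. The stage efficiencies are illustrative assumptions, not figures Google has published:

    # Hypothetical stage efficiencies (assumptions for illustration only).
    server_psu  = 0.94   # AC-to-DC conversion inside every server power supply
    ups_rectify = 0.96   # extra AC-to-DC stage in a double-conversion central UPS
    ups_invert  = 0.96   # extra DC-to-AC stage back out of that UPS

    with_central_ups  = ups_rectify * ups_invert * server_psu  # ~0.87 end to end
    with_onboard_batt = server_psu                             # ~0.94 end to end

    it_load_kw = 1000.0
    print(it_load_kw / with_central_ups)   # grid power needed with a centralized UPS
    print(it_load_kw / with_onboard_batt)  # grid power needed with per-server batteries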

Source: Computerworld
