How to break PUE barriers in a 20-storey data centre

30 September 2017
Ed Ansett at DCD Zettastructure

Singapore is looking into the feasibility of building a 20-storey data centre with an IT load of 50 MW and a PUE (Power Usage Effectiveness) of about 1.1, a 15-20 per cent improvement over the current lower PUE limit of 1.4.

But is it realistic to get a PUE of 1.1 in Singapore?

Speaking at the recent DatacentreDynamics event, Zettastructure Singapore, Ed Ansett, co-founder and chairman of I3 Solutions Group, outlined some of the design considerations and challenges involved in meeting the specifications for a 20-storey data centre.

PUE helps determine how energy efficient a data centre is. It is the ratio of the total energy used by the facility to the energy delivered to its computing equipment, so the higher the PUE, the less energy efficient the facility is.
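To put those figures in context, here is a rough sketch of the arithmetic behind them, using the 50 MW IT load and the 1.4 and 1.1 PUE values quoted above (illustrative only, not part of the actual design):

```python
# Rough illustration of what the quoted PUE targets mean for a 50 MW IT load.
# The figures come from the article; everything else is back-of-envelope.

def total_facility_power(it_load_mw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so total = IT load x PUE."""
    return it_load_mw * pue

IT_LOAD_MW = 50.0

current = total_facility_power(IT_LOAD_MW, 1.4)  # 70 MW total, 20 MW of overhead
target = total_facility_power(IT_LOAD_MW, 1.1)   # 55 MW total, 5 MW of overhead

saving = (current - target) / current
print(f"Total at PUE 1.4: {current:.0f} MW; at PUE 1.1: {target:.0f} MW")
print(f"Reduction in total facility energy: {saving:.0%}")  # roughly 21 per cent
```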

The 35 degree question

In Ansett’s view, there is only one way to achieve a PUE of 1.1 in Singapore, and that is to allow data centres to operate at a much higher temperature. “If we can operate with supply air at 35 deg C, we will have no problem. No need to do any cooling.”

Cooling takes centre stage in many PUE discussions because it is the greatest energy guzzler in data centres, commandeering up to 50 per cent of operating costs.

According to Ansett, there is no reason not to operate at 35 degrees. The operating conditions listed in the catalogues of major equipment manufacturers show that their hardware will run fine at 35 deg C. “Don’t go to a mid-range server. Go to a high-end mission critical server specification … and at 35 deg, 40 deg C, they are all fine.”

“So why are we dealing with the allowable limit of 32 degrees?”

Guidelines from ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers) put the recommended limit for data centre operations at 27 deg C, and the upper allowable limit at 32 deg C.
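For reference, the ASHRAE envelopes are usually quoted per equipment class. Below is a minimal sketch of how a given inlet temperature compares against them; the ranges are the commonly cited allowable limits from the ASHRAE thermal guidelines and are approximate, so check the current edition before relying on them:

```python
# Commonly cited ASHRAE allowable inlet-temperature ranges in deg C (approximate).
ALLOWABLE_C = {
    "A1": (15, 32),
    "A2": (10, 35),
    "A3": (5, 40),
    "A4": (5, 45),
}
RECOMMENDED_C = (18, 27)  # recommended envelope shared by all classes

def classes_supporting(inlet_c):
    """Return the equipment classes whose allowable range covers this inlet temperature."""
    return [c for c, (low, high) in ALLOWABLE_C.items() if low <= inlet_c <= high]

print(classes_supporting(32))  # ['A1', 'A2', 'A3', 'A4']
print(classes_supporting(35))  # ['A2', 'A3', 'A4'] - the 35 deg C case Ansett argues for
```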

Disproving ASHRAE

Singapore is looking to test these limits with an ongoing green tropical data centre trial, set up as part of the recommendations under the government’s Green Data Centre Roadmap.

But Ansett believes this is not enough. The trial needs to be run at a much bigger scale if it is to disprove ASHRAE and, more importantly, convince data centre customers that their equipment will operate fine at the higher temperatures. “We need to prove that hundreds or thousands of servers can run comfortably at 35 deg. We need the science to justify to the world that servers can operate with inlet temperatures of 35 degrees.”

The other big question that has to be addressed, he said, is whether a PUE of 1.1 is needed for the whole data centre or just a part of it.
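A minimal sketch of why that distinction matters, assuming, purely for illustration, that only half of the IT load sits behind PUE-1.1 infrastructure while the rest uses conventional PUE-1.4 plant:

```python
# Illustrative only: whole-facility PUE when only part of the IT load is served
# by the most efficient plant. Zone splits and PUE values are assumptions.
zones = [
    {"it_mw": 25.0, "pue": 1.1},  # e.g. a high-temperature, free-cooled zone
    {"it_mw": 25.0, "pue": 1.4},  # e.g. a conventionally cooled zone
]

total_it = sum(z["it_mw"] for z in zones)
total_facility = sum(z["it_mw"] * z["pue"] for z in zones)

print(f"Whole-facility PUE: {total_facility / total_it:.2f}")  # 1.25
```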

As for the design challenges surrounding cooling, Ansett pointed out that the data centre project is likely to require a bespoke cooling plant. “It will not have the roof or surface area to do traditional cooling, so automatically we have a problem. Where do we put the cooling plant?”

A different cooling method may have to be used, said Ansett, as he discussed options ranging from air cooling and the use of storm water to thermoclines and indirect adiabatic cooling. A good option, he said, is the use of chilled waste water produced by liquid nitrogen gas plants.

IT load considerations

Looking at the IT load specification of 50 MW, or an average of 2.5 MW per floor across 20 floors assuming all are data floors, Ansett’s view is that it is “probably more financially viable” to go for 100 MW. “If you are going to spend money on infrastructure, you only want to build it once, but build it bigger.”

But size is one thing, flexibility is another. The prospective tenants for the data centre could range from banks and other enterprises at the bottom of the density scale, with racks of perhaps 3-5 kW, to hyperscale customers typically requiring 10 kW racks, to supercomputing customers which could easily need 30 kW per rack. Each of these will need a different type of topology, and that presents serious design challenges, said Ansett.
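To get a rough sense of what those densities mean on a 2.5 MW data floor, here is the power arithmetic alone, ignoring floor space, cooling and redundancy (the per-rack figures come from the article; the rest is illustrative):

```python
# Rough rack counts per 2.5 MW floor at the rack densities quoted in the article.
FLOOR_IT_KW = 2_500

rack_densities_kw = {
    "enterprise (3-5 kW)": 4,    # midpoint of the quoted 3-5 kW range
    "hyperscale (10 kW)": 10,
    "supercomputing (30 kW)": 30,
}

for tenant, kw_per_rack in rack_densities_kw.items():
    racks = FLOOR_IT_KW // kw_per_rack
    print(f"{tenant}: roughly {racks} racks per floor")
# enterprise: ~625 racks, hyperscale: ~250 racks, supercomputing: ~83 racks
```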

Some other data centre design considerations covered by Ansett in his presentation included:

Power sources: Ansett thinks solid oxide fuel cells are “a really good solution” and would like to see the world’s first data centre with no utility electricity supply.

Emergency power and distribution: “Personally I would prefer no generators,” he said. With the zoning approach adopted by utility suppliers, the data centre could tap into diverse zones for emergency power and distribution. “The chances of simultaneously losing all zones will be negligible. So instead of filling up the space with standby generators, we could have multiple zones supplying to the data centre.”

Uninterruptible power supplies (UPS): UPSes will still be needed, though, not so much for standby power as for power conditioning. While Singapore’s power system is highly reliable, UPSes will be able to smooth out minor power anomalies, said Ansett.