Greener pastures for the Singapore data centre industry

8 August 2014
Panel Discussion at Energy TechRoadmap

A 35 per cent energy efficiency improvement for IT systems, an 85 per cent improvement for supporting non-IT systems, and potentially the same gains from energy-efficient software – these 15-year goals for the Singapore data centre industry may sound ambitious, but they are inherently achievable, said Ed Ansett, Chairman of i3 Solutions Group.

Ansett was presenting the Singapore Green Data Centre Technology Roadmap at Energy TechRoadmap 2014, a symposium on energy and low carbon technologies organised by the National Research Foundation and National Climate Change Secretariat.

The data centre report, a collaborative effort between i3 Solutions Group, the Infocomm Development Authority and other Singapore government agencies, sets out a framework to improve data centre sustainability by reducing energy consumption and improving the energy efficiency of data centre facilities and IT systems.

Much of this effort is focused on cooling, which is generally considered the single largest energy overhead for the data centre. “The way we did cooling was to essentially use office technology and apply it to the data centre, and we’ve gotten to a point where we realise it is not working very well,” said Ansett.

Singapore’s climate is a major issue. “The energy efficiency issues associated with cooling in Singapore are exacerbated by year-round high temperatures and humidity levels,” said Ansett. “In order to overcome this climatic disadvantage, we require innovative hardware, software and facilities technologies to reduce energy consumption.”

In line with this, one of the goals of the Green Data Centre Technology Roadmap was to identify research topics and goals that would enable Singapore to achieve a dominant position in green IT systems.

Software power management

High on the list was the topic of software power management. Ansett estimated that for every 10W of power attributable to software, there is a cascading effect on energy consumption: an estimated 16W is required to power the IT systems and about 32W must be supplied by the data centre facility. “This tells us we have been looking at the wrong place,” he said. “We can do a lot more with software.”
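To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The 10W/16W/32W ratios come from Ansett’s estimate above; the helper function and its name are purely illustrative, not taken from the roadmap.

```python
# Cascading effect of software power, per Ansett's estimate:
# 10W at the software level implies ~16W at the IT system level
# and ~32W drawn from the data centre facility.

IT_OVERHEAD = 16 / 10        # IT system watts per software watt
FACILITY_OVERHEAD = 32 / 16  # facility watts per IT watt (a PUE-like factor)

def facility_draw(software_watts: float) -> float:
    """Estimated facility power needed to support a given software load."""
    return software_watts * IT_OVERHEAD * FACILITY_OVERHEAD

print(facility_draw(10))  # 32.0 -- each watt saved in software saves ~3.2W upstream
```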

Putting a server in idle mode reduces its power usage by just 50 per cent, he said. “You think you have turned the lights off when you go out of the room but no, you have just turned it to 50 per cent. The only way you can significantly reduce energy consumption in computers is to put them into the sleep state.”

According to Ansett, even at the highest (or least energy-efficient) sleep level, systems draw 2 per cent of power instead of the 50 per cent drawn in idle mode. And whilst there is a time penalty in bringing systems out of the sleep state, the vast majority of applications do not require an immediate response in the region of microseconds. On the upside, every joule of energy saved by putting a system to sleep saves 3 joules in terms of inputs to the data centre. There is thus a need to address software energy efficiency by taking a closer look at how coding is done and how compilers provide options for energy conservation, he said.
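The figures above translate directly into energy terms. Below is an illustrative Python sketch using the percentages Ansett quoted and the 1:3 server-to-facility ratio; the server wattage and time window are hypothetical numbers chosen for the example.

```python
# Illustrative comparison of idle vs sleep, using the figures quoted:
# idle draws ~50% of peak, sleep ~2%, and each joule saved at the
# server saves ~3 joules of input energy to the data centre.

PEAK_W = 300.0             # hypothetical server peak draw
IDLE_FRACTION = 0.50       # idle mode still draws ~50% of peak
SLEEP_FRACTION = 0.02      # even the least efficient sleep state draws ~2%
FACILITY_MULTIPLIER = 3.0  # joules saved at facility per joule saved at server

hours = 8.0  # e.g. an overnight window with no workload
saved_server_wh = PEAK_W * (IDLE_FRACTION - SLEEP_FRACTION) * hours
saved_facility_wh = saved_server_wh * FACILITY_MULTIPLIER

print(f"Server-level saving:   {saved_server_wh:.0f} Wh")    # 1152 Wh
print(f"Facility-level saving: {saved_facility_wh:.0f} Wh")  # 3456 Wh
```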

Energy-aware workload power management

Another area that is going to be very important as data centres look to reduce energy consumption is energy-aware workload power management. This involves migrating applications within virtualised environments to make better use of server resources. Unused servers can then be put into sleep mode or even powered off. “The idea is that when you leave your house in the morning, you turn the lights off and shut the door,” said Ansett.
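As a minimal sketch of the consolidation idea, the snippet below packs virtual machine loads onto as few hosts as possible using first-fit decreasing, a common bin-packing heuristic. The roadmap does not prescribe a specific algorithm, and all numbers here are hypothetical; hosts left empty after packing are the ones that could be put to sleep or powered off.

```python
# Minimal sketch of energy-aware consolidation: pack VM loads onto as
# few hosts as possible (first-fit decreasing), then sleep empty hosts.
# Real schedulers must also weigh performance and reliability trade-offs.

def consolidate(vm_loads, host_capacity):
    """Return a list of hosts, each a list of VM loads assigned to it."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no existing host fits: wake another
    return hosts

vms = [0.4, 0.3, 0.2, 0.2, 0.1, 0.1]  # CPU demand as a fraction of one host
active = consolidate(vms, host_capacity=0.8)
print(f"{len(active)} hosts active; the rest can sleep")  # 2 hosts active
```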

Developments in energy-aware workload management will also pave the way for the evolution of data centre infrastructure management (DCIM), a management platform for data centre reporting. The technology is in its infancy and is still, for the most part, a management reporting and planning tool, said Ansett. However, with energy-aware workload allocation and dynamic provisioning, not just of IT resources but also of cooling, DCIM could evolve to automate and actively control data centre power, cooling and IT systems.

There are, however, several challenges related to energy-aware workload management besides trade-offs in performance and reliability. Yong Khai Leong, division manager of Data Centre Technologies at A*STAR’s Data Storage Institute, highlighted the irony that the same virtualisation capability which makes energy-aware workload management possible also makes it challenging to implement.

With many different virtual machines sitting on a single physical machine, it is much more difficult to create a situation where machines are able to idle, he pointed out. Opportunities to idle also arise from over-provisioning, but with virtualisation paving the way for dynamic provisioning of IT resources and architectures that can scale on demand, there will be less excess capacity to permit idling.

Air and cooling management

In the facilities space, air and cooling management is “the single largest near and medium term opportunity to improve data centre energy efficiency” due to the continued popularity of air as a cooling medium, said the green data centre report.

For example, dynamic cooling using artificial intelligence-based control algorithms can optimise real-time cooling across the data centre based on server inlet temperature. The algorithms determine how each cooling unit affects inlet temperatures across the data centre, and the system then modulates individual variable speed drives at each cooling unit. As facility or IT changes occur, the control algorithm adjusts the cooling system.
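A highly simplified sketch of such a control loop is shown below. Real implementations learn how each cooling unit influences each server inlet; this illustration substitutes plain proportional control, and the setpoint, gain and temperature readings are hypothetical.

```python
# Simplified sketch of the control loop described above: read server
# inlet temperatures, compare against a setpoint, and modulate the
# variable speed drive (VSD) at one cooling unit. Real systems use
# learned models of each unit's influence; this uses plain
# proportional control with hypothetical numbers.

SETPOINT_C = 25.0  # target server inlet temperature
GAIN = 0.05        # fan-speed change per degree of error (illustrative)

def adjust_vsd(current_speed: float, inlet_temps_c: list[float]) -> float:
    """Return a new fan speed (0..1) for one cooling unit."""
    error = max(inlet_temps_c) - SETPOINT_C  # react to the hottest inlet
    return min(1.0, max(0.2, current_speed + GAIN * error))

speed = 0.6
for temps in ([26.1, 24.8, 25.5], [25.2, 24.9, 24.7]):  # two sensor sweeps
    speed = adjust_vsd(speed, temps)
    print(f"fan speed -> {speed:.2f}")
```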

The advantage of air and cooling management is that it can be introduced relatively easily into existing data centres, and energy savings could be quite substantial, said Ansett. He estimated that dynamic cooling using internal passive cooling (for example, the use of heat pipes to reduce hotspots on chips or servers) can reduce server power consumption by up to 30 per cent and data centre cooling demand by up to 40 per cent, significantly reducing data centre operating costs.

Close-coupled refrigerant cooling and liquid cooling

Two other areas related to cooling are close-coupled refrigerant cooling and liquid cooling. “Any form of liquid cooling presents a significant opportunity to improve energy consumption in the data centre by addressing cooling issues at source or close to the heat load,” said Ansett.

With liquid cooling, the coolant passes over all surfaces in the IT equipment (including hermetically sealed hard disk drives) and transfers the heat to an external heat exchanger.

He noted that water-based cooling systems have been around for a long time, but they have not gained real traction because of the risks of interaction with electrical components. Going forward, the use of inert fluids will deliver the advantage of being non-conductive while significantly reducing energy consumption.

According to Peter Gross, vice-president of mission critical systems at Bloom Energy, there are at least 15 different Bitcoin data centres around the world that are using liquid or immersion cooling.

More specifically, direct-to-chip liquid cooling will help address glaring inefficiencies in data centre cooling today, given that 80 per cent of the heat generated by IT systems comes from the processor. “It’s like heating a cup of coffee by heating up an entire room,” he said. “For cooling a little chip on the server, you cool down the entire facility. Direct-to-chip cooling will thus have a great impact in reducing the energy required to cool IT systems.”

Free cooling (hardening) in Singapore

Free cooling (hardening) is about improving the robustness of IT equipment so that the data centre is able to exploit free cooling techniques, for example, using outside air.

Ashrae (American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommends a maximum of 27 deg C and 60 per cent humidity for data centre operations.

Ansett pointed out that from a climatic point of view, Singapore is rarely in the zone where the use of outside air is feasible. “Maybe for two days a year we may be statistically in this zone, so it is worth forgetting about it.”

At a more fundamental level, there is a need to examine whether the Ashrae guidelines reflect the true operating capabilities of IT systems today.

Lim Wei Wah, director of Asia IT Site Operations with Microsoft Operations, noted that hardware vendors such as Dell and HP allow their systems to operate at up to 35 deg C. The Microsoft facility has demonstrated that it is possible to operate at 29.4 deg C while keeping MTBF (mean time between failure) intact, and he believed “we have not pushed to the limit yet”.

There is a lot of anecdotal evidence to suggest that systems can operate at 35 deg C, said Ansett. However, more research needs to be done to convince data centre operators here that this is possible. “We need to know what real impact there is on MTBF for IT equipment if they are to operate at higher temperatures. This will have a huge impact on data centre energy consumption in Singapore.”

By identifying key areas of research and highlighting the technologies that are likely to have an impact on data centre energy consumption, the Green Data Centre Technology Roadmap aims to encourage the research community and industry leaders to come together and develop some of these technologies in the Singapore context. Proofs-of-concept will have to be carried out to demonstrate what these technologies can do and to convince people that they can be applied safely in Singapore. “We will have to deal with the knowns, the technologies that we know about right now, and figure out what are the best bets in terms of their impact, cost to implement and carbon emission,” said Ansett.