Up next: Software-defined diesel generators?

17 September 2015
Sanjay Sainani of Huawei

While industry pundits extol the virtues of the software-defined data centre, the fact remains that the management layers in a data centre are still separate - data centre infrastructure management (DCIM) at the facility level, network management software at the network level, and the cloud operating system handling service orchestration and auto-provisioning.

“Is there a way we can connect these three? When the cloud OS is doing orchestration or auto-provisioning, can that auto-provisioning command be understood and extended to which rack should be loaded up, because loading that rack will increase the efficiency of that data centre?”

This is exactly what Huawei is working towards, said Sanjay Sainani, vice president, Data Centre Solutions, in his presentation on the 4.0 Revolution in Data Centre Design at Data Centre World 2015.

Highlighting some of the challenges confronting the data centre industry, he recounted how WeChat took on 100 million users in its first 100 days of business, and was adding another 100 million every 100 days. “With that kind of a business model, how do you plan your compute resources, plan your networks? All these sit in a data centre, so how do you plan your data centre?”

The data centre business is a very high capex business, with significant dollar expenditure required for buildings, utilities and cooling. It is also traditionally very inflexible. “If you built a Tier 2 but want it to be a Tier 3, it is not easy to get the power you need from the utility company,” he said.

And power is the gorilla in the room. Some big colocation facilities easily draw over 30MW of power. “That is a tremendous amount of power being consumed, irrespective of how efficient the data centre is,” said Sainani.

The data centre environment is also highly complex and highly inefficient in the way it is currently managed. He cited the example of a customer with 10,000 racks and 30,000 different devices, and a 40-member team - facility experts, electrical experts, a compute team, a network team and an application team - managing the environment 24x7.

The way forward, said Sainani, is the intelligent data centre, where all areas can communicate with each other. In such a data centre, all the devices are connected and big data analytics is applied to understand how the IT equipment will behave. The next step would be to push this intelligence down to the facilities level, spinning up power and cooling based on the requirements of the service level agreement. Certain applications are mission critical and have to run in a Tier 3 environment, for example, while other applications could run on Tier 2.
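To make the idea concrete, here is a minimal sketch in Python of how an SLA tier could be translated into facility-level settings by a DCIM layer. The tier names, redundancy modes and cooling setpoints are illustrative assumptions, not details of Huawei's implementation.

```python
# Minimal sketch: translate an application's SLA tier into facility settings.
# Tier names, redundancy modes and setpoints are illustrative assumptions,
# not values from Huawei's products.

from dataclasses import dataclass

@dataclass
class FacilityProfile:
    power_redundancy: str      # e.g. "N+1" power feeds for the hosting rack
    cooling_setpoint_c: float  # supply-air temperature target for the rack row
    generator_standby: bool    # keep a diesel generator in hot standby?

SLA_PROFILES = {
    "tier3_mission_critical": FacilityProfile("N+1", 22.0, True),
    "tier2_standard":         FacilityProfile("N", 26.0, False),
}

def provision(app_name: str, sla_tier: str) -> FacilityProfile:
    """Return the facility settings a DCIM layer could apply when the cloud OS
    provisions an application with the given SLA tier."""
    profile = SLA_PROFILES[sla_tier]
    print(f"{app_name}: redundancy={profile.power_redundancy}, "
          f"setpoint={profile.cooling_setpoint_c}C, "
          f"generator standby={profile.generator_standby}")
    return profile

provision("billing-db", "tier3_mission_critical")
provision("batch-analytics", "tier2_standard")
```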

“Currently, the auto-provisioning does not manage that at the facilities level, only at the IT and compute level. If we can link up the cloud operating system to the infrastructure operating system to the DCIM, then the question is - can you take it further down to the chiller and the diesel generator?”

And this is where open standards come into play, said Sainani. “Unless we have openness, we cannot leverage the capabilities that exist in different layers.”

At Huawei, open source management software is being adopted across the board, and its engineers are working to link everything together – the cloud orchestration layer, the network management layer and the DCIM layer, said Sainani. The aim is to build intelligence across the different layers to achieve more efficient data centre management.
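As a rough illustration of that linkage, the sketch below shows a provisioning event from the cloud OS being handed to a DCIM-style layer that picks the rack to load up and nudges the cooling for its row. The rack inventory and the functions are hypothetical, not Huawei interfaces.

```python
# Hypothetical sketch of the linkage: a cloud OS provisioning event is passed
# to a DCIM-style layer, which chooses the rack to load up and adjusts cooling.
# The inventory and functions below are illustrative, not Huawei APIs.

racks = [  # toy inventory: rack id, spare power, cooling efficiency score
    {"id": "A-01", "free_kw": 3.0, "efficiency": 0.92},
    {"id": "B-07", "free_kw": 6.5, "efficiency": 0.88},
]

def dcim_adjust_cooling(rack_id: str, added_kw: float) -> None:
    # Stand-in for a DCIM command that raises airflow or chilled-water supply
    # for the affected rack row.
    print(f"DCIM: +{added_kw} kW on {rack_id}, increasing cooling for its row")

def on_cloud_provisioning(service: str, power_kw: float) -> str:
    """Called when the cloud OS auto-provisions a service: pick the rack that
    should be loaded up, then push the change down to the facility layer."""
    candidates = [r for r in racks if r["free_kw"] >= power_kw]
    best = max(candidates, key=lambda r: r["efficiency"])  # most efficient spot
    best["free_kw"] -= power_kw
    dcim_adjust_cooling(best["id"], power_kw)
    return best["id"]

print(on_cloud_provisioning("web-frontend", 2.5), "selected")
```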

For example, in the area of energy consumption, heavy IT workloads could be moved to particular racks and more power and cooling targeted at those racks. According to Sainani, this was found to result in an 8-9 per cent improvement in energy consumption for a data centre with a power usage effectiveness (PUE) of about 1.6.
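As a back-of-envelope check on what that means for PUE: if the IT load itself stays constant and the 8-9 per cent saving comes out of total energy, a PUE of 1.6 would drop to roughly 1.46. (Holding the IT load constant is an assumption; the presentation did not spell out the accounting.)

```python
# Back-of-envelope: effect of an 8-9% cut in total energy on PUE, assuming
# the IT load is unchanged (an assumption, not stated in the presentation).
pue_before = 1.6
saving = 0.085                        # midpoint of the quoted 8-9 per cent
pue_after = pue_before * (1 - saving)
print(round(pue_after, 2))            # -> 1.46
```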

Taking this further to the world of logical data centres, where a single logical data centre could span multiple physical facilities, it would be possible to connect the management layers of all the facilities together and look at PUE across the entire ecosystem. This would be useful for customers who want to know how much energy a particular workload is consuming, not just how much energy a particular data centre is consuming.
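One simple way to do that accounting, sketched below, is to take the IT energy a workload draws at each facility and scale it by that facility's PUE; this is an illustrative method, not necessarily how Huawei attributes energy.

```python
# Illustrative workload-level energy accounting across a logical data centre
# that spans several physical facilities (a sketch, not Huawei's method).
def workload_energy_kwh(it_kwh_by_site: dict, pue_by_site: dict) -> float:
    """Energy attributed to a workload: its measured IT energy at each site,
    scaled by that site's PUE to include power and cooling overhead."""
    return sum(kwh * pue_by_site[site] for site, kwh in it_kwh_by_site.items())

print(workload_energy_kwh({"site_a": 1200.0, "site_b": 800.0},
                          {"site_a": 1.5, "site_b": 1.3}))  # -> 2840.0 kWh
```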

Sainani estimated that the payback for creating these linkages is less than three years. In a 5MW data centre, for example, the energy savings could add up to about $750,000 a year, he said.
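The figure is consistent with a rough estimate: taking 5MW as the facility's total draw, the 8-9 per cent saving quoted above and an electricity price of around US$0.20 per kWh gives roughly $750,000 a year. Both the price and the reading of 5MW as total draw are assumptions, not details from the talk.

```python
# Rough reconstruction of the quoted annual saving for a 5MW data centre.
# Treating 5MW as total facility draw and US$0.20/kWh are assumptions.
facility_kw = 5000
hours_per_year = 8760
saving_fraction = 0.085              # midpoint of the quoted 8-9 per cent
price_per_kwh = 0.20                 # assumed electricity price, US$/kWh

kwh_saved = facility_kw * hours_per_year * saving_fraction
print(round(kwh_saved * price_per_kwh))  # -> ~744600, i.e. about $750,000 a year
```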

Huawei is also starting to link more devices on the cooling side, such as chiller pumps, as well as generators, for reliability gains.

“These linkages at the zero level are happening through DCIM, commanded from the cloud OS. That is where the commands are driven from,” said Sainani.