Delivering visibility "from chip to chiller"

24 July 2014
Patrick Flynn of IO

Data centre company IO adds about 145 million rows of data to its 7TB database each day, and it is making use of this information to provide customers with “chip to chiller” visibility and build data models to get as close to capturing end-to-end performance as possible.  

Founded in 2007, IO operates four data centre facilities in the United States and one in Singapore, with a sixth slated to open in the United Kingdom later this year. While the company still operates traditional data centres, it is making a strong push for what it describes as version 2.0 modular data centre technology. This comprises standardised, modular, factory-produced units which include power and data modules (IO.Anywhere), as well as an intelligence control platform in the form of its data centre operating system (IO.OS). 

IO.OS is DCIM (data centre infrastructure management) plus the ability to optimise and control infrastructure, explained Patrick Flynn, group leader of Applied Intelligence & Sustainability at IO. The OS works with the standardised data centre modules to collect real-time operating data from a vast array of sources such as communication protocols, IT systems, sensors and building management systems.

A year ago, IO set up its Applied Intelligence Group with one overarching purpose: to find and extract value from this data using techniques ranging from data mining and modelling to machine learning and simulation.

Using the data, IO provides customers with an analytics portal that gives them a single-pane-of-glass view of their data centre environment across the globe. For example, an energy dashboard tells customers how much energy they have consumed down to the individual data centre module, what carbon emissions they have generated, and what potential savings are to be had through more energy-efficient practices.

The dashboard is customer-specific and built on real-time PUE calculations, said Flynn. It also allows customers to drill down into the data if the analytics throws up unexpected results and they want to find out more.
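The article does not describe how IO computes its figures, but PUE (power usage effectiveness) is a standard industry metric: total facility energy drawn from the grid divided by the energy delivered to IT equipment. A minimal sketch, with hypothetical readings:

```python
def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy.
    A value of 1.0 would mean every watt reaches the IT equipment;
    the excess goes to cooling, power distribution losses, and so on."""
    if it_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_kwh

# Hypothetical hourly readings for one data centre module
print(pue(total_facility_kwh=540.0, it_kwh=400.0))  # → 1.35
```

A "real-time" dashboard would simply re-run this ratio as each new pair of meter readings arrives, rather than averaging over a month or a year.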

“Customers can go from chip to chiller in a matter of clicks,” said Flynn.  For example, the dashboard measures power down to the branch circuit level and captures not only IT energy consumption but also input energy consumption, providing customers with the hooks to be smarter in their data centre usage.

IO is also working towards building a set of metrics that can get as close to capturing end-to-end performance as possible, said Flynn. “Standard metrics such as PUE and CUE only capture one part of the picture,” he explained. “The ‘supply chain’ is what we need to optimise.”

To demonstrate this, the Applied Intelligence team has built a model that provides data on dollars per VM-hour on the cloud. “This includes PUE, but it is much more meaningful because it gets us closer to knowing what it truly costs on one side, in order to get value on the other, so customers can take this information and understand what each VM-hour yields in terms of profit.”
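IO does not publish the model itself, but the shape of such a cost metric can be sketched: energy drawn from the grid (IT load scaled up by PUE) priced at the utility rate, plus fixed costs, spread over the VM-hours served. All figures below are hypothetical:

```python
def dollars_per_vm_hour(it_kwh, pue, utility_rate, fixed_costs, vm_hours):
    """All-in cost per VM-hour for a billing period.
    Energy cost = IT energy × PUE × utility rate (grid draw priced in),
    then fixed costs are added and the total is spread over VM-hours."""
    energy_cost = it_kwh * pue * utility_rate
    return (energy_cost + fixed_costs) / vm_hours

# Hypothetical month: 400 IT kWh at PUE 1.35, $0.10/kWh,
# $946 of fixed costs, 10,000 VM-hours served
print(dollars_per_vm_hour(400.0, 1.35, 0.10, 946.0, 10_000))
```

A customer could then compare this cost per VM-hour against the revenue each VM-hour generates, which is the profit comparison Flynn describes.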

IO is also working with McLaren Applied Technologies to build a model that uses machine-learning algorithms to project energy usage for customers. The challenge in doing this is that IO does not have direct visibility into what its customers do with their data centre modules. “We only have indirect indicators, for example, the customers’ power signature, but no real insight into what they are doing and why, so we need to build tools that can do this.”
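The McLaren model's internals are not disclosed, and "machine-learning algorithms" will be far more sophisticated than this, but the basic idea of projecting usage from a power signature can be illustrated with the simplest possible stand-in: a least-squares linear trend fitted to recent readings.

```python
def project_next(readings):
    """Fit a straight line to a sequence of hourly kWh readings and
    project the next hour. Closed-form least-squares slope/intercept."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # extrapolate one step ahead

# A hypothetical rising power signature, kWh per hour
print(project_next([100, 102, 104, 106]))  # → 108.0
```

A production model would also account for daily and seasonal cycles and for the indirect indicators Flynn mentions, since the power signature alone says little about what the customer is actually running.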

The insights from these initiatives dovetail with the second area of value that IO’s applied intelligence initiatives are building towards: more streamlined operations.

They enable IO, its clients and end-customers to understand what “normal” is and put in place measures to be more efficient. “For example, as we benchmark energy efficiency, if we detect outliers on the ‘good’ side, we look at how we can replicate them, and for those that are not so good, we look at how we can fix them. This helps us to build better models for energy efficiency in the data centre,” said Flynn.
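The outlier detection Flynn describes can be sketched with a simple z-score test over a fleet of modules. This is an illustrative assumption, not IO's actual method; the PUE values and threshold below are made up.

```python
from statistics import mean, stdev

def efficiency_outliers(pues, threshold=1.5):
    """Flag modules whose PUE sits more than `threshold` standard
    deviations from the fleet average, on either side."""
    mu, sigma = mean(pues), stdev(pues)
    good, bad = [], []
    for i, p in enumerate(pues):
        z = (p - mu) / sigma
        if z <= -threshold:
            good.append(i)   # unusually efficient: study and replicate
        elif z >= threshold:
            bad.append(i)    # unusually inefficient: investigate and fix
    return good, bad

# Eight typical modules plus one very efficient and one poor performer
g, b = efficiency_outliers([1.35] * 8 + [1.20, 1.60])
print(g, b)  # → [8] [9]
```

This matches the two-sided benchmarking in the quote: outliers on the "good" side become templates to replicate, while the rest become candidates for remediation.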

Based on the analysis, customers can make decisions on, say, air flow management, server density or re-allocation of IT workloads, or change set points for the operating temperature. In one example, IO found that a cloud module in its Phoenix data centre was not operating as efficiently as expected. It turned out that the air flow containment was not as good as it could be, and so the customer was advised to put in blanking panels or containment devices on racks that were not occupied by IT in order to improve energy performance.

This, however, also highlighted one of the challenges that IO faces in its energy efficiency drive – the company itself is typically not in direct control of driving these gains. “Our customers use technology in different ways, so we tend to get performance that is all over the place.”

In an attempt to tackle this, IO is adopting an information-plus-incentives approach. The information would be in the form of the energy reports, energy efficiency benchmarks, cost analysis, carbon analysis and the like. And the incentives would be in the form of real-time billing to drive customers to change their usage behaviour. “We give them real-time PUE, real-time utility rates, real-time energy consumption and real-time costs, and show them how this could change if they were more efficient in their operations.”
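The incentive side of this approach amounts to showing a customer, in dollars, what a better PUE would be worth on their own bill. A minimal sketch, with hypothetical figures; IO's actual billing model is not described in the article:

```python
def energy_bill(it_kwh, pue, rate):
    """Energy charge for a period: grid draw (IT load × PUE) × utility rate."""
    return it_kwh * pue * rate

def potential_saving(it_kwh, current_pue, target_pue, rate):
    """What the same IT load would save per period at a better PUE."""
    return (energy_bill(it_kwh, current_pue, rate)
            - energy_bill(it_kwh, target_pue, rate))

# Hypothetical month: 50,000 IT kWh at $0.08/kWh,
# comparing the current PUE of 1.6 against a target of 1.3
print(potential_saving(50_000, current_pue=1.6, target_pue=1.3, rate=0.08))
```

Fed with real-time PUE, utility rates and consumption, the same calculation becomes the live "what if" figure Flynn describes.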

A third area where data analytics is adding value to IO is product development. “Through the data, we see how people are using our data centre modules in the real world. We take the information and feed that back to the engineering team so that they can build a better product.”

Simulation models are being created to test the capabilities of IO’s data centre modules for conditions that would be difficult to replicate in the real world. This enables IO to understand the limits of a product safely, without bearing the economic consequences, said Flynn.

IO is also working towards building more streamlined hardware for the data centre. For example, it is developing the next generation of edge products, which will use intelligence to achieve the same level of uptime, instead of building two (or more) of everything all the time. “We are aiming for more smarts, less parts,” said Flynn.