Out-compute to out-compete

by Jeffrey Davis of IO

4 December 2014

High performance computing (HPC), sometimes referred to as “supercomputing”, is the use of aggregated computing power (for example, computing servers and clusters) to solve problems that are computationally intensive or data intensive. And it’s on the rise. In one worldwide IDC study, 97 per cent of companies that had adopted high performance computing said they could no longer compete or survive without it.

The HPC market is growing fast because more companies need it (to leverage Big Data, for example), and they recognise that HPC capabilities can mean competitive advantage.

One of the first and heaviest users of high performance computing is the financial services industry. Beyond its use in high-frequency trading applications, HPC is used by financial services firms for risk modelling, fraud detection, bitcoin mining, derivatives trading, pricing, regulatory compliance, and Big Data applications such as customer profiling, among other uses.

Beyond financial services, HPC is also leveraged for scientific and medical research applications such as computational fluid dynamics and genomic sequencing.

HPC applications can be run in high-density data centre environments that support server racks consuming 25kW of power (or more). The benefits of doing this, rather than spreading the same workload across large numbers of low-density servers in a larger data centre space, include savings in real estate, greater efficiency in cooling, significant capital and operational cost savings, and increased sustainability, as the sketch below illustrates. A high density environment architected specifically for HPC can be a high-security, high-efficiency alternative to “tons of pizza-box servers”.
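As a rough illustration of the real estate point, consider the following sketch. Every number in it is an assumption chosen for the example (a 1MW IT load, 25kW versus 5kW racks, and about 2.5 square metres per rack including aisle space), not a figure from this article.

    # Rack count and floor space needed for the same IT load at two densities.
    # All figures here are illustrative assumptions, not data from the article.
    IT_LOAD_KW = 1000        # total IT load to deploy (assumed)
    AREA_PER_RACK_M2 = 2.5   # footprint per rack including aisle space (assumed)

    for label, kw_per_rack in (("high-density", 25), ("low-density", 5)):
        racks = IT_LOAD_KW / kw_per_rack
        print(f"{label}: {racks:.0f} racks, roughly {racks * AREA_PER_RACK_M2:.0f} m2")

    # high-density: 40 racks, roughly 100 m2
    # low-density: 200 racks, roughly 500 m2

A fivefold jump in rack density translates directly into a fivefold reduction in racks and floor space, which is where the real estate savings come from.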

However, most legacy data centre environments can’t adequately support HPC: they were simply not designed to accommodate higher densities, and retrofitting them to run HPC applications can be extremely expensive, if feasible at all.

The answer is a prefabricated modular data centre built to suit high densities.

Consider the example of a financial services firm that needs to add more compute capacity in 30 days or less and push rack density to 25-30kW. Legacy data centre providers will probably not be able to support such high densities, let alone meet the demand in less than a month. With a modular data centre colocation environment, however, the firm can reach its target rack density within the 30-day window.

Another significant advantage of a modular data centre built to suit high densities is energy efficiency and the resulting cost savings. PUE (power usage effectiveness) is the ratio of total facility energy to the energy delivered to IT equipment, so a lower figure means less overhead spent on cooling and power distribution. In a year-long, side-by-side comparison of a traditional raised-floor data centre and a modular data centre, the modular data centre was found to have a PUE of 1.41, significantly better than the 1.73 PUE of the traditional facility. For a 1MW facility, that PUE difference yields a 19 per cent reduction in energy costs.
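To see where the 19 per cent comes from, here is a minimal sketch. The 1MW IT load and the two PUE figures are taken from the comparison above; the electricity price and the helper name annual_energy_cost are illustrative assumptions.

    # For a fixed IT load, energy cost scales directly with PUE, since
    # PUE = total facility energy / IT equipment energy.
    IT_LOAD_KW = 1000        # 1 MW IT load, the figure cited above
    PRICE_PER_KWH = 0.10     # assumed electricity price in USD per kWh
    HOURS_PER_YEAR = 8760

    def annual_energy_cost(pue):
        # Total facility energy for the year, priced per kWh.
        return IT_LOAD_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

    traditional = annual_energy_cost(1.73)   # raised-floor facility
    modular = annual_energy_cost(1.41)       # modular facility

    print(f"traditional: ${traditional:,.0f} per year")
    print(f"modular:     ${modular:,.0f} per year")
    print(f"saving:      {(traditional - modular) / traditional:.1%}")
    # Prints a saving of 18.5%, which rounds to the 19 per cent cited above.

Note that the percentage saving is independent of the assumed electricity price and load, because it reduces to the ratio (1.73 - 1.41) / 1.73.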

In an environment in which HPC capabilities are a matter of competitive advantage, the capacity of the data centre to support such applications is no longer “just” an IT issue. Where space is constrained, or retrofitting the legacy data centre is cost-prohibitive, a prefabricated modular data centre built to suit high-density server racks can be just the environment the firm needs to out-compute and out-compete.

• Jeffrey Davis is senior vice president of market development at IO.