Planning your move to in-memory computing

By Tan Lee Chew of HP

1 September 2015

In-memory computing is rapidly changing the way businesses process information. It stores data in main memory rather than on disk, which eliminates the CPU instructions and seek times spent on disk I/O when querying complex and often massive data sets. The result is much faster performance: what once took hours or days to process now takes minutes or seconds. Business leaders can make faster decisions based on real-time information, and IT can quickly deliver analytics and insights without impacting day-to-day operations. Database managers spend less time moving and managing data and more time analysing it.
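Why this matters is easy to see with a minimal Python sketch (illustrative only; the file name, record layout and data volume are invented for the example). A query served from disk pays I/O costs on every call, while the same lookup against an in-memory structure touches no disk at all:

```python
import os
import time

# Illustrative only: contrast a disk-backed lookup with an in-memory one.
RECORDS = 100_000
PATH = "records.txt"  # hypothetical sample file

# Write some sample records to disk.
with open(PATH, "w") as f:
    for i in range(RECORDS):
        f.write(f"{i},value-{i}\n")

def disk_lookup(key):
    """Scan the file on disk for a key (incurs I/O on every query)."""
    with open(PATH) as f:
        for line in f:
            k, v = line.rstrip("\n").split(",", 1)
            if int(k) == key:
                return v
    return None

# Load everything into memory once; queries then avoid disk entirely.
table = {}
with open(PATH) as f:
    for line in f:
        k, v = line.rstrip("\n").split(",", 1)
        table[int(k)] = v

start = time.perf_counter()
disk_lookup(RECORDS - 1)
print(f"disk scan:      {time.perf_counter() - start:.4f}s")

start = time.perf_counter()
table.get(RECORDS - 1)
print(f"in-memory read: {time.perf_counter() - start:.6f}s")

os.remove(PATH)
```

Real in-memory platforms go far beyond a simple lookup table, of course, with columnar storage and compression, but the underlying performance argument is the same.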

SAP HANA, for example, takes advantage of in-memory computing technology to enable real-time data access. It also provides a common platform across the entire SAP landscape within an enterprise to simplify how data is stored, managed and analysed.

More likely than not, however, your environment is crammed with legacy databases that are complicated and disconnected, and it takes more than in-memory data management software to achieve your business objectives.

It is critical that your infrastructure matches the in-memory data management software you select. That is why more IT executives and managers are making long-term, strategic architectural bets for their data centres and data management platforms. You need the right strategy, delivery model and partnerships to minimise risk and disruption to data centre operations during deployment, and to scale as your data grows while maintaining high performance.

Let us look at some of the requirements that should be part of your strategic migration plan.

Levels of service

As you move to more advanced workloads, you have more demanding mission-critical needs that require the capabilities found on high-end UNIX systems: fault-tolerant architecture, very high I/O bandwidth, self-healing analytics and automation. Make sure these capabilities are built into your system to keep downtime to a minimum and disaster recovery as fast as possible. Hard partitioning technology, for example, significantly increases system reliability and agility. Each hard partition has its own independent CPUs, memory and I/O resources as part of the blades that make up the partition, and the partitions are tied together through a fault-tolerant crossbar, so a fault in one part of the system has no impact on the rest. In terms of agility, these partitions enable you to run CRM, ERP and BW solutions on a single system, a significant step forward in realising the "real-time" enterprise.
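The fault-isolation idea can be illustrated with a small Python sketch. This is a toy model under invented resource figures, not a representation of any vendor's actual partitioning implementation:

```python
from dataclasses import dataclass, field

# Toy model of hard partitioning: each partition owns its own resources,
# so a fault in one partition leaves the others untouched.

@dataclass
class HardPartition:
    name: str          # the workload it hosts, e.g. "CRM", "ERP", "BW"
    cpus: int          # dedicated CPU cores (figures are hypothetical)
    memory_gb: int     # dedicated memory
    healthy: bool = True

@dataclass
class System:
    partitions: list[HardPartition] = field(default_factory=list)

    def fail(self, name: str) -> None:
        """Mark one partition as faulted; no other partition is affected."""
        for p in self.partitions:
            if p.name == name:
                p.healthy = False

    def status(self) -> dict[str, bool]:
        return {p.name: p.healthy for p in self.partitions}

system = System([
    HardPartition("CRM", cpus=16, memory_gb=512),
    HardPartition("ERP", cpus=32, memory_gb=1024),
    HardPartition("BW",  cpus=16, memory_gb=512),
])

system.fail("CRM")
print(system.status())  # {'CRM': False, 'ERP': True, 'BW': True}
```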

Even with mission-critical capabilities built in, it is wise to augment your system with end-to-end services that enable speedy deployment, smooth migration, and business continuity. Data management requires 24/7 support so you can proactively prevent issues, maximise system performance, and accelerate problem resolution.

Performance and scale

To get the most from your in-memory data management environment, you need application-optimised systems that are purpose-built for the high performance and high availability that in-memory computing demands. These systems should be designed to scale easily, protecting your current investment while expanding as needed into the future. Industry-standard systems scale to around 6 TB, but if you intend to run SAP HANA, for instance, for your largest business application environments, a ceiling of 4 or 6 TB of in-memory capacity does not address your needs: it is not uncommon for large systems to run on databases of 20 or 30 TB. Make sure you have adequate scalability to handle data warehouse environments as you move along your big data journey.
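To make the sizing arithmetic concrete, here is a back-of-the-envelope sketch. The compression ratio, working-space headroom and per-node memory figures below are assumptions for illustration, not vendor benchmarks:

```python
# Back-of-the-envelope in-memory sizing. All figures are illustrative
# assumptions, not published specifications.

source_db_tb = 30          # raw size of the source database
compression_ratio = 4      # assumed columnar compression (hypothetical)
working_overhead = 2       # rule-of-thumb headroom for query working space
node_memory_tb = 6         # assumed memory ceiling per node

in_memory_tb = source_db_tb / compression_ratio * working_overhead
nodes_needed = -(-in_memory_tb // node_memory_tb)   # ceiling division

print(f"Estimated in-memory footprint: {in_memory_tb:.1f} TB")
print(f"Nodes at {node_memory_tb} TB each: {int(nodes_needed)}")
```

Even with aggressive compression, a 30 TB source database can outgrow a single 6 TB node once query working space is accounted for, which is why headroom belongs in the plan from day one.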

And remember, not all memory is created equal. Simply cramming memory into the hardware can leave you with capacity you cannot use effectively. Make sure systems use high-end processors that can address and drive larger RAM configurations, so the increased memory is productively used and performance stays high.

So don't postpone the move to an in-memory data management platform, but do invest in upfront planning. Think strategically about performance, scale, levels of service and mission-critical capabilities, and about how to get the most out of your in-memory computing investment.

  • Tan Lee Chew is vice president and general manager, HP Servers, Enterprise Group, Hewlett-Packard Asia Pacific and Japan.