Enterprise use of cloud computing is still growing rapidly, and this greater dependency means businesses must rethink the physical infrastructure equipment (power, cooling, networking) that remains on-premise, at the ‘edge’.

“Large or extra-large cloud data centres now house many of the critical applications for enterprise businesses that once resided in their on-premise data centres. However, not all applications have shifted to the cloud, for various reasons including regulations, company culture, proprietary applications and latency, just to name a few. As a result, businesses are left with what we refer to as a ‘hybrid data centre environment’: a mix of centralised cloud data centres, regional medium-to-large data centres, and smaller on-premise data centres.”
– Derek Friend, Regional Executive, Head, Telco, C & SP at Schneider Electric

What once might have been a 1MW data centre at an enterprise branch location may now consist of a couple of racks of IT equipment running critical applications and providing network connectivity to the cloud. However, the smaller footprint and capacity of the on-premise data centre should not be equated with lower criticality; in many cases, what has been left on-premise becomes more important.

With more applications living in the cloud, connectivity to the cloud is crucial for business operations to continue. There is also a growing culture of millennial employees who demand ‘always on’ technology and will not tolerate downtime disruption.

Unfortunately, most edge data centres today suffer from poor design practices, with little thought given to redundancy or availability, leading to costly downtime.

THE COMMON PROBLEMS INCLUDE:

  • Lack of security – Rooms are often unsecured; racks are often open (no doors)
  • Unorganised racks – Cable management is an afterthought, causing cable clutter, obstructed airflow within the racks and increased human error during adds/moves/changes. (See Figure 1)
  • No redundancy – Power systems (UPS, distribution) are often 1N, which reduces availability and the ability to keep systems running during maintenance (see the availability sketch after this list).
  • No dedicated cooling – These small rooms and closets often rely on the building’s comfort cooling, which can lead to overheated equipment.
  • No data centre infrastructure management (DCIM) monitoring – These rooms are often left unmanaged, with no dedicated staff or software to manage the assets and help avoid downtime.
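To see why 1N power is a weak link, a back-of-the-envelope availability calculation helps. The Python sketch below compares a single UPS against a fully redundant 2N pair; the 99.9% per-unit availability is an assumed, illustrative figure (not a Schneider Electric specification), and the 2N formula assumes the two units fail independently.

```python
# Compare expected downtime for 1N (single UPS) vs. 2N (redundant pair).
# Assumed per-unit availability of 99.9%; 2N assumes independent failures.
ups_availability = 0.999

a_1n = ups_availability                  # 1N: down whenever the one unit fails
a_2n = 1 - (1 - ups_availability) ** 2   # 2N: down only if both units fail

HOURS_PER_YEAR = 8760
print(f"1N expected downtime: {(1 - a_1n) * HOURS_PER_YEAR:.2f} h/year")
print(f"2N expected downtime: {(1 - a_2n) * HOURS_PER_YEAR:.4f} h/year")
```

Under these assumptions, 2N cuts expected downtime from roughly 8.8 hours a year to about half a minute, and it also lets one unit be taken offline for maintenance without dropping the load.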

“This suggests that a change in how we design these small on-premise data centres is needed. We can no longer focus only on central and regional data centres; more attention should go to the localised sites, because they are currently the weakest links. The typical design practices at the edge are inadequate given the mission-critical nature of these sites,” says Friend.

IMPROVEMENTS SHOULD FOCUS ON:

  • Physical security
  • Monitoring (DCIM), operational practices and remote monitoring (see the alerting sketch after this list)
  • Redundant power and cooling
  • Dual network connectivity
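As a concrete illustration of the monitoring point above, the short sketch below shows the kind of threshold alerting a DCIM or remote-monitoring tool automates for an unmanned edge room. The sensor read is simulated and the temperature limit is an assumed figure; in a real deployment the reading would come from SNMP, Modbus or a vendor API, and the alert would be routed to email, SMS or a network operations centre.

```python
# Minimal sketch of rack-inlet temperature alerting at an unmanned edge site.
# The sensor read is simulated; replace it with a real SNMP/Modbus/API query.
import random
import time

TEMP_LIMIT_C = 30.0  # assumed safe rack inlet temperature, for illustration

def read_inlet_temp_c() -> float:
    # Simulated reading; a real site would poll a rack or room sensor here.
    return random.uniform(22.0, 34.0)

def send_alert(message: str) -> None:
    # Stand-in for email/SMS/NOC integration.
    print("ALERT:", message)

def poll(cycles: int = 5, interval_s: float = 1.0) -> None:
    for _ in range(cycles):
        temp = read_inlet_temp_c()
        if temp > TEMP_LIMIT_C:
            send_alert(f"Inlet temperature {temp:.1f} °C exceeds the "
                       f"{TEMP_LIMIT_C:.1f} °C limit")
        time.sleep(interval_s)

if __name__ == "__main__":
    poll()
```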

“The Schneider Electric prefabricated Micro Data Center Xpress is a simple way to ensure a secure, highly available environment at the edge. Factory-built and tested, these units offer the quickest deployment, the shortest possible lead times and a wide range of configuration options. They are self-contained, secure computing environments, a single IT rack in size or smaller, that help reduce latency.

“Best practices such as redundant uninterruptible power supplies (UPS), a secure, organised rack, proper cable management and airflow practices, remote monitoring, and dual network connectivity ensure that on-premise sites can achieve the operational success they require,” concludes Friend.