Quality of service, energy consumption, operating costs… optimizing IT room layouts is a truly multidisciplinary art, especially when it comes to densification and our exponentially growing needs. To achieve the best possible returns in the long run, it is also a question of anticipating our future needs, both technological and business.
Data center restructuring: the art of covering every base
In a very short space of time, data centers have seen exceptional densification thanks to increasingly efficient IT infrastructure. Today, just six racks can provide the same capacity as an entire 400 m² IT room did ten years ago! At the same time, IDC forecasts that the 'global datasphere' will reach 163 zettabytes (163 trillion gigabytes) by 2025, ten times the 16.1 ZB recorded in 2016.
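As a rough illustration, the annual growth rate implied by IDC's forecast can be checked in a few lines of Python. The figures come from the paragraph above; the formula is the standard compound annual growth rate (CAGR) definition:

```python
# Implied compound annual growth rate (CAGR) of the global datasphere,
# using the IDC figures quoted above: 16.1 ZB in 2016 -> 163 ZB forecast for 2025.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Standard CAGR formula: (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(16.1, 163.0, 2025 - 2016)
print(f"Implied annual growth: {growth:.1%}")  # roughly 29% per year
```

In other words, a tenfold increase over nine years means storage demand compounding at close to 30% every single year, which is the pressure data center managers are working under.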
In order to deal with such rapidly growing storage needs, data center managers have to come up with new approaches to continuously optimize IT rooms and their associated performance. This requires expertise on all the elements that contribute to the proper running of a data center: information systems architecture (the different types of hosted servers, network cabling, etc.), data center infrastructure (capacity, electricity supply, cooling systems, high-density cabling) and the buildings themselves (space, safety, security…). Finely modeling these elements, including thermodynamic simulation, makes it possible to optimize a restructuring before construction work begins on a new data center or before an existing site is transformed.
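The kind of capacity question such modeling answers can be sketched very simply. The check below, with entirely illustrative names and figures (this is not APL's actual model), verifies that a planned rack layout stays within a room's electrical and cooling envelopes:

```python
# Hypothetical sketch of a pre-restructuring capacity check: does the
# planned rack layout stay within the room's power and cooling envelopes?
# All names and figures are illustrative.

from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    it_load_kw: float  # electrical load of the IT equipment in the rack

@dataclass
class Room:
    power_capacity_kw: float    # usable electrical capacity
    cooling_capacity_kw: float  # heat the cooling system can remove

def fits(room: Room, racks: list) -> bool:
    """True if the total IT load stays within both power and cooling limits."""
    total = sum(r.it_load_kw for r in racks)
    # Nearly all IT power ends up as heat, so the same total is
    # checked against the cooling capacity as well.
    return total <= room.power_capacity_kw and total <= room.cooling_capacity_kw

room = Room(power_capacity_kw=120.0, cooling_capacity_kw=110.0)
racks = [Rack(f"rack-{i}", it_load_kw=15.0) for i in range(6)]  # 6 dense racks
print(fits(room, racks))  # 90 kW total: within both envelopes
```

A real study adds many more dimensions (airflow, floor loading, cable paths, redundancy), but the principle is the same: model first, build second.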
Rigorous data center operations
Considering restructuring before a new data center enters production is vital, but it is still not enough. Unless an IT room and all its assets are operated to their full potential, any yield will quickly deteriorate. This involves using a DCIM (Data Center Infrastructure Management) solution to monitor and manage all these resources, formalizing and applying documented procedures, and defining key performance indicators (KPIs) to be tracked during the build and run phases over the short, medium and long term (availability rates, capacity planning, energy needed to operate equipment, PUE, etc.). It should be noted that DCIM tools are constantly evolving, so continuous training is necessary to exploit their full potential.
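Of the KPIs listed above, PUE (Power Usage Effectiveness) is the most widely quoted: total facility energy divided by the energy consumed by the IT equipment alone. A minimal sketch, with illustrative meter readings:

```python
# PUE (Power Usage Effectiveness), one of the KPIs mentioned above:
# total facility energy divided by IT equipment energy. A PUE of 1.0
# would mean every kWh goes to IT; real sites sit above that.
# The meter readings below are illustrative.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: 1,500 kWh drawn by the whole facility, 1,000 kWh by IT equipment.
print(pue(1500.0, 1000.0))  # 1.5: a third of the energy is non-IT overhead
```

Tracked continuously through a DCIM tool rather than computed once, an indicator like this shows whether cooling and power-distribution overheads are drifting as the room densifies.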
Foresight: the cornerstone of a productive data center
Beyond the endogenous elements of the data center, a technology watch is essential: only by knowing exactly what the market offers is it possible to build and maintain state-of-the-art data centers in line with technological change. Immersion cooling, quantum computing, artificial intelligence, wireless indoor connectivity, edge computing and even hyperconverged hybrid architectures are all developments to be evaluated and integrated into a data center restructuring project in 2018, to optimize its performance, its lifespan and, consequently, its overall operating costs. Network connectivity, via fiber and 5G in particular, will play a crucial role in the coming years in increasing data rates and reducing latency.
However, despite a number of best practices, there is no miracle recipe for data center operation and restructuring. Customization is required, because every company and every business is unique; operating procedures and restructuring solutions are therefore specific to each site. With Moore's Law still holding and ongoing R&D, particularly in energy performance, there is no doubt that the scope of data center restructuring will continue to evolve over the coming decades.
By Frédéric Perrigault, Director of the Consulting and IT Engineering Department, APL