Today’s data centers face a challenge that at first seems almost impossible to solve. While operations have never been busier, teams are under pressure to cut their facilities’ energy consumption as part of corporate carbon-reduction goals. And if that wasn’t difficult enough, dramatically rising electricity prices are putting data center budgets under serious strain.
With data centers focused on delivering the “essential” technology services people increasingly rely on in their personal and professional lives, it’s no surprise that data center operations have never been busier. Driven by trends that show no signs of slowing down, we are seeing massive increases in data usage related to video, storage and computing requirements, smart IoT integrations, and the rollout of 5G connectivity. However, despite this escalating workload, the unfortunate reality is that many of today’s critical facilities simply aren’t operating efficiently enough.
Given that the average data center has been in operation for over 20 years, this shouldn’t come as much of a surprise. Efficiency is always linked to a facility’s original design, which was based on expected IT loads that have long since been overtaken. At the same time, change is a constant, with platforms, equipment design, topologies, power densities and cooling requirements all evolving under the constant push for new applications. The result is a global data center estate that regularly struggles to align current and planned IT loads with its critical infrastructure. This will only worsen as data center demand grows, with analyst forecasts suggesting workload volumes will continue to grow at around 20% per year between now and 2025.
Traditional data center approaches struggle to meet these increasing demands. Availability is prioritized largely at the expense of efficiency, with too much reliance on operator experience and on confidence that assumptions are correct. Unfortunately, the evidence suggests that this model is no longer realistic. EkkoSense research shows that, on average, 15% of IT racks in data centers operate outside ASHRAE temperature and humidity guidelines, with customers losing up to 60% of their cooling capacity to inefficiency. And that’s a problem: the Uptime Institute estimates the global cost attributable to inefficient cooling and airflow management at about $18 billion, equivalent to roughly 150 billion kilowatt-hours wasted (which implies an average electricity price of around $0.12 per kWh).
With around 35% of a data center’s energy used to support cooling infrastructure, it’s clear that traditional performance optimization approaches are missing a huge opportunity to unlock efficiencies. EkkoSense data indicates that a third of unplanned data center outages are caused by thermal issues. A different way of managing this problem can give operations teams the means to ensure both availability and efficiency.
Limitations of traditional monitoring
Unfortunately, only about 5% of M&E teams currently monitor and report the temperature of their data center equipment per rack. In addition, DCIM and traditional monitoring solutions can provide trend data and can be set up to raise alerts when breaches occur, but that’s where they stop. They lack the analytics to dig into the root causes of problems, how to fix them, and how to avoid them in the future.
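To make that limitation concrete, here is a minimal sketch (in Python) of the kind of per-rack threshold alerting that traditional monitoring typically stops at. The rack names and readings are hypothetical, and the 18–27°C band reflects ASHRAE’s recommended inlet range for A1 environments. Note what’s missing: the code can say a breach happened, but nothing about why.

```python
# Minimal sketch of traditional per-rack threshold alerting.
# The ASHRAE A1 recommended inlet band (18-27 C) and the sample
# readings below are illustrative assumptions.

ASHRAE_INLET_BAND_C = (18.0, 27.0)

def check_rack_inlets(readings: dict[str, float]) -> list[str]:
    """Return an alert for each rack whose inlet temperature sits
    outside the recommended band -- and that's all it can tell you."""
    low, high = ASHRAE_INLET_BAND_C
    return [
        f"ALERT: {rack} inlet at {temp_c:.1f} C (outside {low}-{high} C)"
        for rack, temp_c in readings.items()
        if not low <= temp_c <= high
    ]

if __name__ == "__main__":
    sample = {"rack-A01": 24.5, "rack-A02": 29.3, "rack-B07": 17.2}
    for alert in check_rack_inlets(sample):
        print(alert)
```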
Operations teams recognize that this kind of traditional monitoring has its limitations, but they also know that they simply don’t have the resources and time to take the data they have and turn it from background noise into meaningful actions. The good news is that technology solutions are now available to help data centers address this problem.
It’s time for data centers to get more granular with machine learning and AI
The application of machine learning and AI creates a new paradigm for data center operations. Instead of being overwhelmed by too much performance data, operations teams can use machine learning to collect data at a much more granular level, meaning they can see how their data center is performing in real time. The key is to make this accessible, and smart 3D visualizations are a great way to let data center teams interpret performance data at a deeper level: for example, by showing changes and highlighting anomalies.
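As a rough illustration of what highlighting anomalies in granular data can involve, the sketch below flags rack-inlet readings that drift from their recent rolling baseline. It is not a description of any particular product: the 24-hour window, the 3-sigma threshold and the synthetic readings are all assumptions made for illustration.

```python
# Sketch: flag rack-inlet temperature anomalies with a rolling z-score.
# Window length (24 h of five-minute samples) and the 3-sigma threshold
# are illustrative choices, not product defaults.
import numpy as np
import pandas as pd

def flag_anomalies(temps: pd.Series, window: int = 288,
                   z_thresh: float = 3.0) -> pd.Series:
    """Mark readings more than z_thresh rolling standard deviations
    from the rolling mean (288 five-minute samples ~ 24 hours)."""
    mean = temps.rolling(window, min_periods=window // 4).mean()
    std = temps.rolling(window, min_periods=window // 4).std()
    return ((temps - mean) / std).abs() > z_thresh

# One synthetic day of five-minute readings with an injected excursion.
idx = pd.date_range("2024-01-01", periods=288, freq="5min")
temps = pd.Series(22 + np.random.normal(0, 0.3, 288), index=idx)
temps.iloc[200] = 31.0  # simulated thermal event
print(temps[flag_anomalies(temps)])
```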
The next stage is applying machine learning and AI analytics to deliver actionable insights. By extending measured data sets with machine learning algorithms, data center teams gain easy-to-understand insights that support their real-time optimization decisions. The combination of granular data collected every five minutes and AI/machine learning analytics lets operations teams not only see what is happening in their critical facilities, but also discover why – and exactly what to do about it.
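What discovering the “why” might look like is easiest to see with a toy example. The rules below are invented purely for illustration (real analytics would be learned from the facility’s own data), but they show the step from a raw alert to a plausible cause and a suggested action.

```python
# Hypothetical illustration only: mapping a hot-rack observation to a
# candidate cause and suggested action. The thresholds are invented.

def diagnose(rack_inlet_c: float, cooling_supply_c: float,
             fan_speed_pct: float) -> str:
    """Turn one anomalous reading into a suggested next step."""
    rise = rack_inlet_c - cooling_supply_c  # supply-to-inlet temperature rise
    if rise > 8 and fan_speed_pct < 90:
        return ("High supply-to-inlet rise with fan headroom left: "
                "suspect recirculation; check blanking panels and tile layout.")
    if rise > 8:
        return ("High supply-to-inlet rise at near-full fan speed: "
                "suspect insufficient cooling capacity in this zone.")
    return "Inlet tracks supply closely: cooling delivery looks healthy."

print(diagnose(rack_inlet_c=29.3, cooling_supply_c=18.5, fan_speed_pct=70))
```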
AI and machine learning-driven analytics can also uncover the insights needed to recommend actionable changes in key areas such as optimal setpoints, floor grid layouts, cooling unit operation and fan speeds. Thermal analysis can likewise indicate optimal rack placements. And because the analytics are visualized in real time, data center teams get instant performance feedback on any changes they make.
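To give a feel for the setpoint side of this, here is a deliberately simplified sketch: it suggests how far supply setpoints could rise based on the thermal headroom beneath ASHRAE’s 27°C recommended inlet ceiling. The one-for-one setpoint-to-inlet assumption and the 1°C safety margin are simplifications for illustration; real recommendations would come from modeled or learned thermal behavior.

```python
# Simplified sketch of a setpoint recommendation based on thermal
# headroom. Assumes rack inlets shift roughly one-for-one with the
# supply setpoint -- an illustrative simplification.

ASHRAE_MAX_INLET_C = 27.0   # recommended inlet ceiling (A1)
SAFETY_MARGIN_C = 1.0       # illustrative buffer

def recommend_setpoint_raise(rack_inlets_c: list[float]) -> float:
    """Suggest how many degrees the supply setpoint could rise while
    keeping the hottest rack inlet below the ceiling minus a margin."""
    headroom = ASHRAE_MAX_INLET_C - max(rack_inlets_c) - SAFETY_MARGIN_C
    return max(0.0, round(headroom, 1))

print(recommend_setpoint_raise([21.4, 22.8, 23.1, 24.0]))  # -> 2.0
```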
Helping data center operations make an immediate difference
Given the pressure to reduce carbon emissions and to minimize the impact of electricity price increases, data center teams need new levels of optimization support to meet their reliability and efficiency goals.
Taking advantage of the latest machine learning and AI-powered approaches to data center optimization can certainly make a difference by reducing cooling energy usage – with results achievable within weeks. By putting granular data at the heart of their optimization plans, data center teams have already been able not only to eliminate thermal and power risks, but also to reduce cooling energy consumption, costs and CO2 emissions by an average of 30%. It’s hard to ignore the impact of savings on that scale, especially during a period of rapid increases in electricity prices. The days of trading off availability against optimization are a thing of the past, with AI and machine learning at the forefront of data center management.
Want to know more? Register for Wednesday’s AFCOM webinar on this topic.
About the author
Tracy Collins is vice president of EkkoSense Americas, the company that enables true M&E capacity planning for power, cooling and space. Before that, he was CEO of Simple Helix, a leading Alabama-based Tier III data center operator.
Tracy has more than 25 years of deep experience in the data center industry, having previously served as Vice President of IT Solutions at Vertiv and, before that, at Emerson Network Power. In his role, Tracy is committed to challenging traditional approaches to data center management, particularly in solving the optimization challenge of balancing increased data center workloads against corporate energy efficiency goals.