It seems the “hype” is starting to fade a bit, but I still hear the word AI far too often in relation to Data Center infrastructure operations. And it makes sense, because even the “big bosses”, who are not involved in the day-to-day of the Data Center, regularly ask: does our DCIM system, BMS, or any other system have AI?
And it reminds me of when we talk about integrations between systems, where the answer is another question: yes, it can be done, but what is it going to bring you?
I think that sometimes we once again confuse a tool that will be part of a journey with the objective itself.
What are the objectives of Data Center operations? It is always good to keep them at hand:
- Minimize risks: resilience.
- Reduce costs: efficiency.
- Reduce service deployment times: strategy.
That is why, however much we talk about automation, artificial intelligence, or even autonomous Data Centers that manage themselves, day-to-day operation remains reactive: full of alerts, urgent decisions, and too many variables that depend on people's experience (or their fatigue).
Perhaps the problem is not in the technology. Perhaps the problem is that we have started the journey at the end, looking outside for solutions to problems we can only work on from within: the processes. It is always easier to buy the idea of something plug-and-play that will solve all my problems than to confront people, departments, management silos, etc., because nobody likes confrontation.
Does that mean there is no path to automation? That there is no technology that can help us? Far from it. What it means is that each technology or capability must arrive when you are ready for it. Imagine that tomorrow they buy you a robot for Data Center operations, hand you the package with the instructions, and say: there you go... So, where would you start?
At Bjumper we have been asking ourselves these questions for a long time. We do not have all the answers, but we have chosen a path: from the bottom up, with patience (which lately seems to have stopped being the mother of all science).
From our perspective, automation does not really start with AI. It starts with something much more basic (and much more difficult): clear inputs.
When we think about inputs, we almost always think of technical data: temperatures, power consumption, equipment states. But a Data Center receives many more inputs than we usually admit:
- Incidents
- Changes
- Projects
- Maintenance
- Approvals
- Human decisions
- Process exceptions
Everything that enters the Data Center and has an impact on operations is an input, even if today we do not treat it as such. The problem is that these inputs are usually:
- Scattered across multiple tools.
- Without a common structure.
- Without operational context.
- Duplicated or, even worse, contradictory.
And this is the first point to tackle, because when the starting point is confusing, the rest of the journey will be as well.
If we do not know exactly what enters the Data Center, the “in” of the input, we cannot trust anything that comes out.
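To make the idea concrete, here is a minimal sketch in Python of what giving heterogeneous inputs a common structure might look like. The field names, the `OperationalInput` envelope, and the ITSM ticket export are all invented for illustration, not a real schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class OperationalInput:
    """Common envelope for anything that enters Data Center operations."""
    source: str              # tool of origin, e.g. "itsm", "bms", "email"
    kind: str                # "incident", "change", "maintenance", "approval", ...
    asset: str               # asset or location affected
    received_at: datetime
    payload: dict = field(default_factory=dict)

def normalize_itsm_ticket(raw: dict) -> OperationalInput:
    """Map a hypothetical ITSM ticket export onto the common structure."""
    return OperationalInput(
        source="itsm",
        kind=raw["type"].lower(),
        asset=raw.get("ci", "unknown"),
        received_at=datetime.fromisoformat(raw["created"]),
        payload={"summary": raw.get("summary", "")},
    )

ticket = {"type": "Incident", "ci": "CRAC-04",
          "created": "2024-05-01T10:15:00+00:00",
          "summary": "High return temperature"}
evt = normalize_itsm_ticket(ticket)
print(evt.kind, evt.asset)  # incident CRAC-04
```

One normalizer per source tool, all producing the same envelope: that is what lets incidents, changes, and maintenance finally be compared and trusted instead of living scattered and contradictory across systems.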
Alright, now we have clear inputs, normalized because the processes are defined and followed, and we assume the outputs will be alerts, dashboards, or reports. Will they? From my point of view those are not outputs; they are data visualizations.
An output must be operational and valuable and must answer very specific questions:
What is happening?
Why does it matter?
What are we expected to do now?
An alert without context does not help. A dashboard without interpretation does not decide. A report without action does not change anything. Because a good output does not inform, it guides.
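As an illustration only (toy thresholds and wording, not a real rule set), an output that answers those three questions could be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class GuidedOutput:
    what_is_happening: str   # What is happening?
    why_it_matters: str      # Why does it matter?
    expected_action: str     # What are we expected to do now?
    priority: int            # 1 = act now, 3 = plan

def contextualize_alert(alert: dict) -> GuidedOutput:
    """Turn a bare threshold alert into an output that guides (toy rules)."""
    if alert["metric"] == "return_temp" and alert["value"] > 30:
        return GuidedOutput(
            what_is_happening=f"{alert['asset']} return temperature at {alert['value']} °C",
            why_it_matters="Sustained high return temperature reduces cooling headroom",
            expected_action="Check CRAC setpoints and verify no blanking panels are missing",
            priority=1,
        )
    return GuidedOutput(
        what_is_happening=f"{alert['asset']} {alert['metric']} = {alert['value']}",
        why_it_matters="Within expected range",
        expected_action="No action required (this is also an output)",
        priority=3,
    )

out = contextualize_alert({"asset": "CRAC-04", "metric": "return_temp", "value": 33})
print(out.expected_action)
```

The point is the shape, not the rules: every output carries its context and its next step, so it guides instead of merely informing.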
Therefore, there is more work to do: identify valid outputs, understand which inputs we need for them, and include the technology required (which may or may not be AI) to reach them and automate the process. This is the route to keep automation strategies from stalling halfway.
Once we have done this work, outputs change in nature: they stop being noise and start being:
- Clear recommendations.
- Warnings with real impact.
- Confirmations that everything is under control (this is also an output and the best of them!).
- Prioritized actions based on risk and timing.
It is not about having more information, but about having the right information at the right time.
Here appears something key for any future evolution and, from my perspective, the most complicated point: trust.
Trust that the output makes sense.
Trust that no information is missing.
Trust that the proposed decision is coherent with operational reality.
As I have said on other occasions, this will only be achieved over time, with a transition that gives you control and confidence over ordered input data; and again, this can only be reached with clear processes that are followed. The moment someone makes a change inside the Data Center bypassing the processes… they ruin the inputs on which automation is built.
Well, after clarifying the difference between input and output and how to get there, something was still missing. And since the world of digital development and product holds so much knowledge and is so very well organized, we came across the concept of the outcome. And what is the “outcome”? What really matters to the business.
The outcome is what truly changes in the operation and what responds to the objectives of the Data Center.
- Fewer human errors.
- Less improvisation.
- Faster and more coherent decisions.
- More predictable operations.
- Teams that can focus on improving the service, not on putting out fires.
Outcomes are not measured in managed alerts, but in real and sustained improvements. The value is not in automating tasks, but in improving results with respect to the 3 objectives we mentioned at the beginning: minimize risks (resilience), reduce costs (efficiency), reduce time in service deployment (strategy).
In this part, of course, AI can help us a lot, but first we had to walk a path. Can you jump ahead? Of course, always. Will you get frustrated? Probably. Because if we do not know exactly what enters the Data Center, the “in” of the input, we cannot trust anything that comes out.
Automation: the last step, not the first
If there is something I believe we all agree on, it is that automation is necessary in the operation of an environment as critical as a Data Center. However, it is the last step, not the first, because:
Automating without clear inputs is automating chaos.
Automating unreliable outputs is accelerating error.
Automating decisions that we do not yet understand is losing control and therefore trust.
Well-designed automation follows a much more logical path:
- Clear and structured inputs.
- Understandable and reliable outputs.
- Measured and repeatable outcomes.
- Progressive automation.
- Autonomy based on trust.
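A minimal sketch of the last two steps, assuming a hypothetical trust score accumulated by measuring outcomes over time: the same recommendation is only executed automatically once trust crosses a threshold, and until then the human stays in the loop.

```python
def decide(recommendation: dict, trust_score: float, threshold: float = 0.9):
    """Gate automation on accumulated trust: below the threshold the system
    only recommends; above it, it is allowed to act on its own."""
    if trust_score >= threshold:
        return ("execute", recommendation["action"])
    return ("recommend", recommendation["action"])

# Early in the journey: low trust, the human decides
print(decide({"action": "raise chilled water setpoint 1 °C"}, trust_score=0.6))
# Later, once outcomes have been measured and repeated: autonomy
print(decide({"action": "raise chilled water setpoint 1 °C"}, trust_score=0.95))
```

How the trust score is earned (measured, repeatable outcomes) is the hard part; the gate itself is trivial, which is exactly why the order of the steps matters more than the technology.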
It is not a leap, and there is no rush; it is a gradual process in which technology earns the trust of operations step by step.
Final reflection
The autonomous Data Center will not arrive because of fashion or market pressure, and it will not arrive by putting AI at the center, far from it. It will arrive when we trust what comes in, understand what goes out, and know how to measure what truly improves.
To give you an example: a couple of weeks ago, our development and product team held a hackathon with the challenge of using AI in the functionalities of binOra, which will be the new star of the Data Center 😎.
Interestingly, the two groups that were formed took different approaches:
- A chatbot format for guidance and information queries: creating graphs or tables, comparing data, requesting improvements. The person stays in command.
- A CTA (call to action) format enhanced by AI: based on our knowledge or on market best practices, recommendations for improvements, task optimizations, etc. are already included.
It was great work that created a foundation we are now building on, and beyond the “wow” effect it may carry, we are clear about one point: there is no AI without organized data, and there is no organized data if we do not focus on processes.
This is not a BIM or an as-built, which are snapshots at a given moment; operating a DC constantly generates changes in the data and new data, and without well-organized processes that data will never be properly collected.
Can we include AI in the product? Of course. Should we lay the foundations first? Absolutely. If we move ahead too soon, we will lose trust.