Do you know how the software that manages your infrastructure was developed?

When you go to the supermarket and choose a product, the packaging tells you where it comes from. Country of origin, production conditions, whether it is organic or not. You don’t see the process, but you have enough information to decide whether you trust it. 

Now think about the software you acquire to operate your Data Center: the DCIM system that manages your physical inventory; the BMS that controls cooling, access and security; the monitoring tools your operation depends on every day.

Do you know how it was developed? What QA process is behind it? Did someone validate it under adverse conditions before it reached you?

Probably not. And until recently, it didn’t matter much to ask. There was a relationship of trust, or at least in our sector we did not question how the software had been developed.

However, something is changing. Software development with AI and agents is already a reality in many teams. My colleague Sergio analyzed this very thoughtfully a few days ago: the promise of building applications in hours, of anyone generating functional code without writing a single line by hand, of the developer’s role being transformed almost overnight. Lately, concepts and strange words that used to belong to the world of development and digital products have entered our everyday vocabulary, and I’m not sure whether that is good or bad.

The use of agents and AI in the creation of digital products makes sense in many contexts: in prototyping and testing, where speed has value, or in academic environments, where learning is the goal. But what about production applications? And beyond that, what about production applications in mission-critical environments such as communications, energy transmission, air navigation, or, of course, Data Centers?

There is a consequence of this trend that almost nobody is mentioning: within two or three years, part of the enterprise software reaching the market will have been developed with processes radically different from those of the past. Faster, more automated, with smaller teams, with lighter testing. And you, as a buyer, will have no way of knowing by looking at the box. However, they will be present in environments where software makes decisions with real impact. 

In B2B we usually buy software as if we were buying a finished product. We evaluate it by what it does, not by how it was built. And for decades that has worked reasonably well, because development processes had natural frictions that acted as a filter: teams were large, cycles were long, and reviews existed. They were imperfect, because nothing is perfect, but there was real control. Security mattered more than speed.

And I wonder: how much longer will security matter more than speed?

So then, what do we do?

I don’t have a definitive answer. But I do know the kind of questions that should be answerable before signing a software contract for a critical environment:

- What percentage of the code has been reviewed by people with domain knowledge, not only knowledge of the programming language?

- How has the behavior been tested under failure conditions, not only along the happy path?

- Is there traceability of the development process: who decided what, when and why? Is the code documented?

- What happens when something fails in production, and how can it be intervened on?

I am not a developer, not even close, and there are probably many other, much more interesting questions to ask. I don’t know whether these are the right ones or what they should be, but I do know that we need to start thinking about them.

Traceability as a voluntary differentiator

I am not always in favor of regulations, but I am in favor of transparency. It does not necessarily have to come from a regulator (although the European AI Act is already moving in that direction for certain high-risk systems). I believe the most interesting opportunity is for vendors themselves: those who voluntarily place their quality systems in the hands of the user. Without that, I am not so convinced that we will take a leap in the automation of processes as critical as those that happen in a Data Center. 

In a market where everyone will be able to say their software “has AI” and was developed “with the latest technologies,” transparency about the process can become an argument to build trust. Not “what our product does,” but “this is how we built it and this is how we validated it.” 

It is the same thing that happened with food traceability. No one demanded it. Then there was a crisis, the market demanded it, and those who already had their processes documented came out ahead. Those who didn’t had to scramble.

The question I ask myself is whether the enterprise software sector will wait for a crisis to start thinking about this, or whether someone will move first. 

Final reflection

I don’t know what this should be called. A label? A process declaration? A development audit? It is probably not a single thing, but a combination that depends on the sector and the level of criticality.
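As a purely illustrative sketch of what such a process declaration could contain, here is one possible machine-readable form, turning the earlier questions into checkable fields. Every field name and threshold here is hypothetical, invented for this example; none of it corresponds to any existing standard or vendor format.

```python
# Hypothetical "development process declaration" a vendor could ship
# alongside a product. All field names and values are illustrative.
process_declaration = {
    "product": "ExampleDCIM",            # hypothetical product name
    "version": "4.2.0",
    "human_review_coverage": 0.85,       # share of code reviewed by domain experts
    "failure_mode_testing": True,        # validated under adverse conditions, not just the happy path
    "decision_log_available": True,      # traceability: who decided what, when, and why
    "ai_assisted_components": ["report generator"],  # where AI/agents were used
    "incident_intervention_plan": True,  # defined procedure when something fails in production
}

def meets_minimum_bar(decl: dict) -> bool:
    """Check a declaration against a buyer's own (example) thresholds."""
    return (
        decl.get("human_review_coverage", 0) >= 0.8
        and decl.get("failure_mode_testing", False)
        and decl.get("decision_log_available", False)
        and decl.get("incident_intervention_plan", False)
    )

print(meets_minimum_bar(process_declaration))  # True for this example
```

The point is not this particular schema, but that the buyer, not the vendor, gets to define the thresholds in `meets_minimum_bar` according to the criticality of the environment.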

What I do know is that buying software for critical environments without knowing anything about the process behind it will increasingly become a risk that someone will have to assume explicitly. And that someone is usually not the vendor. It is the one who signs the purchase. 

So the question I leave you with is simple: would you ask your software provider how what they are selling you was developed? And if the answer were “with AI agents in three weeks,” would you still sign? 

Because before answering, I would need to know quite a bit more. 

