The Importance of DCIM at the Edge

If we go back to 1999, it may seem like a long time ago, but it has only been 20 years. That year, Google set up its first rack: two dual Pentium II servers at 300 MHz with 512 MB of RAM and nine 9 GB hard drives, plus a Sun Ultra II workstation with dual 200 MHz processors, 256 MB of RAM, three 9 GB hard drives, and six 4 GB hard drives.

We describe part of Google's first rack for the nostalgic among us, though I go even further back and still remember my 48K ZX Spectrum. Well, a lot has happened in the past 20 years. Among other things, Google now has more than 20 data centers (nobody knows how many servers), and as of 2019 the energy consumption of its data centers is matched 100% with renewable energy, as stated on its own website.

They are constantly aware of how technology is evolving, how it will be powered in the near future, and where most of the computing will take place.

The sector, following in the footsteps of the major technology companies and drawing on their experience, has been establishing decentralized data centers (regional or edge) with capacities ranging from 1-2 MW down to 10-20 kW, as well as even smaller but equally important sites of up to 10 kW of IT load.

If this is what has happened in the last 20 years, what won't happen in the next 10 at this technological pace? Every day we see, for example, that people's tolerance for service interruptions is decreasing while the demand for services is increasing. Users will switch from one company to another as soon as they notice they are losing service, and they will seek out better providers.

That's why it's increasingly necessary to perform computing as close as possible to operations in order to meet the demands of new technologies and services that society is requiring more and more each day.

Until recently, we could find small "data centers" in large retail stores, bank branches, healthcare facilities, and similar environments, with low redundancy or availability and with racks that were open, unsecured, disorganized, lacking proper airflow, and so on.

In many cases, these infrastructures are, or were, managed by external personnel, and when there is an outage, which tends to be frequent, a technician has to be called in and travel to the site. This causes significant losses every year, both for the end customer, who loses clients because of the service interruptions, and for the company that manages the infrastructure, whose operation is inefficient and carries high operating costs.

Today, we are seeing an improvement in these infrastructures with more advanced designs in their construction. This is largely because people, especially newer generations, have less tolerance for infrastructure failures, and the service is becoming increasingly important for all of us.

This is why we are encountering more prefabricated data centers, where deployment is easier and faster, and where all systems related to power, IT, cooling, security, fire protection, and network connectivity are controlled.

However, many of these solutions still struggle to anticipate failures, which drives up maintenance costs: personnel must be dispatched for unplanned maintenance tasks, and response times to unexpected system outages stretch out.

We are seeing that these impressive decentralized data centers, if they do not start out with a DCIM tool that manages the entire infrastructure as 100% critical and fully integrates it with the rest of the sites, operations centers, systems, and other facilities, end up as infrastructure that is not fully utilized at the core of the business, and the return on investment takes much longer to materialize.


With a DCIM tool at the Edge, the infrastructure manager has real-time control over the infrastructure and can access its data, with full traceability, at any moment (including historical data). Such a tool enables asset management analysis, scheduled maintenance, proactive actions that assess operational and economic costs before problems occur, monitoring of energy costs, evaluation of server capacity, and control of cooling, among other functions.
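To make this more concrete, here is a minimal sketch, in Python, of the kind of polling and alerting loop a DCIM agent might run at an edge site. The site names, metric names, thresholds, and the read_metrics() stub are illustrative assumptions for this example, not the API of any particular DCIM product; a real deployment would read these values from sensors, PDUs, and the BMS and feed a central operations dashboard.

```python
# Hypothetical sketch of a DCIM-style polling loop for edge sites.
# All names, thresholds, and simulated readings are assumptions for illustration.
import random
import time
from datetime import datetime, timezone

# Assumed alert thresholds per metric.
THRESHOLDS = {
    "inlet_temp_c": 27.0,    # upper rack inlet temperature before alerting
    "power_kw": 9.0,         # IT load approaching a 10 kW site limit
    "ups_battery_pct": 30.0, # alert when remaining UPS charge drops below this
}

def read_metrics(site: str) -> dict:
    """Stand-in for reading sensor/PDU/BMS data; here we simulate values."""
    return {
        "inlet_temp_c": random.uniform(20.0, 30.0),
        "power_kw": random.uniform(5.0, 10.0),
        "ups_battery_pct": random.uniform(20.0, 100.0),
    }

def check_site(site: str, history: list) -> list:
    """Poll one edge site, keep a historical record, and return any alerts."""
    reading = read_metrics(site)
    reading["site"] = site
    reading["timestamp"] = datetime.now(timezone.utc).isoformat()
    history.append(reading)  # traceability: every sample is retained

    alerts = []
    if reading["inlet_temp_c"] > THRESHOLDS["inlet_temp_c"]:
        alerts.append(f"{site}: inlet temperature high ({reading['inlet_temp_c']:.1f} C)")
    if reading["power_kw"] > THRESHOLDS["power_kw"]:
        alerts.append(f"{site}: IT load near capacity ({reading['power_kw']:.1f} kW)")
    if reading["ups_battery_pct"] < THRESHOLDS["ups_battery_pct"]:
        alerts.append(f"{site}: UPS battery low ({reading['ups_battery_pct']:.0f}%)")
    return alerts

if __name__ == "__main__":
    history: list = []
    sites = ["edge-site-01", "edge-site-02"]  # hypothetical site identifiers
    for _ in range(3):  # a few polling cycles for the demo
        for site in sites:
            for alert in check_site(site, history):
                # In a real deployment this would feed a central dashboard
                # or ticketing system instead of printing.
                print("ALERT:", alert)
        time.sleep(1)
    print(f"Collected {len(history)} historical samples for capacity and energy analysis")
```

Even a simple loop like this illustrates the core idea: continuous measurement, historical traceability, and alerts raised before a technician ever has to be dispatched.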

While not everyone can afford Google's resources, a DCIM tool at the Edge that helps prevent and anticipate many infrastructure problems while improving the bottom line is within everyone's reach.

Let it work for you

 
