When automobiles were first built, measuring how fast they could travel was not part of the engineering process. More than 100 years later, a way to measure an automobile's speed is not only standard, it is built in from the start, as are other ways to measure the status or health of an automobile: the temperature of the oil, the voltage of the battery, and, now with modern sensors, the inflation of the tires and the proximity of objects around the vehicle.
Building hardware and software solutions for the modern age was similarly "cowboy" at first, with little effort made to understand the status of a system's components. But as hardware and software solutions become ingrained in our culture, daily lives, and businesses, understanding the status or health of these systems can determine whether a business succeeds or fails.
As the British mathematician and physicist William Thomson, Lord Kelvin, said, "If you cannot measure it, you cannot improve it."
Monitoring of modern hardware and software systems has historically been a siloed practice, just as the hardware and software themselves have been siloed. Each system might have a way to measure how it is running, but access to that monitoring was limited not only by what data the system provided but also by who had the expertise to understand the data. This brings me back to the late 1990s, when capacity planning for a large agriculture company's network connections was done in an Excel spreadsheet populated by people manually reading device counters and recording the numbers.
Most organizations that have embraced modern computing to drive their business understand that measuring results is important and can make or break their future. The issue is how to do it. The bottom line is a common way to measure the results of a business, but it is just one indicator. To Lord Kelvin's point, it is difficult to improve if the indicators that lead up to a result (in this case, revenue) are not measured. This is where technology can really change the way organizations conduct their business: not just in delivering their services, but also in measuring the components that enable service delivery.
Monitoring of modern information technology systems has evolved over the past 40 years and has recently taken on a new name: Observability.
Observability is, broadly, a way to gather data from different systems' monitoring interfaces or counters, either through APIs (Application Programming Interfaces) or other programmatic means, and combine it with other data to get a better picture of the status or health of the system as a whole. This is a broad simplification, as some systems can be a challenge to gather data from, but the intent is to collect everything possible so that organizations can make intelligent business decisions.
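To make "gather counters and combine them" concrete, here is a minimal sketch in Python. The metric names (`cpu_utilization`, `error_rate`) and the thresholds are illustrative assumptions, not part of any specific product; the text format mirrors the Prometheus-style exposition format many systems serve on a `/metrics` endpoint.

```python
def parse_metrics(text):
    """Parse 'name value' lines (Prometheus-style text) into a dict of floats."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comment lines
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

def health_summary(metrics, cpu_limit=0.90, error_limit=0.01):
    """Combine raw counters into a single health verdict (limits are assumptions)."""
    cpu = metrics.get("cpu_utilization", 0.0)
    errors = metrics.get("error_rate", 0.0)
    return {"cpu": cpu, "errors": errors,
            "healthy": cpu < cpu_limit and errors < error_limit}

# What a scrape of one system's counters might return:
sample = """
# HELP cpu_utilization Fraction of CPU in use
cpu_utilization 0.42
error_rate 0.002
"""
print(health_summary(parse_metrics(sample)))
```

The point is not the parsing but the second step: once counters from many silos land in one common shape, they can be combined into judgments no single system could make on its own.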
So, we get the data, make smart decisions, and everything is great… right? Well, the next challenge is to have developers and engineers implement the needed "sensors," i.e., the software and/or methodologies that enable gathering this data. If your responsibility is to build and innovate, monitoring may be an afterthought, just as it was with the first automobiles.
This is where automation can really help. By building Observability into the build/development process via automation, implementers do not need to think about where and how to get metrics: the instrumentation is already injected into the system, extracting the data that enables people like data scientists to understand the status and health of the system and drive business innovation.
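One simple form of "instrumentation the implementer never thinks about" is a wrapper applied automatically to functions. The sketch below is illustrative: the `observed` decorator and the in-memory `METRICS` registry are assumed names, and a real pipeline would export these counters to a metrics backend rather than a dict.

```python
import time
from functools import wraps

# In-memory stand-in for a metrics backend: function name -> counters.
METRICS = {}

def observed(fn):
    """Wrap fn so every call is automatically counted and timed."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # Record the call even if fn raised, so errors still show up.
            stats = METRICS.setdefault(fn.__name__,
                                       {"calls": 0, "total_seconds": 0.0})
            stats["calls"] += 1
            stats["total_seconds"] += time.perf_counter() - start
    return wrapper

@observed
def handle_order(order_id):
    # Business logic goes here; the author never touches the metrics code.
    return f"processed {order_id}"

handle_order(1)
handle_order(2)
print(METRICS["handle_order"]["calls"])  # 2
```

Because the decorator can be applied by tooling at build time, the developer writes only `handle_order`; the speedometer comes along for free.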
In other words, if organizations build the "speedometer," or sensor, into every part of their system, whether hardware or software, they will have all the data and metrics they need to improve their processes, deliver their services to clients more efficiently, and ultimately increase their bottom line.
To learn more about Evolving Solutions' Enterprise Monitoring & Analytics practice, click here.