Why Modernize Your Legacy Monitoring?
April 25, 2022 Ash Giddings
Modernization is everywhere at present, with teams actively looking to bring their business into the 21st century by transforming applications, frameworks, underlying platforms, and supporting software.
The underlying aim of any modernization strategy will vary from business to business, although popular drivers include the need to become more agile in order to adapt more quickly to the needs of the business, to improve efficiency, and to simplify operations. The exercise also provides the opportunity to free up resources and to address the impact of dwindling expertise, evident in many areas of the market.
Certain to be included in any modernization initiative will be some form of cloud adoption, with the most obvious candidates for this being:
- Workloads with variable or unknown resource demands. Cloud was born out of the retail sector having peaks and dips in business, so having built-in elasticity can be attractive to some. Elasticity also provides the capability to switch from traditional capital expenditure to operational expenditure while at the same time allowing alignment with revenue.
- Applications where on-premises hosting provides no tangible benefit. Unless there is a legal requirement, or a need for the unmatched low latency that on-premises delivers, there are few winning arguments nowadays for running applications locally.
- Data with the fewest regulatory restrictions. Data sovereignty, data localization, and data residency can be particularly challenging for some, and so applications and associated data where these restrictions either do not apply or are minimal can be attractive contenders for cloud adoption.
But what about monitoring modernization? Shouldn't that be on the agenda, too? Monitoring has come a long way over the years, from basic event and resource alerting to being able to apply some intelligence to rules as well as automation to specific monitored situations. Today's monitoring solutions can also apply different rules based on day, date, and time, and can cater for scheduled downtime.
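To make the day/date/time awareness concrete, here is a minimal, hypothetical sketch of such a rule in Python. The thresholds, maintenance window, and function names are illustrative only and not taken from any particular product:

```python
from datetime import datetime, time

# Illustrative scheduled-downtime windows: suppress alerts during a
# hypothetical Sunday 02:00-04:00 backup slot.
MAINTENANCE_WINDOWS = [
    (6, time(2, 0), time(4, 0)),  # (weekday: Mon=0..Sun=6, start, end)
]

def in_maintenance(now: datetime) -> bool:
    """True if 'now' falls inside any scheduled-downtime window."""
    return any(
        now.weekday() == day and start <= now.time() < end
        for day, start, end in MAINTENANCE_WINDOWS
    )

def cpu_threshold(now: datetime) -> int:
    """Stricter threshold during business hours, looser overnight."""
    business_hours = now.weekday() < 5 and time(9) <= now.time() < time(17)
    return 80 if business_hours else 95

def should_alert(cpu_percent: float, now: datetime) -> bool:
    """Raise an alert only outside maintenance and above the
    time-appropriate threshold."""
    return not in_maintenance(now) and cpu_percent > cpu_threshold(now)
```

The same 85% CPU reading would page someone at 10:00 on a weekday but be ignored overnight or during the backup window, which is exactly the kind of rule flexibility described above.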
Most support teams have matured well beyond manual monitoring and have a solution of sorts in place, normally falling into one of three categories:
- A collection of inadequate, difficult-to-support in-house scripts that were coded some time ago and have long since been outgrown due to their inflexibility, lack of auditability, and cumbersome nature.
- Heavyweight legacy products that have been in place for a number of years and, while they do an acceptable job, have received very few recent enhancements, if any. These products are stable and still popular, but they probably wouldn't be chosen today.
- An open source alternative, such as Nagios or Zabbix. While these solutions have a place, they are often script-heavy, focus on generic monitoring, and provide very limited automation. Forum-based support is available, normally backed up by metered email or telephone support.
Every support team, irrespective of monitoring maturity, reaches a tipping point. Sometimes the point is reached because of server sprawl and the sheer number of servers that require managing, or because of complex monitoring requirements; at other times it's the variety of servers that require supporting, some of which rely on skills that are not in abundance.
Increasingly challenging is having to cope with a modern hybrid cloud mix while still managing legacy environments. Re-evaluating your current monitoring requirements, predicting what you'll need in the future, and migrating to a modern cloud-based solution has several standout tangible benefits:
- Moving the overhead of monitoring and alert processing away from the monitored server(s) to the cloud, where rule checking and the associated notifications are handled, means there is minimal performance impact on the servers being monitored.
- There is no hardware requirement for either the central console, or for alerting notifications. Cloud-based monitoring solutions tend to have either a direct or proxy-based SSL connection between the server being monitored and the cloud console.
- Patching, backing up, and securing the server where the central console resides also becomes a thing of the past. It becomes somebody else’s problem.
- Choosing a tool that has a browser at its heart, as opposed to a command-line interface or, worse still, scripts, means that potentially more junior staff can get accustomed to and manage unfamiliar servers, at least in part. That's a major plus in efforts to reduce the impact of skills depletion.
- Built-in high availability. On-premises solutions, complete with their databases (likely to be SQL Server or an open source equivalent), require you to build some kind of resilience with in-house skills and tools. Cloud-based solutions are designed with high availability in mind and often utilize a multi-cloud approach with automatic failover between clouds.
- No VPNs required. Where multiple customers or tenants are being supported, the setup and ongoing management of VPNs can be challenging, especially for Managed Service Providers. Cloud-based monitoring normally relies on SSL connectivity which simplifies multi-tenant support somewhat while at the same time maintaining secure channels for monitoring and alerting.
- Flexible pricing. Subscription based pricing is commonplace in cloud solutions and is much more in line with today’s server monitoring demands, and far more suitable to the modern application deployment model now evident in the market.
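Several of these benefits rest on the same push model: a lightweight agent gathers readings locally and posts them to the cloud console over HTTPS, so rule evaluation happens off-box and no VPN is needed. The sketch below illustrates the idea in Python; the endpoint URL and payload shape are placeholders, and no specific product's API is implied:

```python
import json
import ssl
import urllib.request

def push_metrics(endpoint: str, metrics: dict) -> int:
    """POST a small JSON payload of readings to a (placeholder) cloud
    console endpoint over an SSL-verified connection; returns the HTTP
    status code. All alerting logic would live server-side."""
    body = json.dumps(metrics).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    ctx = ssl.create_default_context()  # verifies the console's certificate
    with urllib.request.urlopen(req, context=ctx, timeout=10) as resp:
        return resp.status

# e.g. push_metrics("https://console.example.com/api/v1/metrics",
#                   {"host": "srv01", "cpu_percent": 42.5})
```

Because the agent only serializes and sends a few readings, the cost on the monitored server is negligible, and the outbound SSL connection works through firewalls without VPN setup.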
Over the years as server estates have grown and become more diverse, the number of monitoring tools that are run in the business is likely to have increased, too. All too often, teams end up with a selection of products with many overlapping features, all of which have maintenance or subscription charges.
Re-evaluating monitoring toolsets enables you to look for one that offers true multi-platform support, including the ability to monitor those devices that maybe aren’t intelligent enough to house an agent, but that are critical to your environment, such as hubs, routers, switches, printers etc. Issues that could impact the business can occur in hardware, operating system and within the applications themselves, and so your tool of choice should both fit your requirements today and be able to support tomorrow by having the flexibility and capability to monitor literally anything.
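As a simple illustration of monitoring devices that can't house an agent, the sketch below probes a device with a plain TCP connection to confirm reachability. This is a hypothetical minimal example; real tools would typically use SNMP or ICMP for richer device health data:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if the device accepts a TCP connection on the given
    port within the timeout, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. tcp_check("printer01.example.com", 9100)  # common raw print port
```

A scheduler running checks like this from a central point can cover switches, printers, and other agentless devices alongside full server monitoring.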
Also critical in the modern age is being able to provide role-based access to your monitoring solution, giving visibility and control to trusted individuals and teams. Those tasked with support are seldom sitting in front of a central console; it's more likely that they will be out and about in the business, working on other priorities or even focused on project-based work. So having the flexibility to escalate exceptions through the support hierarchy, coupled with the capability to respond from various devices, plus service desk integration, is imperative in the drive toward rapid issue resolution.
Underpinning your chosen monitoring tool should be a comprehensive reporting engine enabling you to report on all aspects of service delivery.
Now is the right time to modernize your monitoring by moving it to the cloud. The monitoring choice you made years ago was a good decision then, and it's time to make another one now.
This content is sponsored by Maxava.
Ash Giddings is a product manager at Maxava and an IBM Champion 2022.
Mi8 is a trademark of Furasta, a sister company of Maxava.