Is it possible to use several information sources all at once from one central location? Information these days is usually widely available, but typically all over the place, with most IT staff relying on bookmarks and hundreds of username and password combinations to gain access to data. How much time and effort does your department waste on a daily basis collating information from multiple sources? Is there a way to consolidate this information into a logical “single pane of glass” where you only need to remember one login and one URL? Yes. And it doesn’t need to be expensive either. I’ve spent a lot of time over my career pondering the same question. Having become very frustrated with the way information appears to be stored and accessed, I decided to make my department more efficient in terms of obtaining access to the information they require on a daily basis. Here’s how.
In this article, we will look at some of the most common and essential platform types that provide IT Operations with the visibility it needs on a daily basis to run operations effectively, measure existing services, and predict where growth and expenditure should be directed. Over the years, I’ve become quite accustomed to running a tight ship in terms of budget, and still use a variety of these techniques today. Budgetary constraints should not be seen as a barrier to progress – it’s perfectly feasible to replace one service with a cheaper alternative, then channel the released capital into another project. This is effectively known as the cost-neutral model, where no additional expense is incurred, but no real savings are made either.
Monitoring and capacity planning
Monitoring is an essential component of any network operations centre, and without it, you are running blind. The same principle applies when planning upgrades to existing technology, or attempting to determine where bottlenecks lie in order to eliminate performance issues. I’ve tried a number of monitoring platforms over the years. I started out with Cacti, which at the time was bleeding edge, and offered a nice way to view performance data based on RRDTool technology. However, over the years, Cacti stagnated, and for my purposes, became redundant. Interestingly, I wrote a number of plugins and extended code for this platform (look up “mcutting” on the forums, and you’ll see just how active I was in this community; if you really want to stalk me, I was also a reviewer for the Cacti book), although after a while it dawned on me that Cacti had become somewhat stale, with its front-end GUI looking like something stuck in the 90s.
I quickly reached the conclusion that we needed something a little more upmarket, but didn’t want to pay for an enterprise-class solution with the associated enterprise-class price tag. After a fair amount of searching, I came across Observium. Even the free offering of this platform is superb, and goes way beyond Cacti’s capabilities. The subscription version, at around USD 200 per year, is extremely cheap given that it’s a rolling (but stable) release with a constant stream of updates. However, support for the application is relatively tough to get – mainly attributable to the attitude of the lead developer, who appears to be very dismissive and really quite rude. Fortunately, I’ve never had to ask for support. If you really don’t like the idea of paying a small fee on an annual basis for what I consider to be a very capable and worthy platform, you could focus your attention on LibreNMS. This is essentially a fork of the Observium codebase, and is much more community driven. If you’re not familiar with Linux, then LibreNMS is also available as a pre-built virtual machine – just download it, give it an IP address, and you’re in business. Admittedly, I haven’t looked at LibreNMS for some time now as I’m fairly happy with Observium – I do intend to check in periodically though, as there’s now a script to convert directly from Observium to LibreNMS, which would be very useful.
As a side note, Cacti seems to have had a resurgence of late, weighing in with a brand new interface (although sadly, the website hasn’t been updated) and a much more active community, with all source code moved to GitHub. I did review this product again recently, but my personal preference still lies with Observium.
The point around monitoring is an important one. There’s nothing worse than being tapped on the shoulder and told there’s an issue with one of your systems – particularly when you appear to be blissfully unaware. Personally, I do not like surprises, or being caught off guard. In fact, I’ve taken the step of making our current network status available on a large TV screen that everyone in our office can see. Similarly, I expect my staff to have the console open so they can also see what’s happening. The important considerations in monitoring are what to check, and at what frequency. Observium is a very flexible platform, but can be overly enthusiastic when sending alerts if not configured properly. I personally prefer visual alerts, as not everybody reads email. The monitoring tool (once installed) gives a detailed analysis of systems and how they perform over time, and provides a mechanism to determine where time, effort, and money need to be spent tending to poorly performing hosts.
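One simple way to tame an over-enthusiastic alerter is to require that a threshold breach persists across several consecutive polling cycles before anything fires. The sketch below illustrates that principle in plain Python; the function name, thresholds, and sample values are all illustrative, not Observium’s actual configuration.

```python
# Hypothetical sketch: suppress alert "flapping" by requiring a threshold
# breach to persist for several consecutive polling cycles before firing.
# Names and values are illustrative, not taken from any real platform.

def should_alert(samples, threshold, consecutive=3):
    """Return True only if the last `consecutive` samples all breach `threshold`."""
    if len(samples) < consecutive:
        return False
    return all(value > threshold for value in samples[-consecutive:])

# A single CPU spike does not fire an alert...
print(should_alert([40, 45, 95, 50], threshold=90))
# ...but a sustained breach does.
print(should_alert([40, 92, 94, 96], threshold=90))
```

The trade-off is detection latency: with a five-minute poll and `consecutive=3`, a genuine fault takes up to fifteen minutes to surface, which is why the frequency question matters as much as the threshold itself.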
Asset tracking and inventory
Years ago, asset registers were the must-haves for an IT director. This hasn’t changed much over the years, although with hardware quickly depreciating and reducing in cost, plus the added cyber security burden, it’s no longer feasible to keep outdated technology for business use. Sweating assets is another standard practice, but that really only applies for the life of the running operating system. You can replace the SATA drive with an SSD, increase the memory, and fine-tune the hardware in the BIOS to get the most out of a legacy system, but all of that is clearly negated once the operating system reaches end of life. Can you honestly say you know exactly where every single laptop, computer, iPad, and iPhone is within your organisation? No – and I think you’ll find that this is the (honest) response most people would provide. Being able to keep track of assets is essential – not just from the asset register perspective, but also from the security angle. Consider an asset that has “gone rogue” – you know its IP address, so that should give you an idea of where it’s located (perhaps), but do you know who is actually using it, or who last used it?
Asset tracking can give you this information quickly – in the event of an emergency, it is almost like the Bible. However, unless the database is updated at least daily, that information can become stale very quickly, and effectively useless. For any tracking to be effective, it should be capable of performing regular scans across the network to see what’s “out there”, collecting a variety of other associated information, such as hardware inventory, installed software, location, user, and so on. If you want a completely free product that does this very well (although there are limitations to the free version), then look at Open-AudIT. This is a very well written and capable tool that can also perform network scans using NMAP. Device discovery and inventory is typically handled using WMI classes and a variety of other techniques to expose data. This is then compiled into an XML file, which is submitted to the server as an HTTP POST request, where it is parsed and added to the database. From the history perspective, Open-AudIT started life as a product called WINventory, although that name was already registered elsewhere, so the author needed to change it. I personally used this platform for years with a high level of success and reliability. These days, I prefer Lansweeper. There is a free version ideal for smaller firms, but to get all the features, it’s a paid-for product. In all honesty, this has to be one of the best developed and supported platforms I’ve ever seen, and it goes well beyond simple asset tracking. It’s reasonably priced too, and won’t break your budget. What you get out of the box is fantastic, and if you are looking for something that can provide useful information in a heartbeat – almost in real time – then I strongly suggest you look at this platform. I’m not going to reinvent the wheel here – take a look at this product and I guarantee you’ll have an immediate use for it.
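The collect-serialise-submit-parse flow described above can be sketched in a few lines. This is a minimal illustration of the general pattern, not Open-AudIT’s real audit script or schema – the element names (`audit`, `hostname`, `os`, `serial`) are invented for the example.

```python
# Minimal sketch of an audit-style inventory round trip: the client compiles
# collected facts into XML, and the server parses that XML back into a row
# ready for the database. Element names are illustrative, not a real schema.

import xml.etree.ElementTree as ET

def build_inventory_xml(hostname, os_name, serial):
    """Client side: serialise collected facts into an XML document."""
    root = ET.Element("audit")
    ET.SubElement(root, "hostname").text = hostname
    ET.SubElement(root, "os").text = os_name
    ET.SubElement(root, "serial").text = serial
    return ET.tostring(root, encoding="unicode")

def parse_inventory_xml(payload):
    """Server side: parse the submitted XML into a dict of database fields."""
    root = ET.fromstring(payload)
    return {child.tag: child.text for child in root}

xml_doc = build_inventory_xml("WS-042", "Windows 10 Pro", "5CD1234XYZ")
print(parse_inventory_xml(xml_doc))
```

In the real product, the XML would travel as the body of an HTTP POST rather than a local variable, but the parse step on the server side is conceptually the same.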
Ticketing, helpdesk, and issue tracking
Helpdesk and ticketing systems aren’t essential to the running of an IT department, but they go a long way in providing a mechanism to track requests for assistance from users, whilst at the same time providing a form of knowledge base that can be searched when looking for the answer to a problem – particularly if it’s happened before. The helpdesk market used to be big years ago, with major players providing best-of-breed software – often with a hefty price tag. There are a number of open source helpdesk and issue tracking systems available on the internet, although several of these have sadly been abandoned – mostly due to the original author either not having the time to commit to the project, or simply tiring of providing it for free. I personally used ZenTrack for a number of years, and found it both flexible and accommodating. However, there were some severe limitations. Whilst this product had a parser to create a ticket out of a humble email, it didn’t support HTML – kind of useless given that today’s accepted email format is HTML by default. I worked with the original developer and created an HTML gateway so that an email plus attachments (users love screenshots, don’t they?) could be broken apart and a ticket created from the contents.
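Breaking an email apart into ticket material is the core of any such gateway. The sketch below shows the general idea using Python’s standard `email` module: walk the MIME parts, keep the HTML body, and record attachment filenames. It is an illustration of the technique, not the ZenTrack gateway itself, and the ticket field names are invented.

```python
# Hedged sketch of an email-to-ticket gateway: split a MIME message into an
# HTML body and attachment names, the raw material for a new helpdesk ticket.
# Ticket field names are illustrative, not from any real product.

from email import message_from_bytes, policy
from email.message import EmailMessage

def email_to_ticket(raw_bytes):
    """Parse a raw RFC 2822 message into hypothetical ticket fields."""
    msg = message_from_bytes(raw_bytes, policy=policy.default)
    ticket = {"subject": msg["Subject"], "from": msg["From"],
              "body_html": None, "attachments": []}
    for part in msg.walk():
        if part.get_filename():                      # an attachment
            ticket["attachments"].append(part.get_filename())
        elif part.get_content_type() == "text/html": # the HTML body
            ticket["body_html"] = part.get_content()
    return ticket

# Build a sample message to demonstrate the round trip.
msg = EmailMessage()
msg["Subject"] = "Printer on fire"
msg["From"] = "user@example.com"
msg.set_content("plain text fallback")
msg.add_alternative("<p>Please help!</p>", subtype="html")
msg.add_attachment(b"\x89PNG...", maintype="image", subtype="png",
                   filename="screenshot.png")
print(email_to_ticket(msg.as_bytes()))
```

In a real gateway you would also sanitise the HTML and store the attachment payloads, but the walk-and-classify loop above is the essential mechanism.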
Once again, the ZenTrack codebase was abandoned, and after new versions of PHP began deprecating the older functions that the now orphaned product used, I decided to write my own helpdesk platform. Development isn’t a new thing for me – I learned to write code years ago, and the main stack I work with is LAMP. I decided to make the most of this skill set, and within around one month, I had a fully operational system – one that the firm I work for still uses on a daily basis to this day. It’s never been published; it’s more along the lines of closed source code built on open source technologies. The cost for this particular product? Zero. I’m considering making the codebase available on GitHub for people to contribute to and fork if there is still an appetite for a product like this. I’m also adding project management capabilities into the platform, as I personally need to track various projects within my team, and also keep track of the associated costs within those projects. It’s often the case that projects require “sub-tasks” to be completed before you can mark the project as finished. This is also being actively worked on, so the application is still actively supported by me.
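The sub-task rule described above – a project can only be marked finished once every sub-task is complete – is simple to express in code. The author’s platform is unpublished PHP, so the following is purely an illustrative sketch of the logic, with invented class and method names.

```python
# Illustrative sketch (not the author's actual, unpublished PHP codebase):
# a project may only be marked finished once every sub-task is complete.

class Project:
    def __init__(self, name):
        self.name = name
        self.subtasks = {}  # sub-task name -> completed?

    def add_subtask(self, task):
        self.subtasks[task] = False

    def complete_subtask(self, task):
        self.subtasks[task] = True

    def can_finish(self):
        """True only when no sub-task remains open."""
        return all(self.subtasks.values())

p = Project("Helpdesk migration")
p.add_subtask("Export tickets")
p.add_subtask("Import into new platform")
p.complete_subtask("Export tickets")
print(p.can_finish())   # one sub-task is still open
p.complete_subtask("Import into new platform")
print(p.can_finish())
```

Enforcing the check in one place like `can_finish()` means the UI, the API, and any batch jobs all agree on when a project is genuinely done.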
IP address management / config management
As stupid as this sounds, there aren’t many firms that can honestly say they maintain an up-to-date IP address list. It’s either outrageously out of date, or contains incorrect information that is of no real use to anyone when the information is actually required. Arguably, your asset tracking system should have the ability to record the IP addresses it discovers, but this probably won’t cover everything – particularly if a specific IP address is assigned to a device that hasn’t been powered on for a while. In that case, if the system cannot be reached, no information will be collected. Again, you could use DHCP for everything, then use the MMC console to gain an overview of what’s assigned where. However, domain controllers and some other servers don’t like dynamic addressing, and static addresses are recommended for them. One of the best platforms I encountered over the years for managing IP addresses was phpIP. Sadly, this product hasn’t been touched in years, and still relies on the (virtually) defunct mysql_* functions (you may recall the switch to MySQLi some time ago). The code is also very old, but could still realistically be used. Again, this was another open source application I made several changes to – including writing in new features that I needed. The only real (dedicated) contender in terms of a lightweight and cheap solution is phpIPAM. I’m still playing with this system to see if it will fit my needs, but it looks promising so far.
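At its heart, the job these tools do is simple: record which addresses in a subnet are assigned, and report what is still free. The sketch below shows that core idea using only Python’s standard `ipaddress` module – phpIP and phpIPAM obviously do far more (VLANs, VRFs, history, discovery), and the function name here is invented for illustration.

```python
# A small sketch of the core IPAM job, using only the standard library:
# given a subnet and a list of assigned addresses, report what is free.
# phpIP / phpIPAM do far more; this function name is illustrative.

import ipaddress

def free_addresses(cidr, assigned):
    """Return usable host addresses in `cidr` not present in `assigned`."""
    network = ipaddress.ip_network(cidr)
    taken = {ipaddress.ip_address(a) for a in assigned}
    return [str(host) for host in network.hosts() if host not in taken]

# A /29 has six usable hosts (.1 to .6); two are already assigned.
free = free_addresses("192.168.1.0/29", ["192.168.1.1", "192.168.1.3"])
print(free)
```

Even a script this small beats the out-of-date spreadsheet, because the subnet arithmetic (network and broadcast addresses excluded by `hosts()`) is never done by hand.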
Consolidation of autonomous systems
Whilst the platforms and applications I’ve mentioned in this walk-through will easily provide you with all the information you need, there is a caveat – the data is stored in multiple autonomous systems that typically will not talk to each other natively. This has been an issue for a number of years, but has only recently been acknowledged and addressed. A consolidated view of your entire estate is vital for daily operational activity, so bringing these autonomous systems together in one central location is the final piece of the puzzle. There are a number of reasons why you should do this; efficiency is one obvious response, but flexibility and extensibility are the real justifications. To me, it’s more logical to present information to whoever requires it in an easily digestible format than to switch between platforms copying and pasting data. Sound ideal? Good. The system to use for the consolidation process is WordPress. If you think about what this platform actually is, it’s easy to see why it’s an obvious choice. WordPress is a CMS (Content Management System) at its core, and using a variety of plugins, plus a bit of imagination, it is the ideal platform to provide the functionality we need. I’ve recently completed this very task, and now have something that my staff actually want to use rather than feel they have to. For the sake of your sanity (and to reduce the time you need to read this article), I’m not going to detail the exact process here, but will do so in another article if readers are interested and would like a detailed walk-through.
Over to you – interested, and want to know more?