Level Up Your Logging
June 30, 2019
Engineer in Information Technology, Internet Company, 501-1000 employees
Score 7 out of 10
Overall Satisfaction with Graylog
Graylog is used to aggregate logs and SNMP traps from our network devices and Linux servers. We not only aggregate and store logs but also extract values, which makes logging far more searchable than combing flat files with Bash utilities (grep, cut, awk, etc.). For our critical devices, we also use it to forward logs to a room in our private chat service via a custom integration.
- Graylog does a great job of its core function: log aggregation, retention, and searching.
- Graylog has a very flexible configuration. The backend for storage is Elasticsearch, and MongoDB is used to store the configuration. You have the option to make your configuration as simple as possible by storing everything on one box, or you can scale everything out horizontally by using a cluster of Elasticsearch nodes and MongoDB servers with several Graylog servers pointed at all the necessary nodes.
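As a rough illustration, the single-box and clustered layouts differ mainly in a few server.conf settings. This is a sketch, not a complete config; the hostnames are hypothetical, and you should verify the key names against the documentation for your Graylog release:

```properties
# Single box: Graylog, MongoDB, and Elasticsearch all on one host
is_master = true
mongodb_uri = mongodb://localhost:27017/graylog
elasticsearch_hosts = http://localhost:9200

# Clustered (one of several Graylog nodes): point at a MongoDB replica set
# and multiple Elasticsearch nodes -- hostnames below are placeholders
#is_master = false
#mongodb_uri = mongodb://mongo1:27017,mongo2:27017,mongo3:27017/graylog?replicaSet=rs01
#elasticsearch_hosts = http://es1:9200,http://es2:9200,http://es3:9200
```

Exactly one Graylog node in a cluster is marked as master; the rest carry the same storage settings.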
- Graylog does a good job of abstracting away a fair portion of Elasticsearch index management (sharding, creation, deletion, rotation, etc).
- Some aspects of Graylog are less than intuitive. For example, if you want to run different extractor rules on different device types due to format differences, you need to create different inputs. Since inputs are their own processes that require ports to be bound to them, you either need a different IP address for each input or a different (read: non-standard) port, which can make the device configuration more complicated.
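To make the non-standard-port wrinkle concrete: if one syslog input (with its own extractors) is bound to port 5515 for Linux servers and another to 5514 for network devices, each sender has to be configured for "its" port. On a Linux box running rsyslog, that might look like the following sketch (hostname and ports are hypothetical):

```
# /etc/rsyslog.d/graylog.conf
# Forward everything over TCP (@@) to the "Linux servers" input on port 5515.
*.* @@graylog.example.com:5515

# Network devices would instead be pointed at the other input,
# e.g. UDP syslog on port 5514, configured on each device.
```

Every extra input means one more port number to keep track of across your fleet's configs.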
- Although Graylog abstracts quite a bit of Elasticsearch management away, it is by no means a turnkey solution. Upgrades to Graylog can require upgrades to Elasticsearch, which occasionally call for manual intervention in Elasticsearch itself. The same goes for MongoDB. If you're looking to scale out, there is some documentation to get you started, but the heavy lifting is on you.
- As everything is stored in Elasticsearch, there are no more flat files to tail; if you're moving from a "traditional" log aggregator like Syslog(-ng), a culture change is going to be required.
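The shift in day-to-day workflow looks roughly like this: pipelines of Bash utilities get replaced by search queries against the web UI or REST API. A sketch of the before and after (host, credentials, and the example query are placeholders; check the search API docs for your Graylog version):

```shell
# Before: flat files and Bash utilities
grep "Failed password" /var/log/auth.log | sort | uniq -c

# After: a relative-time search against the Graylog REST API
# (the same query string works in the web UI search bar)
curl -u admin:password \
  "http://graylog.example.com:9000/api/search/universal/relative?query=Failed+password&range=3600"
```

The upside is that extracted fields let you filter on structured values (e.g. source, facility) instead of text positions.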
- We do not purchase support, so the only operational cost is the time it takes to maintain it.
- All the components of Graylog that we use are free and open source, so there was no capital expense other than that of servers (repurposed from another recently-decommissioned project).
- If there is a software crash that doesn't recover gracefully, it's usually something obscure that takes a while to diagnose and fix. Unless you build out a distributed, more resilient system with no single points of failure, that downtime may have an impact on the organization or on industry compliance requirements.
We previously used Syslog(-ng) and considered moving to Logstash with Kibana as part of the standard ELK stack, as we are consumers of ELK in other scenarios, but we determined that maintaining a distributed Graylog system would require less work over time for our main objective than ELK. Splunk was also briefly considered, but holy wow do you pay significantly for licensing to get the same functionality Graylog provides.
If you already have a basic understanding of Elasticsearch and/or MongoDB, Graylog will be a great fit when it comes to log aggregation. It will be a decent option even if you don't have any experience, provided you have the time and willingness to roll up your sleeves that learning those tools will require. Graylog supports plugins to extend functionality for things like SNMP traps, telemetry collection, and solar flares. As is the case with most software with plugins, if the core functionality you are looking for (i.e., not logging) is based on a plugin, Graylog probably isn't for you. The majority of the plugins in the marketplace are developed by third parties looking to solve their specific use cases, so bug fixes and new features are not a given.