- Tech Details
- Modern: most Admin-, Server-, and/or DevOps-centric software worth its salt will have the ability to configure its services and features from a small web page and REST API. Logstash is no exception.
- Speed: Logstash configuration is just a reload away. While you CAN use the GUI (see point above), editing the configuration files directly is also a great option. Our configuration files are hosted in an internal repository, so once we make a change we can track it, do a reload, and see those changes reflected in Logstash almost immediately (dependent on the data source's speed and flow of data).
- Configuration: Logstash is very simple to configure, and fulfills our desire to keep configuration files in a plaintext format.
- Open-source friendly: Logstash is open source, and built with open-source tools.
- Memory: Logstash is a HOG if you are deploying it on commodity (i.e. cheap and old) hardware: you will need at least 2GB just for Logstash. So don't expect to run your entire ELK stack on one AMD Athlon machine.
- Overlap: Logstash fills in the area of the ELK stack where it makes the most sense: as a log file transformer/shipper. However, if you start extending that stack with other components, you start seeing where features of Logstash may be implemented or solved much more easily (or better, or to a higher degree of resolution) in those additional components.
- More Overlap: Since my team employs Syslog-ng extensively, Logstash can sometimes get in the way (and this may be a problem for DevOps stacks overall). You can configure Syslog-ng to record certain information from a source, filter that data, and even export it in a particular format; Logstash will then pick that data up and parse it. However, if you don't keep your Syslog-ng configuration files and your Logstash configuration files in sync, your results will not be what you expected, and that can translate into hours (sometimes days) of work hunting down a line item in a configuration file.
- Positive: LogStash is open source. While this should not be directly construed as Free, it's a great start towards Free. Open source means that while it's free to download, there are no regular patch schedules, no support from a company, no engineer you can get on the phone or email to solve a problem. You are your own engineer. You are your own phone call. You are your own ticketing system.
- Negative: Since Logstash's features are so extensive, you will often find yourself saying "I can just solve this problem better further down / up the stack!". This is not necessarily a BAD quality; it really only depends on what your project's aim is.
- Positive: LogStash is a dream to configure and run. A few hours of work, and you are on your way to collecting and shipping logs to their required addresses!
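The plaintext configuration praised above is just a pipeline file with input, filter, and output blocks. A minimal sketch of the kind of file the reviewer describes (the file path, Elasticsearch host, and index name are illustrative assumptions, not the reviewer's actual setup):

```
# /etc/logstash/conf.d/app-logs.conf (hypothetical path)
input {
  file {
    path => "/var/log/app/*.log"        # tail application log files
    start_position => "beginning"
  }
}

filter {
  grok {
    # parse standard Apache-style access lines into named fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"  # daily indices
  }
}
```

Because it is plain text, a file like this can live in the internal repository mentioned above and be picked up on reload.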
Elasticsearch is an obvious inclusion: using Logstash with its native DevOps stack is really rational.
- Plugin ecosystem allows modular extensions.
- Tight integration with the Elastic products Beats and Elasticsearch, so minimal setup is required when using those tools.
- Filter plugins are powerful for extracting and enriching input data.
- Since it's a Java product, JVM tuning must be done to handle high load.
- The persistent queue feature is nice, but I feel like most companies would want to use Kafka as a general storage location for persistent messages for all consumers to use. Using some pipeline of "Kafka input -> filter plugins -> Kafka output" seems like a good solution for data enrichment without needing to maintain a custom Kafka consumer to accomplish a similar feature.
- I would like to see more documentation around creating a distributed Logstash cluster because I imagine for high ingestion use cases, that would be necessary.
- Logstash has allowed me to ingest log files of various patterns into Elasticsearch for analysis using its flexible Grok parser.
- I've been able to perform web analytics over datasets using Logstash's GeoIP and reverse DNS lookups.
- By providing a simple mechanism for adding plugins, Logstash has allowed me to install extensions on top of those already pre-installed.
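The "Kafka input -> filter plugins -> Kafka output" enrichment pipeline suggested above might look roughly like this (the broker address, topic names, and the added field are hypothetical):

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["raw-events"]       # consume unenriched messages
  }
}

filter {
  # stand-in for real enrichment logic
  mutate {
    add_field => { "enriched" => "true" }
  }
}

output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id => "enriched-events"  # publish enriched messages back
  }
}
```

As the reviewer notes, this avoids maintaining a custom Kafka consumer just to do enrichment.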
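The Grok parsing and GeoIP / reverse-DNS lookups described above all live in the filter block. A sketch of what that can look like (the log pattern and field names are made-up examples, not the reviewer's actual config):

```
filter {
  grok {
    # extract client IP, HTTP verb, and path from a custom log line
    match => { "message" => "%{IPORHOST:client_ip} %{WORD:verb} %{URIPATH:path}" }
  }
  geoip {
    source => "client_ip"          # adds a geoip field with location data
  }
  dns {
    reverse => ["client_ip"]       # resolve the IP back to a hostname
    action => "append"
  }
}
```

This is the kind of enrichment that makes web analytics over raw access logs possible without touching the application emitting them.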
- Logstash's design is definitely perfect for the ELK use case. Logstash has "drivers" with which it can ingest from virtually any source. This takes away the headache of the source having to implement those "drivers" to store data in ES.
- Logstash is fast, very fast. In my observation, you don't need more than 1 or 2 servers for even big projects.
- Data in different shapes, sizes, and formats? No worries, Logstash can handle it. It lets you write simple rules to programmatically make decisions on data in real time.
- You can change your data on the fly! This is the CORE power of Logstash. The concept is similar to Kafka streams, the difference being that the source and destination are your application and ES, respectively.
- Logstash is all command line, and it can become overwhelming for new developers. If it has any sort of UI, then I don't know about it.
- Documentation could be better. But this is a work in progress, and with time I am sure the community will help with documentation.
- Community support! Being a relatively new tool, adoption is still maturing, and finding answers can be challenging sometimes.
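The "simple rules" for changing data on the fly are written as conditionals and mutate filters in the pipeline. A sketch under assumed field names (status, bytes, verb, and the timestamp format are illustrative, not from the review):

```
filter {
  # tag 5xx responses for alerting downstream
  if [status] =~ /^5\d\d$/ {
    mutate { add_tag => ["server_error"] }
  }

  mutate {
    convert => { "bytes" => "integer" }   # reshape types on the fly
    rename  => { "verb" => "http_method" }
  }

  date {
    # use the log's own timestamp instead of ingestion time
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}
```

Each event flows through these rules in real time, which is the Kafka-streams-like behavior described above.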
It may not be appropriate for analyzing data sets that depend on each other but come from different data sources, because Logstash works on the data at hand and does not wait for other data to arrive. It would be unwise for Logstash to handle complicated, long-running transformations, because data is ingested and ejected quickly: the faster you do it, the safer.
- Positive: Learning curve was relatively easy for our team. We were up and running within a sprint.
- Positive: Managing Logstash has generally been easy. We configure it and usually don't have to worry about misbehavior.
- Negative: Updating/rehydrating Logstash servers has been a little challenging. We sometimes even lose data while Logstash is down. It requires more in-depth research and experimentation to figure out the fine-grained details.
- Negative: This is now one more application/skill/server to manage. Like any other server, it requires proper grooming or else you will get in trouble. It is also a single point of failure that can render other servers useless if it is not running.
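One partial mitigation for losing in-flight data across restarts is Logstash's persistent queue, enabled in logstash.yml (the sizes and path below are placeholders). Note this only protects events Logstash has already received; events that sources drop while Logstash is unreachable still need buffering upstream (e.g. in a broker):

```
# logstash.yml (illustrative values)
queue.type: persisted                  # spool events to disk instead of memory
queue.max_bytes: 4gb                   # cap on-disk queue size
path.queue: /var/lib/logstash/queue    # where the queue files live
```

This trades some throughput for durability, which may be acceptable given the rehydration pain described above.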