Logstash can be compared to other ETL frameworks or tools, but it is also complementary to several of them, Kafka for example. I would suggest using Logstash not only when the rest of the ELK stack is available, but also as a self-hosted event collection pipeline for various …
Well suited: most local runs on datasets and non-prod systems, where scalability is not a problem at all. Being able to include data from multiple types of data sources is an added advantage, and MLlib is a decent built-in library that covers most common ML tasks (a minimal sketch follows). Less appropriate: we had to build a RecSys where the music dataset we used was around 300+ GB, and we ran into memory issues, including occasional out-of-memory errors. MLlib also lacks support for advanced analytics and deep-learning frameworks. Finally, understanding the internals of Apache Spark is very difficult for beginners.
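For context, a minimal sketch of the kind of MLlib task the reviewer describes, assuming a Spark 3.x environment; the file path, feature columns, and label column are hypothetical placeholders, not from the reviewer's project:

```python
# Minimal MLlib sketch (Spark 3.x assumed); path and column names are made up.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Hypothetical dataset with numeric feature columns and a binary "label" column.
df = spark.read.csv("data/train.csv", header=True, inferSchema=True)

# Assemble raw columns into the single vector column MLlib estimators expect.
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
model = LogisticRegression(featuresCol="features", labelCol="label").fit(
    assembler.transform(df)
)
print(model.coefficients)
```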
Perfect for projects where Elasticsearch makes sense: if you decide to employ ES in a project, then you will almost inevitably use LogStash, and you should anyway. Such projects would include:
1. Data science (reading, recording, or measuring web-based analytics and metrics).
2. Web scraping (which was one of our earlier projects involving LogStash).
3. Syslog-ng management: while I did point out that it can be a bit of an electric boogaloo to find an errant configuration item, it is still worth implementing syslog-ng management via LogStash. Being able to fine-tune your log messages and then pipe them to other destinations depending on the data being read in (sketched below) is incredibly powerful, and I would say it is exemplary of what modern computer science looks like: less specialization in mathematics, and more specialization in storing and recording data (i.e., less engineering and more design).
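A minimal sketch of the routing idea in point 3, assuming a syslog listener; the port, Kafka topic, and index name are made-up placeholders:

```
# Hypothetical pipeline: listen for syslog, then route by severity.
input {
  syslog { port => 5514 }
}
output {
  if [severity] <= 1 {
    # Emergencies and alerts go to a (made-up) paging topic.
    kafka {
      bootstrap_servers => "kafka:9092"
      topic_id          => "pager-events"
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}
```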
Logstash's design is definitely perfect for the ELK use case. Logstash has "drivers" (input plugins) through which it can ingest from virtually any source, which takes away the headache of each source having to implement those "drivers" itself to store data in ES.
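A minimal sketch of that "drivers" idea, with a few stock input plugins feeding a single Elasticsearch output; the ports, path, hosts, and index pattern are placeholders:

```
input {
  beats { port => 5044 }                   # events shipped by Filebeat/Beats
  file  { path => "/var/log/app/*.log" }   # local files tailed directly
  http  { port => 8080 }                   # events POSTed over HTTP
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "events-%{+YYYY.MM.dd}"
  }
}
```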
Logstash is fast, very fast. In my experience, you don't need more than one or two servers, even for big projects.
Data in different shapes, sizes, and formats? No worries, Logstash can handle it. It lets you write simple rules that make decisions on data programmatically, in real time.
You can change your data on the fly! This is the CORE power of Logstash. The concept is similar to Kafka Streams, the difference being that the source and destination are your application and ES, respectively.
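A minimal filter sketch of the on-the-fly rules described in the last two points; the field names and grok pattern are illustrative, not from any real pipeline:

```
# Parse, branch, and mutate events as they stream through.
filter {
  grok { match => { "message" => "%{IPORHOST:client} %{WORD:verb} %{NUMBER:status}" } }
  if [status] == "500" {
    mutate { add_tag => ["server_error"] }   # tag errors for routing later
  }
  mutate {
    convert      => { "status" => "integer" }  # cast the parsed field
    remove_field => ["message"]                # drop the raw line once parsed
  }
}
```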
Since it's a JVM-based product, JVM tuning must be done to handle high load.
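For example, heap sizing is usually the first knob, set in Logstash's jvm.options file; the values below are illustrative, not recommendations:

```
# Illustrative config/jvm.options entries (examples, not recommendations):
# pin the heap to a fixed size so the JVM doesn't resize it under load
-Xms4g
-Xmx4g
```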
The persistent queue feature is nice, but I feel like most companies would want to use Kafka as a general storage location for persistent messages for all consumers to use. A pipeline of "Kafka input -> filter plugins -> Kafka output" seems like a good solution for data enrichment without needing to maintain a custom Kafka consumer to accomplish the same thing.
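A minimal sketch of that "Kafka input -> filter plugins -> Kafka output" pattern; the broker address, topic names, group id, and enrichment field are hypothetical:

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics            => ["raw-events"]
    group_id          => "logstash-enricher"
  }
}
filter {
  # Stand-in for real enrichment logic (lookups, parsing, etc.).
  mutate { add_field => { "enriched_by" => "logstash" } }
}
output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id          => "enriched-events"
  }
}
```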
I would like to see more documentation around creating a distributed Logstash cluster, because I imagine it would be necessary for high-ingestion use cases.
If the team looking to use Apache Spark is not used to debugging jobs and tweaking settings to ensure maximum optimization, it can be frustrating. However, the documentation and the support of the community on the internet can help resolve most issues. Moreover, it is highly configurable and integrates with different tools (e.g., it can be used by dbt Core), which increases the scenarios where it can be used.
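As an illustration of the kind of tweaking meant here, a PySpark sketch that sets a few common job settings; the values are illustrative starting points, not recommendations for any real cluster:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("tuned-job")
    # Fewer shuffle partitions for small data, more for large shuffles.
    .config("spark.sql.shuffle.partitions", "200")
    # Executor sizing is usually the first knob under memory pressure.
    .config("spark.executor.memory", "4g")
    .config("spark.executor.cores", "2")
    # Adaptive query execution lets Spark 3+ pick better plans at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)
```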
1. It integrates very well with Scala and Python.
2. Its SQL interoperability is very easy to understand (sketched below).
3. Apache Spark is way faster than competing technologies.
4. The Apache community's support for Spark is huge.
5. Execution times are faster compared to others.
6. There are a large number of forums available for Apache Spark.
7. Sample code for Apache Spark is simple and easy to find.
8. Many organizations use Apache Spark, so many solutions are available for existing applications.
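A minimal sketch of the SQL interoperability in point 2, using PySpark; the data and view name are made-up examples:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-interop").getOrCreate()

df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])
df.createOrReplaceTempView("people")

# The same data is now queryable with plain SQL and with the DataFrame API.
spark.sql("SELECT name FROM people WHERE age > 30").show()
df.filter(df.age > 30).select("name").show()
```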
Compared to similar technologies, Spark ends up being a one-stop shop. You can achieve so much with this one framework, instead of having to stitch and weave multiple technologies from the Hadoop stack, all while getting incredible performance, minimal boilerplate, and the ability to write your application in the language of your choosing.
MongoDB and Azure SQL Database are just that: databases. They allow you to pipe data into a database, which means that a lot of the log filtering becomes a simple exercise of querying information from a DBMS. However, LogStash was chosen for its ease of integration with our choice of ELK. Elasticsearch is an obvious inclusion: using Logstash with its native DevOps stack is really the rational choice.
Positive: The learning curve was relatively gentle for our team. We were up and running within a sprint.
Positive: Managing Logstash has generally been easy. We configure it and usually don't have to worry about misbehavior.
Negative: Updating/rehydrating Logstash servers has been a little challenging. We sometimes even lose data while Logstash is down. Figuring out the fine-grained details requires more in-depth research and experimentation.
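One option worth noting (hedged, since the review doesn't say what was tried): Logstash's persistent queue, configured in logstash.yml, buffers in-flight events on disk so they survive restarts. It doesn't help with events that arrive while the process is down, but it protects events already ingested; the values below are illustrative:

```
# Illustrative logstash.yml settings (sizes and path are examples).
queue.type: persisted
queue.max_bytes: 4gb
path.queue: /var/lib/logstash/queue
```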
Negative: This is one more application/skill/server to manage. Like any other server, it requires proper grooming, or else you will get into trouble. It is also a single point of failure, which can render other servers useless if it is not running.