Likelihood to Recommend Apache Kafka is well-suited for most data-streaming use cases. Unless you have a specific use case that calls for a cloud PaaS such as Amazon Kinesis or Azure Event Hubs for your data lakes, Apache Kafka, once set up well, will take care of everything else in the background. Azure Event Hubs is good for cross-cloud use cases; with Amazon Kinesis I have no real-world experience, but I believe it is much the same.
Read full review We design StreamSets pipelines for most of our batch and streaming scenarios; a few of the best-suited use cases we have tried are: 1. JDBC to ADLS data transfer based on source refresh frequency. 2. Kafka to GCS. 3. Kafka to Azure Event Hub. 4. HDFS to ADLS data transfer. 5. Schema generation to generate Avro. The easy-to-design canvas, job scheduling, fragment creation and reuse, and the wide range of built-in stages make it an even more favorable tool for me to design data engineering pipelines.
Read full review Pros Really easy to configure: I've used other message brokers such as RabbitMQ, and compared to them, Kafka's configuration is very easy to understand and tweak. Very scalable: easily configured to run on multiple nodes, allowing for ease of parallelism (assuming your queues/topics don't have to be consumed in the exact same order the messages were delivered). Not exactly a feature, but I trust Kafka will be around for at least another decade because active development has remained strong and there is a lot of financial backing from Confluent, LinkedIn, and probably many other companies that are using it (which, anecdotally, is many).
Read full review An easy-to-use canvas to create data engineering pipelines. A wide range of available stages, i.e. sources, processors, executors, and destinations. Supports both batch and streaming pipelines. Scheduling is far easier than cron. Integration with key vaults for secrets fetching.
Read full review Cons Sometimes it becomes difficult to monitor our Kafka deployments. We've largely been able to overcome this using AWS MSK, a managed service for Apache Kafka, but a separate monitoring dashboard would have been great. Simplify the process for local deployment of Kafka and provide a user interface to get visibility into the different topics and the messages being processed. The learning curve around creating brokers and topics could be reduced (a short topic-creation sketch follows below).
Read full review Monitoring/visualization could be improved and enhanced considerably (e.g., to inspect a job and see what happened with a data transfer seven days back). The logging mechanism could be simplified (logs can be filtered by "ERROR", "DEBUG", "ALL", etc., but it still takes time to get familiar with them). Auto-scalability for heavy loads (transferring more than 5 million records from JDBC to an ADLS destination as Avro files takes a long time). A concept of global variables is missing and should be added.
Read full review Likelihood to Renew Kafka is quickly becoming a core product of the organization; indeed, it is replacing older messaging systems. No better alternative has been found yet.
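As a rough illustration of the broker/topic learning curve mentioned in the cons above, here is a minimal sketch of creating a topic programmatically with Kafka's AdminClient. The broker address, topic name, and partition/replication settings are assumptions for the example, not values from any review.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; point this at your own cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical topic: 3 partitions, replication factor 1.
            NewTopic topic = new NewTopic("example-events", 3, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}

The same result can be achieved with the kafka-topics command-line tool; the point of the sketch is only that topic creation is a few lines once the cluster address is known.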
Read full review Usability Apache Kafka is highly recommended for developing loosely coupled, real-time processing applications. Apache Kafka also provides property-based configuration; the producer, consumer, and broker each have their own separate property file.
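To illustrate the property-based configuration described above, here is a minimal sketch of separate producer and consumer property sets. The broker address and group id are assumptions for the example, not values from the review.

import java.util.Properties;

public class KafkaConfigSketch {
    public static void main(String[] args) {
        // The producer keeps its own property set.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("acks", "all"); // wait for full acknowledgement

        // The consumer keeps a separate property set of its own.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "example-consumers"); // hypothetical consumer group
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("auto.offset.reset", "earliest"); // start from the beginning if no offset exists

        System.out.println("Producer config: " + producerProps);
        System.out.println("Consumer config: " + consumerProps);
    }
}

In practice these settings would typically live in separate producer and consumer property files, matching the separation the reviewer describes.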
Read full review Support Rating Paid support for Apache Kafka is available from Confluent, which includes the same team that created Kafka at LinkedIn, so they know this software inside and out. Moreover, Apache Kafka is well known, and best-practice documents and deployment scenarios are easily available for download, for example from eBay, LinkedIn, Uber, and the NYTimes.
Read full review Alternatives Considered I have used other messaging/queue solutions that are a lot more basic than Confluent Kafka, as well as another solution that is no longer on the market called Xively, which was bought and "buried" by Google. In comparison, those solutions offer far less functionality and address different needs.
Read full review StreamSets is a one-stop solution for designing data engineering pipelines and doesn't require deep programming knowledge; it's so user-friendly that anyone on the team can contribute to a pipeline design. In Hadoop, one has to be programming-proficient to use the various components like Hive, HDFS, Kafka, etc., but in StreamSets all these stages are built in and ready to use with minor configuration.
Read full review Return on Investment Positive: we got a quick and reliable pub/sub model implemented, so data flows easily across components (a minimal sketch follows below). Positive: it's scalable, so we can develop small and scale up for real-world scenarios. Negative: it's easy to get into a confusing situation if you are not experienced yet or something strange has happened (rare, but it does); troubleshooting such situations can take time and effort. Read full review Simplified and improved the overall data ingestion and integration process. Support for various heterogeneous source systems like RDBMS, Kafka, Salesforce, and Key Vault. A secure, easy-to-launch integration tool. Read full review
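As a rough sketch of the "quick and reliable pub/sub model" mentioned in the return-on-investment item above, the following publishes one record and polls it back. The topic name, broker address, consumer group, and record contents are assumptions for illustration only.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class PubSubSketch {
    public static void main(String[] args) {
        // Publish one record to a hypothetical "example-events" topic.
        Properties prod = new Properties();
        prod.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        prod.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        prod.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prod)) {
            producer.send(new ProducerRecord<>("example-events", "order-42", "created"));
        }

        // Subscribe and poll the record back with a consumer in its own group.
        Properties cons = new Properties();
        cons.put("bootstrap.servers", "localhost:9092");
        cons.put("group.id", "example-readers"); // hypothetical consumer group
        cons.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        cons.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        cons.put("auto.offset.reset", "earliest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cons)) {
            consumer.subscribe(Collections.singleton("example-events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.key() + " -> " + record.value());
            }
        }
    }
}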