Apache Kafka is an open-source stream processing platform developed by the Apache Software Foundation and written in Scala and Java. The Kafka event streaming platform is used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
Db2
Score 8.5 out of 10
Db2 is a family of relational database software solutions offered by IBM. It includes the standard Db2 and Db2 Warehouse editions, deployable either in the cloud or on-premises.
Apache Kafka is well-suited for most data-streaming use cases. Unless you have a specific use case that calls for a cloud PaaS such as Amazon Kinesis or Azure Event Hubs for your data lakes, Apache Kafka, once set up well, will take care of everything else in the background. Azure Event Hubs is good for cross-cloud use cases; with Amazon Kinesis I have no real-world experience, but I believe it is much the same.
I have primarily used it as the basis for a SIS, but I have also migrated more than a few systems from their previous database systems (FileMaker, MySQL, etc.) to DB2. DB2 does have a better structural approach than FileMaker, which allows for more data consistency, but this can also lead to an inflexibility that is sometimes counterintuitive when trying to accommodate the flexibility of the work environment, as schools tend to have an all-in-one approach.
Really easy to configure. I've used other message brokers such as RabbitMQ, and compared to them, Kafka's configuration is very easy to understand and tweak.
Very scalable: easily configured to run on multiple nodes, allowing for easy parallelism (assuming your queues/topics don't have to be consumed in the exact order the messages were delivered); see the consumer sketch after this list.
Not exactly a feature, but I trust Kafka will be around for at least another decade because active development has continued to be strong and there's a lot of financial backing from Confluent and LinkedIn, and probably many other companies who are using it (which, anecdotally, is many).
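To make the parallelism point concrete, here is a minimal consumer sketch, assuming a local broker and a hypothetical "orders" topic (both are illustrative, not from the reviews above). Running several copies of this process with the same group.id makes Kafka split the topic's partitions among them automatically.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ParallelConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "order-processors");        // same group.id => partitions are shared
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // hypothetical topic name
            while (true) {
                // Each instance receives records only from the partitions assigned to it
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}
```

Note that ordering is guaranteed only within a partition, which is exactly the caveat the reviewer raises about consuming messages in the exact order they were delivered.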
Sometimes it becomes difficult to monitor our Kafka deployments. We've been able to overcome it largely using AWS MSK, a managed service for Apache Kafka, but a separate monitoring dashboard would have been great.
Simplify the process for local deployment of Kafka and provide a user interface to get visibility into the different topics and the messages being processed.
Learning curve around creating brokers and topics could be simplified (see the topic-creation sketch below)
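For reference, programmatic topic creation is only a few lines with Kafka's AdminClient API. This is a minimal sketch assuming a local broker; the topic name, partition count, and replication factor are illustrative values, not recommendations.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions, replication factor 3 -- illustrative values only
            NewTopic topic = new NewTopic("orders", 6, (short) 3);
            admin.createTopics(List.of(topic)).all().get(); // block until the broker confirms
        }
    }
}
```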
The DB2 database is a solid option for our school. We have been on this journey for 3-4 years now, so we are still adapting to what it can do. We will renew our use of DB2 because we don't see a major need to change. Also, changing a main database in a school environment is a major project, so we'll avoid that if possible.
Apache Kafka is highly recommended for developing loosely coupled, real-time processing applications. Also, Apache Kafka provides property-based configuration: the producer, consumer, and broker each have their own separate property file (sketched below).
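A minimal sketch of that separation, assuming a producer.properties file sits alongside the application (the file name, its location, and the topic are illustrative); the consumer and broker would each load their own file the same way.

```java
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PropsProducer {
    public static void main(String[] args) throws Exception {
        // producer.properties would define at least bootstrap.servers,
        // key.serializer, and value.serializer -- the producer's own config,
        // kept separate from the consumer's and the broker's files.
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("producer.properties")) {
            props.load(in);
        }

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "key-1", "hello")); // hypothetical topic
            producer.flush(); // make sure the record is actually sent before exiting
        }
    }
}
```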
You have to be well versed in using the technology, not only from a GUI interface but from a command line interface to successfully use this software to its fullest.
I have never had DB2 go down unexpectedly. It just works solidly every day. When I look at the logs, sometimes DB2 has figured out there was a need to build an index; instead of waiting for me to do it, the database automatically created the index for me. At my current company, we have had zero issues for the past 8 years. We have upgraded the server 3 times and upgraded the OS each time, and the only thing we saw was that DB2 got better and faster. It is simply amazing.
The performance is exceptional if you take care to maintain the database. It is a very powerful tool and at the same time very easy to use. In our installation, we have a DB machine on the mainframe with access to the database through ODBC connectors directly from the branch servers, with a fabulous end-user experience.
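As an illustration of that access pattern from an application's point of view (using the Db2 JDBC driver rather than ODBC, since the idea is the same): a minimal connectivity check, with host, port, database, and credentials as placeholders. It assumes the IBM Data Server Driver for JDBC (db2jcc) is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Db2Ping {
    public static void main(String[] args) throws Exception {
        // Host, port, database name, and credentials are all placeholders.
        String url = "jdbc:db2://db2host.example.com:50000/SAMPLE";

        try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpass");
             Statement st = conn.createStatement();
             // SYSIBM.SYSDUMMY1 is Db2's one-row dummy table, handy for connectivity checks
             ResultSet rs = st.executeQuery("SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
            if (rs.next()) {
                System.out.println("Connected, server time: " + rs.getTimestamp(1));
            }
        }
    }
}
```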
Support for Apache Kafka (if you are willing to pay) is available from Confluent, which includes the same team that created Kafka at LinkedIn, so they know this software inside and out. Moreover, Apache Kafka is well known, and best-practice documents and deployment scenarios are readily available for download, for example from eBay, LinkedIn, Uber, and The New York Times.
Easily the best product support team. :) Whenever we have questions, they have answered those in a timely manner and we like how they go above and beyond to help.
I used other messaging/queue solutions that are a lot more basic than Confluent Kafka, as well as another solution that is no longer on the market called Xively, which was bought and "buried" by Google. In comparison, those solutions offer far fewer features and address different needs.
DB2 was more scalable and more easily configurable than the other products we evaluated and shortlisted, in terms of both functionality and pricing. IBM also gave a good on-premises demo and provided us a sandbox to test out and play with the product, and DB2 at the time came out better than the other similar products.
By using DB2 only to support my IzPCA activities, my knowledge here is somewhat limited.
Anyway, from what I was able to understand, DB2 is extremely scalable.
Maybe the information below could serve as an example of scalability.
The customer has a huge mainframe environment: 13 z15 CECs, around 80 LPARs, and maybe more than 50 sysplexes (I am not totally sure about that last figure).
Today we have 7 IzPCA databases, each one in a distinct sysplex.
Plans are underway to end up with a small LPAR containing only one DB2 subsystem and only one database, transmit the data to it from the many other LPARs, and then process all the data in that single database.
The IzPCA collect process (read the received data, manipulate it, and insert rows into the tables) is today a huge process, demanding many elapsed hours and lots of CPU.
Almost 100% of the tables are PBR type and the insert jobs run in parallel, but in 4 of the 7 databases it is still a really huge and long process.
Combining the INSERT loads from the 7 databases into only one would be impossible.
But IzPCA recently introduced a new feature called "Continuous Collector".
With that feature, small amounts of data are transmitted to the central LPAR every 5 minutes (or even less) and processed immediately, in a short period of time and with small use of CPU, instead of one or two transmissions per day of very large amounts of data, with the corresponding collect jobs running only once or twice a day, with long elapsed times and huge consumption of CPU.
I suspect the total CPU seconds consumed will be more or less the same in both cases, but with the new method it will occur in small bursts many times a day!
Positive: you get a quick and reliable pub/sub model implemented - data flows easily across components (see the sketch after this list).
Positive: it's scalable, so we can develop small and scale up for real-world scenarios.
Negative: it's easy to get into a confusing situation if you are not experienced yet or something strange has happened (rare, but it does happen). Troubleshooting such situations can take time and effort.
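To make the pub/sub point concrete, here is a minimal publisher sketch, assuming a local broker and an illustrative "component-events" topic (both are assumptions, not details from the review). It pairs with the consumer sketch earlier: any number of consumer groups can subscribe to the same topic and each receives the full stream independently, which is what lets data flow across components without coupling them.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The publisher neither knows nor cares which components are subscribed.
            producer.send(new ProducerRecord<>("component-events", "component-a", "started"));
        }
    }
}
```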