ClickHouse
Score N/A
ClickHouse is an open-source, column-oriented OLAP database system that enables real-time analytical reports using SQL queries. With linear scalability, it handles trillions of rows and petabytes of data. ClickHouse Cloud offers a scalable serverless solution for real-time analytics.

Db2
Score 8.7 out of 10
Db2 is a family of relational database software solutions offered by IBM. It includes standard Db2 and Db2 Warehouse editions, deployable either on-cloud or on-premise.
Starting price: $0
Pricing

Editions & Modules

ClickHouse: No answers on this topic

Db2:
Db2 on Cloud Lite: $0
Db2 on Cloud Standard: $99 per month
Db2 Warehouse on Cloud Flex One: $898 per month
Db2 on Cloud Enterprise: $946 per month
Db2 Warehouse on Cloud Flex for AWS: $2,957 per month
Db2 Warehouse on Cloud Flex: $3,451 per month
Db2 Warehouse on Cloud Flex Performance: $13,651 per month
Db2 Warehouse on Cloud Flex Performance for AWS: $13,651 per month
Db2 Standard Edition: Contact Sales
Db2 Advanced Edition: Contact Sales
Offerings

Pricing Offerings: ClickHouse / Db2
Free Trial: Yes / Yes
Free/Freemium Version: Yes / Yes
Premium Consulting/Integration Services: Yes / Yes
Entry-level Setup Fee: Optional / Optional
Additional Details
Pay for what is used:
- It automatically scales compute resources up and down based on the user's workload.
- It scales storage and compute separately.
- It automatically scales unused resources down to zero so that users don't pay for idle services.
The most important thing when using ClickHouse is to be sure that your scenario really is one it was designed for. Many users assume that because a database is very fast for one specific use case, that speed extrapolates to other, usually quite different, contexts that have not been analyzed beforehand.
ClickHouse is an analytical database and should be used as such: where the information is stored correctly, the data volumes are genuinely large, and the workload is not the typical traditional query that touches several columns with multiple aggregations; ClickHouse is not the solution for that.
On the other hand, if your case is not one of the above, it is quite possible that ClickHouse can help you. Where ClickHouse shines is in aggregations over a particular column across large volumes of data.
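To make that sweet spot concrete, here is a minimal sketch of the kind of single-column aggregation over a large table where ClickHouse tends to shine. It uses the clickhouse_connect Python client against a hypothetical page_views table; neither the table nor the connection details come from the review itself.

```python
# Minimal sketch: a single-column aggregation over a large table, the access
# pattern where a column store like ClickHouse shines. Table, columns, and
# connection details are hypothetical.
import clickhouse_connect

client = clickhouse_connect.get_client(host='localhost', username='default', password='')

result = client.query(
    """
    SELECT
        toDate(event_time) AS day,
        count()            AS views,
        uniq(user_id)      AS unique_users
    FROM page_views
    WHERE event_time >= now() - INTERVAL 30 DAY
    GROUP BY day
    ORDER BY day
    """
)
for day, views, unique_users in result.result_rows:
    print(day, views, unique_users)
```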
I have primarily used it as the basis for a SIS, but I have also migrated more than a few systems from other database systems (FileMaker, MySQL, etc.) to DB2. DB2 does have a better structural approach than FileMaker, which allows for more data consistency, but this can also lead to an inflexibility that is sometimes counterintuitive when trying to accommodate the flexibility of the work environment, as schools tend to favor an all-in-one approach.
Their MergeTree table engine provides impressive performance for bulk data inserts (illustrated in the sketch after this list).
Not only inserts: the way the MergeTree engine uses primary keys to sort the data and skip granules is also the secret behind its ridiculously fast queries.
Data compression is also great.
They provide special table engines and functions that let you read data directly from other sources such as S3.
Since it is written in C++, you get very granular data types, including special ones like Enum, LowCardinality, etc.; they save a lot of storage since values are stored as integers.
ClickHouse's functions, beyond the ones that follow the ANSI standard, are also awesome and useful.
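The sketch below ties several of those points together: a MergeTree table whose ORDER BY key drives sorting and granule skipping, LowCardinality and Enum8 columns stored as integers, a bulk insert, and a read through the s3 table function. It uses the clickhouse_connect Python client; the table, columns, and S3 URL are hypothetical.

```python
# Hedged sketch of the features above; names, schema, and the S3 URL are
# made up for illustration.
from datetime import datetime
import clickhouse_connect

client = clickhouse_connect.get_client(host='localhost', username='default', password='')

# MergeTree table: ORDER BY defines the sparse primary index used to sort the
# data and skip granules; LowCardinality and Enum8 are stored as integers.
client.command(
    """
    CREATE TABLE IF NOT EXISTS events
    (
        event_time DateTime,
        user_id    UInt64,
        country    LowCardinality(String),
        status     Enum8('ok' = 1, 'error' = 2)
    )
    ENGINE = MergeTree
    ORDER BY (event_time, user_id)
    """
)

# Bulk insert, where MergeTree ingestion performs well.
client.insert(
    'events',
    [[datetime(2024, 1, 1, 0, 0, 0), 1, 'BR', 'ok'],
     [datetime(2024, 1, 1, 0, 0, 1), 2, 'US', 'error']],
    column_names=['event_time', 'user_id', 'country', 'status'],
)

# Reading data directly from S3 via the s3 table function.
print(client.query(
    "SELECT count() FROM s3('https://my-bucket.s3.amazonaws.com/data/*.parquet', 'Parquet')"
).result_rows)
```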
The DB2 database is a solid option for our school. We have been on this journey for 3-4 years now, so we are still adapting to what it can do. We will renew our use of DB2 because we don't see a major need to change. Also, changing a main database in a school environment is a major project, so we'll avoid that if possible.
You have to be well versed in the technology, not only through a GUI but also from the command line interface, to use this software to its fullest.
I have never had DB2 go down unexpectedly. It just works solidly every day. When I look at the logs, sometimes DB2 has figured out there was a need to build an index; instead of waiting for me to do it, the database automatically created the index for me. At my current company, we have had zero issues for the past 8 years. We have upgraded the server 3 times and upgraded the OS each time, and the only thing we saw was that DB2 got better and faster. It is simply amazing.
The performance is exceptional if you take care to maintain the database. It is a very powerful tool and at the same time very easy to use. In our installation, we run a DB machine on the mainframe, with access to the database through ODBC connectors directly from branch servers, and the end-user experience is fabulous.
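As a rough illustration of that topology, here is a minimal sketch of a branch server querying Db2 over ODBC with the pyodbc Python module; the DSN, credentials, and table are hypothetical and not taken from the review.

```python
# Hedged sketch: a branch server reaching a mainframe Db2 database through an
# ODBC data source. DSN name, credentials, and table are hypothetical.
import pyodbc

# Assumes an ODBC DSN named 'DB2PROD' configured with the IBM Db2 ODBC/CLI driver.
conn = pyodbc.connect('DSN=DB2PROD;UID=appuser;PWD=secret')
cursor = conn.cursor()

cursor.execute(
    "SELECT BRANCH_ID, BALANCE FROM BRANCH.ACCOUNTS FETCH FIRST 10 ROWS ONLY"
)
for branch_id, balance in cursor.fetchall():
    print(branch_id, balance)

conn.close()
```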
Easily the best product support team. :) Whenever we have questions, they have answered them in a timely manner, and we like how they go above and beyond to help.
ClickHouse outperforms, especially on cost: its compression and indexing engines are so smart that even with very little computing power you can already perform huge analyses of the data.
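One hedged way to see the compression side of that claim is to ask ClickHouse itself how much each column shrinks on disk. The sketch below queries system.columns via the clickhouse_connect Python client for a hypothetical events table.

```python
# Hedged sketch: inspect per-column compression for one table by comparing
# compressed and uncompressed byte counts in system.columns. The database and
# table names are hypothetical.
import clickhouse_connect

client = clickhouse_connect.get_client(host='localhost', username='default', password='')

result = client.query(
    """
    SELECT
        name,
        formatReadableSize(data_compressed_bytes)   AS compressed,
        formatReadableSize(data_uncompressed_bytes) AS uncompressed,
        round(data_uncompressed_bytes / data_compressed_bytes, 2) AS ratio
    FROM system.columns
    WHERE database = 'default' AND table = 'events'
    ORDER BY ratio DESC
    """
)
for row in result.result_rows:
    print(row)
```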
DB2 was more scalable and more easily configurable than the other products we evaluated and short-listed, in terms of both functionality and pricing. IBM also gave a good on-premise demo and provided us with a sandbox to test out and play with the product, and DB2 at that time came out better than other similar products.
By using DB2 only to support my IzPCA activities, my knowledge here is somewhat limited.
Anyway, from what I was able to understand, DB2 is extremely scalable.
Maybe the information below can serve as an example of scalability.
The customer has a huge mainframe environment: 13 z15 CECs, around 80 LPARs, and maybe more than 50 Sysplexes (I am not totally sure about this last figure...)
Today we have 7 IzPCA databases, each one in a distinct Sysplex.
Plans are underway to end up with a small LPAR containing only one DB2 subsystem and only one database, transmit the data from many other LPARs to it, and then process all the data in that single database.
The IzPCA collect process (read the data received, manipulate it, and insert rows into the tables) is today a huge process, demanding many elapsed hours and lots of CPU.
Almost 100% of the tables are PBR (partition-by-range) type and the insert jobs run in parallel, but in 4 of the 7 databases it is really a huge and long process.
Combining the INSERT loads from the 7 databases into only one would be impossible.
But IzPCA recently introduced a new feature called "Continuous Collector".
With that feature, small amounts of data are transmitted to the central LPAR every 5 minutes (or even less) and processed immediately, in a short period of time and with small use of CPU, instead of one or two transmissions per day of very large amounts of data, with the corresponding collect jobs occurring only once or twice a day, with long elapsed times and huge consumption of CPU.
I suspect the total CPU seconds consumed will be more or less the same in both cases, but with the new method it will occur in small bursts many times a day!
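To make the difference between the two loading patterns concrete, here is an illustrative sketch of frequent small insert bursts into a central Db2 database using the ibm_db Python driver. It is not the IzPCA Continuous Collector itself; the connection string, table, and batch source are hypothetical.

```python
# Illustrative sketch only: mimics frequent small insert bursts into a central
# Db2 database, as opposed to one or two huge daily loads. NOT the IzPCA
# Continuous Collector; connection string, table, and batch source are made up.
import time
import ibm_db

conn = ibm_db.connect(
    "DATABASE=IZPCA;HOSTNAME=central-lpar.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=collector;PWD=secret",
    "", ""
)
ibm_db.autocommit(conn, ibm_db.SQL_AUTOCOMMIT_OFF)

insert_sql = (
    "INSERT INTO IZPCA.METRICS (LPAR_NAME, METRIC_TS, CPU_SECONDS) VALUES (?, ?, ?)"
)
stmt = ibm_db.prepare(conn, insert_sql)

def fetch_next_batch():
    # Placeholder for whatever delivers the next few minutes of collected records.
    return [("LPAR01", "2024-01-01-00.05.00.000000", 12.3)]

while True:
    # Insert one small batch, commit, and wait for the next burst.
    for lpar, ts, cpu in fetch_next_batch():
        ibm_db.execute(stmt, (lpar, ts, cpu))
    ibm_db.commit(conn)
    time.sleep(300)  # next small burst in roughly 5 minutes
```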