ClickHouse is an open-source, column-oriented OLAP database system enabling real-time analytical reports using SQL queries. With linear scalability, it handles trillions of rows and petabytes of data. ClickHouse Cloud offers a scalable serverless solution for real-time analytics.
MongoDB
Score 8.7 out of 10
MongoDB is an open source document-oriented database system. It is part of the NoSQL family of database systems. Instead of storing data in tables as is done in a "classical" relational database, MongoDB stores structured data as JSON-like documents with dynamic schemas (MongoDB calls the format BSON), making the integration of data in certain types of applications easier and faster.
Pricing

Editions & Modules
ClickHouse: No answers on this topic
MongoDB:
Shared: $0 per month
Serverless: $0.10 per million reads
Dedicated: $57 per month
Pricing Offerings
Free Trial: ClickHouse Yes; MongoDB Yes
Free/Freemium Version: ClickHouse Yes; MongoDB Yes
Premium Consulting/Integration Services: ClickHouse Yes; MongoDB No
Entry-level Setup Fee: ClickHouse Optional; MongoDB No setup fee
Additional Details
ClickHouse (pay for what is used):
It automatically scales compute resources up and down based on the user's workload
It scales storage and compute separately
It automatically scales unused resources down to zero so that users don't pay for idle services
MongoDB: Fully managed, global cloud database on AWS, Azure, and GCP
Community Pulse
Considered Both Products
ClickHouse
Verified User
Engineer
Chose ClickHouse
ClickHouse was not compared to them as a competitor but as the ideal partner to complete an information analysis system, providing users with the most complete and efficient tools. Therefore, in this case it was considered that it would be the ideal candidate due to its …
The most important thing when using ClickHouse is to be clear that the scenarios in which you want to use it really are the right ones. Many users assume that because a database is very fast for one specific use case, it can be extrapolated to other, usually quite different, contexts without doing any prior analysis.
ClickHouse is an analytical database and should be used as such: where the information is stored appropriately, the data volumes are really large, and the workload is not the typical traditional queries across several columns with multiple aggregations; ClickHouse is not the solution for those.
On the other hand, if your case is not one of the above, it is quite possible that ClickHouse can help you. Where ClickHouse shines is when you are looking for aggregation over a particular column in large volumes of data.
If asked by a colleague, I would highly recommend MongoDB. MongoDB provides incredible flexibility and is quick and easy to set up. It also provides extensive documentation, which is very useful for someone new to the tool; though I've used it for years, I still reference the docs often. From my experience and the use cases I've worked on, I'd suggest using it anywhere that needs fast, efficient storage for non-relational data. If a relational database is needed, then another tool would be more apt.
The MergeTree table engine provides impressive performance for bulk data inserts
Beyond inserts, the way the MergeTree engine uses primary keys to sort the data and skip granules is also the secret behind its ridiculously fast queries
Data compression is also great
They provide special table engines that let you read data directly from other sources such as S3
Since it is written in C++, you get very granular data types, plus special ones like Enum, LowCardinality, etc.; they save a lot of storage since they are stored as integer values
ClickHouse functions beyond those that follow the ANSI standard are also awesome and useful (see the sketch after this list)
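Taken together, those points map onto a small amount of DDL and client code. The following is a minimal sketch, assuming a local ClickHouse server and the clickhouse_connect Python client; the table, column names, and S3 URL are illustrative placeholders, not details taken from the reviews.

```python
# Minimal sketch of the MergeTree features mentioned above (assumes a local
# server; table, column names, and the S3 URL are illustrative placeholders).
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123)

# ORDER BY defines the sparse primary index MergeTree uses to sort data and
# skip granules; LowCardinality and Enum8 columns are stored as integers.
client.command("""
    CREATE TABLE IF NOT EXISTS events
    (
        event_time DateTime,
        user_id    UInt64,
        country    LowCardinality(String),
        status     Enum8('ok' = 1, 'error' = 2)
    )
    ENGINE = MergeTree
    ORDER BY (event_time, user_id)
""")

# Bulk inserts are where MergeTree performs best.
client.command(
    "INSERT INTO events VALUES "
    "(now(), 1, 'US', 'ok'), (now(), 2, 'DE', 'error')"
)

# Aggregation over a single column across many rows: ClickHouse's sweet spot.
print(client.query("SELECT country, count() FROM events GROUP BY country").result_rows)

# Reading directly from object storage via the s3 table function (placeholder URL):
# client.query("SELECT count() FROM s3('https://bucket.s3.amazonaws.com/data/*.parquet', 'Parquet')")
```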
Because queries are expressed as JSON-like documents, response times are good and you can build the query logic directly from the same service (see the sketch after this list)
You can install a local database environment, something that real-time non-relational services such as Firebase do not allow; a local environment is paramount since you can work without relying on the internet
Forming collections in Mongo is relatively simple; you do not need to know the query language to work with it, since a simple graphical environment lets people who are not experts in console management administer databases
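The "JSON-like queries" point is easiest to see in code. Below is a minimal sketch, assuming a local MongoDB instance and the pymongo driver; the database, collection, and field names are illustrative placeholders.

```python
# Minimal sketch: queries are plain JSON-like documents (Python dicts here),
# so query logic can be built directly in the application service.
# Assumes a local MongoDB instance; names below are illustrative placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

orders.insert_one({"customer": "ada", "total": 42.5, "items": ["book", "pen"]})

# Filter and projection are both just documents.
query = {"customer": "ada", "total": {"$gte": 10}}
for doc in orders.find(query, {"_id": 0}):
    print(doc)
```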
The aggregation pipeline can be a bit overwhelming for a newcomer.
There's still no real concept of joins with references/foreign keys, although the aggregation framework has a feature that comes close (see the sketch after this list).
Database management/dev ops can still be time-consuming if rolling your own deployments. (Thankfully there are plenty of providers, like Compose or even MongoDB's own Atlas, that help take care of the nitty-gritty.)
I am looking forward to increasing our SaaS subscriptions so that I get to experience global replica sets, working in reads from secondaries, and whatnot. Can't wait to be able to exploit some of the power that the "Big Boys" use MongoDB for.
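The join-like feature alluded to above is the aggregation pipeline's $lookup stage. A minimal sketch, again assuming a local instance and pymongo, with hypothetical orders/customers collections:

```python
# Minimal sketch of $lookup: a left outer join against another collection in
# the same database. No foreign-key enforcement is involved, only the lookup.
# Assumes a local MongoDB instance; collection and field names are placeholders.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]

db["customers"].insert_one({"_id": "ada", "city": "London"})
db["orders"].insert_one({"customer": "ada", "total": 42.5})

pipeline = [
    {"$lookup": {
        "from": "customers",
        "localField": "customer",
        "foreignField": "_id",
        "as": "customer_doc",
    }},
    {"$unwind": "$customer_doc"},
]
for doc in db["orders"].aggregate(pipeline):
    print(doc["total"], doc["customer_doc"]["city"])
```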
NoSQL database systems such as MongoDB lack graphical interfaces by default, so to improve usability it is necessary to install third-party applications to visualize the schemas and stored documents more easily. These tools also let you see the commands that would be executed for each operation.
Finding support from local companies can be difficult. There were times when the local company could not find a solution and we only resolved the issue by getting support globally. If a good local company is found, it will overcome all your problems with the backing of global support.
While the setup and configuration of MongoDB is pretty straightforward, having a vendor that performs automatic backups and scales the cluster automatically is very convenient. If you do not have a system administrator or DBA familiar with MongoDB on hand, it's a very good idea to use a 3rd-party vendor that specializes in MongoDB hosting. The value is well worth it over hosting it yourself, since the cost is often reasonable among providers.
ClickHouse outperforms, especially in costs, since its compression/indexing engines are so smart, and even with very low computing power, you can already perform huge analyses of the data.
We [measured] the speed of read/write operations under high load and finally selected the winner: MongoDB. We do [not] have too much data, but if there were 10 [times] more we would need Cassandra. Cassandra's storage engine provides constant-time writes no matter how big your data set grows. For analytics, MongoDB provides a custom map/reduce implementation; Cassandra provides native Hadoop support.
Open source with reasonable support costs has a direct, positive impact on the ROI (we moved away from large, monolithic, locked-in licensing models)
You do have to balance the necessary level of HA & DR with the number of servers required to scale up and scale out. Servers cost money, so HA & DR don't come for free (even though it's built into the architecture of MongoDB).