ClickHouse is an open-source, column-oriented OLAP database system enabling real-time analytical reports using SQL queries. With linear scalability, it handles trillions of rows and petabytes of data. ClickHouse Cloud offers a scalable serverless solution for real-time analytics.
ClickHouse: Score N/A; Pricing N/A

SingleStore: Score 8.3 out of 10; Pricing $0.69 per hour
SingleStore aims to enable organizations to scale from one to one million customers, handling SQL, JSON, full-text, and vector workloads in one unified platform.
Pricing

Editions & Modules
ClickHouse: No answers on this topic
SingleStore: OnDemand, $0.69 per hour

Pricing Offerings (ClickHouse / SingleStore)
Free Trial: Yes / Yes
Free/Freemium Version: Yes / Yes
Premium Consulting/Integration Services: Yes / Yes
Entry-level Setup Fee: Optional / Optional
Additional Details
Pay for what is used:
- Compute resources automatically scale up and down based on the user's workload
- Storage and compute scale separately
- Unused resources automatically scale down to zero, so users don't pay for idle services
We evaluated SingleStore against MySQL, PostgreSQL, and Druid. We also took a quick look at ClickHouse and Pinot. We found SingleStore to be polished, and installation and operation were a breeze. That, coupled with the great performance, led us to select SingleStore very quickly …
Reduces database sprawl, ETL costs, infrastructure expenses, etc. Supports horizontal scaling, unlike PostgreSQL & Aurora, and real-time analytics and fast transactions (HTAP), unlike Snowflake & ClickHouse. Handles high-volume workloads with thousands of concurrent queries. No …
The most important thing when using ClickHouse is to be clear that the scenarios in which you want to use it really are the right ones. Many users assume that because a database is very fast for one specific use case, that speed can be extrapolated to other, usually quite different, contexts that have not been analysed beforehand.
ClickHouse is an analytical database and should be used as such: where the information is stored appropriately, the data volumes are genuinely large, and the queries to be run are not the typical traditional queries over several columns with multiple aggregations; ClickHouse is not the solution for those.
On the other hand, if your case is not one of the above, it is quite possible that ClickHouse can help you. ClickHouse shines when you need to aggregate a particular column over large volumes of data, as in the sketch below.
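The following is a minimal sketch of that pattern, not taken from the review: the table and column names (events, event_time, user_id) are hypothetical, but the functions used (toDate, uniq, count) are standard ClickHouse SQL.

```sql
-- Aggregate a small number of columns across a very large events table:
-- daily unique visitors and total page views for the last 30 days.
SELECT
    toDate(event_time) AS day,
    uniq(user_id)      AS unique_visitors,
    count()            AS page_views
FROM events
WHERE event_time >= now() - INTERVAL 30 DAY
GROUP BY day
ORDER BY day;
```

This is the query shape the reviewer describes: a scan of a few columns with an aggregation, which columnar storage handles very efficiently.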
Good for:
- Applications needing instant insights on large, streaming datasets
- Applications processing continuous data streams with low latency
- When a multi-cloud, high-availability database is required
When NOT to use:
- Small-scale applications with limited budgets
- Projects that do not require real-time analytics or distributed scaling
- Teams without experience in distributed databases and HTAP architectures
Their MergeTree table engine provides impressive performance for bulk data inserts
Beyond inserts, the way the MergeTree engine uses primary keys to sort the data and perform data skipping based on granules is also the secret behind its ridiculously fast queries
Data compression is also great
They provide special table engines that allow you to read data directly from other sources, such as S3
Since it is written in C++, you get very granular data types and special ones like Enum and LowCardinality; these save a lot of storage because they are stored as integer values
ClickHouse's own functions, beyond the ones that follow ANSI standards, are also awesome and useful (several of these features are sketched in the example after this list)
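As a minimal sketch tying those points together: the MergeTree engine, the ORDER BY sort key that drives granule-based data skipping, the Enum8 and LowCardinality types stored as integers, and the s3 table function are all real ClickHouse features, but the table schema and bucket path below are hypothetical.

```sql
-- Hypothetical table showing MergeTree, a sort key, and compact column types.
CREATE TABLE events
(
    event_time  DateTime,
    user_id     UInt64,
    event_type  Enum8('view' = 1, 'click' = 2, 'purchase' = 3),  -- stored as a 1-byte integer
    country     LowCardinality(String),                          -- dictionary-encoded, stored as integers
    revenue     Decimal(18, 2)
)
ENGINE = MergeTree
ORDER BY (event_type, event_time);  -- sort key used for data skipping over granules

-- Bulk insert directly from S3 via the s3 table function (path is a placeholder).
INSERT INTO events
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/events/*.parquet', 'Parquet');
```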
It does not release patches or backports; it just releases a new version and stops supporting the old one, so it's difficult to keep up with that pace.
Support engineers lack expertise, but they seem to be improving organically.
Lacks enterprise CDC capability: Change data capture (CDC) is a process that tracks and records changes made to data in a database and then delivers those changes to other systems in real time.
For enterprise-level backup & restore capability, we had to implement our own model via Velero snapshot backups.
[Until it is] supported on AWS ECS containers, I will reserve a higher rating for SingleStore. Right now it works well on EC2 and serves our current purpose, [but] I would look forward to seeing SingleStore respond to our feature requests in a shorter time period, with high quality and security.
SingleStore excels in real-time analytics and low-latency transactions, making it ideal for operational analytics and mixed workloads. Snowflake shines in batch analytics and data warehousing with strong scalability for large datasets. SingleStore offers faster data ingestion and query execution for real-time use cases, while Snowflake is better for complex analytical queries on historical data.
Support deep-dives into our most complex queries and the bizarre issues that sometimes only we hit compared to other clients, given our special workload (thousands of Kafka pipelines plus a high concurrency of queries). Responses match the priority of the request; a P1 gets an immediate return call. Missing features are handled as well: they become a client request and are added to the roadmap after internal consideration of all clients' needs and priorities. Bugs are patched quite fast, depending on the impact and whether temporary workarounds are feasible. There has been no issue for which we haven't received a proper answer, resolution, or reasoning.
We allowed 2-3 months for a thorough evaluation. We saw pretty quickly that we were likely to pick SingleStore, so we ported some of our stored procedures to SingleStore in order to take a deeper look. Two SingleStore people worked closely with us to ensure that we did not have any blocking problems. It all went remarkably smoothly.
ClickHouse outperforms, especially in costs, since its compression/indexing engines are so smart, and even with very low computing power, you can already perform huge analyses of the data.
Greenplum is good at handling very large amounts of data, but concurrency in Greenplum was a major problem. Features available in SingleStore, like Pipelines and the in-memory capabilities, are not available in Greenplum. GemFire did not scale as well as SingleStore. Support for both Greenplum and GemFire was not good, and their product teams did not help us much, unlike the SingleStore team, who helped us get our first cluster started very fast.
As overall performance and functionality have expanded, we are able to deliver our data much faster than before, which has increased the demand for data.
Metadata is available in the platform by default, such as metadata on the pipelines. The information schema also contains lots of metadata, which makes it easy to load our assets into the data catalog; the queries below sketch what that looks like.
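A hedged sketch of pulling that metadata, assuming a SingleStore-style information_schema: the PIPELINES and TABLES views exist in SingleStore, though the exact column set can vary by version, and the database name used in the filter is a placeholder.

```sql
-- Pipeline metadata exposed by default in information_schema.
SELECT DATABASE_NAME, PIPELINE_NAME, STATE
FROM information_schema.PIPELINES;

-- General table metadata, handy when syncing assets to a data catalog.
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'my_database';  -- placeholder database name
```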