Google's BigQuery is part of the Google Cloud Platform, a database-as-a-service (DBaaS) supporting the querying and rapid analysis of enterprise data.
$6.25 per TiB (after the first 1 TiB per month, which is free)
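As a quick worked example of the on-demand pricing quoted above, the sketch below estimates a month's query cost; the function and the scan figure are illustrative only, and current Google Cloud pricing should always be checked.

```python
# Back-of-the-envelope BigQuery on-demand cost using the rates quoted above:
# $6.25 per TiB scanned, with the first 1 TiB per month free.

def monthly_on_demand_cost(tib_scanned: float,
                           price_per_tib: float = 6.25,
                           free_tib: float = 1.0) -> float:
    """Estimated monthly on-demand query cost in USD."""
    billable = max(0.0, tib_scanned - free_tib)
    return billable * price_per_tib

print(monthly_on_demand_cost(10))   # 9 billable TiB -> 56.25
```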
SingleStore
Score 8.3 out of 10
N/A
SingleStore aims to enable organizations to scale from one to one million customers, handling SQL, JSON, full text and vector workloads in one unified platform.
SingleStore has much lower query latency than BigQuery. Thus, we segregate latency-sensitive tasks to SingleStore and use BigQuery as our main database to store all historical data.
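A minimal sketch of the tiering pattern this reviewer describes, assuming two hypothetical helpers (run_singlestore, run_bigquery) and an illustrative routing rule; none of this is either product's API.

```python
# Hypothetical sketch: latency-sensitive queries go to SingleStore,
# heavy historical scans go to BigQuery.

def route_query(sql: str, latency_sensitive: bool, scans_history: bool):
    if latency_sensitive and not scans_history:
        return run_singlestore(sql)   # low-latency operational store
    return run_bigquery(sql)          # historical warehouse

def run_singlestore(sql: str):
    ...  # e.g. execute via a MySQL-protocol connection to SingleStore

def run_bigquery(sql: str):
    ...  # e.g. execute via the google-cloud-bigquery client
```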
Reduces database sprawl, ETL costs, infrastructure expenses, etc. Supports horizontal scaling, unlike PostgreSQL & Aurora, and real-time analytics and fast transactions (HTAP), unlike Snowflake & ClickHouse. Handles high-volume workloads with thousands of concurrent queries. No …
We previously used BigQuery for our application, and SingleStore gave us much better performance than BigQuery. But the comparison is not apples to apples, as BigQuery is more of a data warehousing solution.
SingleStore is eons faster than other database providers, and it absolutely crushes calculations & aggregations. While other providers may have a few quality of life enhancements over SingleStore, the speed benefits of SS far outweigh the cons. At the end of the day, speed …
As I said before, we felt that running every query on BigQuery was really slow, especially from a user's point of view. After seeing the drastically improved latency of SingleStore, we decided to use it to solve this issue. We currently use it to run low-volume queries …
SingleStore provides a solution for working with larger amounts of data (vs. MySQL) with better performance (vs. BigQuery) without having to preprocess the data (vs. MongoDB), so it basically does better for specific use cases.
Google BigQuery is great for being the central datastore and entry point of data if you're on GCP. It seamlessly integrates with other Google products, meaning you can ingest data from them with ease and little technical knowledge, and all of it is near real-time. Being serverless, BigQuery will scale with you, which means you don't have to worry about contention or spikes in demand or storage. This can, however, mean your costs run away quickly or mount up at short notice.
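For illustration, the near-real-time ingestion described above can be done with the official google-cloud-bigquery client's streaming insert; the project, dataset, table, and row fields below are placeholders, and authentication is assumed to be configured in the environment.

```python
# Minimal sketch of streaming rows into BigQuery in near real time.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")      # hypothetical project
table_id = "my-project.analytics.events"            # hypothetical table

rows = [{"user_id": "u123", "event": "page_view"}]
errors = client.insert_rows_json(table_id, rows)    # streaming insert
if errors:
    print("Insert failed:", errors)
```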
Good for: applications needing instant insights on large, streaming datasets; applications processing continuous data streams with low latency; situations where a multi-cloud, high-availability database is required. When NOT to use: small-scale applications with limited budgets; projects that do not require real-time analytics or distributed scaling; teams without experience in distributed databases and HTAP architectures.
First and foremost - Google BigQuery is great at quickly analyzing large amounts of data, which helps us understand things like customer behavior or product performance without waiting for a long time.
It is very easy to use. Anyone in our team can easily ask questions about our data using simple language, like asking ChatGPT a question. This means everyone can find important information from our data without needing to be a data expert.
It plays nicely with other tools we use, so we can seamlessly connect it with things like Google Cloud Storage for storing data or Data Studio for creating visual reports. This makes our work smoother and helps us collaborate better across different tasks.
It is challenging to predict costs due to BigQuery's pay-per-query pricing model. User-friendly cost estimation tools, along with improved budget alerting features, could help users better manage and predict expenses.
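One concrete way to estimate costs up front is BigQuery's dry-run mode, which reports the bytes a query would scan without running or billing it. A sketch with a placeholder project and query, priced at the on-demand rate quoted earlier:

```python
# Estimate a query's cost before running it, using a BigQuery dry run.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query("SELECT * FROM `my-project.analytics.events`",
                   job_config=job_config)

tib = job.total_bytes_processed / 2**40
print(f"Would scan {tib:.4f} TiB, ~${tib * 6.25:.2f} at on-demand rates")
```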
The BigQuery interface is less intuitive than it could be. A more user-friendly interface, enhanced documentation, and built-in tutorial systems could make BigQuery more accessible to a broader audience.
Patches are not backported to older releases; SingleStore simply releases a new version and ends support for the old one, and it's difficult to keep up with that pace.
Support engineers lack expertise, but they seem to be improving organically.
Lacks enterprise CDC capability: Change data capture (CDC) is a process that tracks and records changes made to data in a database and then delivers those changes to other systems in real time.
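For illustration, a CDC change event typically carries the before/after images of a row plus an operation code. The envelope below is a hypothetical shape loosely modeled on common CDC tools, not SingleStore's format:

```python
# Illustrative CDC change event; field names are assumptions for
# illustration, not a product specification.
change_event = {
    "op": "u",                        # c=create, u=update, d=delete
    "before": {"id": 42, "status": "pending"},
    "after":  {"id": 42, "status": "shipped"},
    "source": {"table": "orders", "ts_ms": 1700000000000},
}

# A downstream consumer applies the change to keep another system in sync.
def apply_change(event: dict) -> None:
    if event["op"] == "d":
        print("delete row", event["before"]["id"])
    else:
        print("upsert row", event["after"])

apply_change(change_event)
```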
For enterprise-level backup & restore capability, we had to implement our own model via Velero snapshot backups.
We have to use this product because our 3rd-party supplier chose it for their data backend, so it is unlikely we will move away from it unless that supplier decides to change data vendors.
The web UI is easy and convenient. Many RDBMS clients, such as Aqua Data Studio, DBeaver, and DataGrip, connect to it. A range of well-documented APIs is available. The feature set keeps expanding, adding capabilities similar to traditional RDBMSs such as Oracle and DB2.
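One reason those generic SQL clients can connect is that SingleStore speaks the MySQL wire protocol, so any MySQL-compatible driver works programmatically too. A minimal sketch with placeholder host and credentials:

```python
# Connect to SingleStore over the MySQL wire protocol.
import pymysql  # or the official `singlestoredb` package

conn = pymysql.connect(host="svc-singlestore.example.com", port=3306,
                       user="app_user", password="secret", database="app_db")
with conn.cursor() as cur:
    cur.execute("SELECT NOW()")
    print(cur.fetchone())
conn.close()
```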
[Until it is] supported on AWS ECS containers, I will reserve a higher rating for SingleStore. Right now it works well on EC2 and serves our current purpose, [but I] would look forward to seeing SingleStore respond to our feature requests in a shorter time period, with high quality and security.
SingleStore excels in real-time analytics and low-latency transactions, making it ideal for operational analytics and mixed workloads. Snowflake shines in batch analytics and data warehousing with strong scalability for large datasets. SingleStore offers faster data ingestion and query execution for real-time use cases, while Snowflake is better for complex analytical queries on historical data.
BigQuery can be difficult to support because it is so solid as a product. Many of the issues you will see are related to your own data sets; however, you may see issues importing data and managing jobs. If this occurs, it can be a challenge to reach the right person who can help you.
Support deep-dives into our most complex queries and the bizarre issues that sometimes only we hit compared to other clients, given our special workload (thousands of Kafka pipelines plus high concurrency of queries). Response times match the priority of the request; a P1 gets an immediate return call. Missing features are handled as well: they become a client request and are added to the roadmap after internal consideration of all clients' needs and priorities. Bugs are patched quite fast, depending on the impact and feasible temporary workarounds. There has been no issue for which we did not get a proper answer, resolution, or reasoning.
We allowed 2-3 months for a thorough evaluation. We saw pretty quickly that we were likely to pick SingleStore, so we ported some of our stored procedures to SingleStore in order to take a deeper look. Two SingleStore people worked closely with us to ensure that we did not have any blocking problems. It all went remarkably smoothly.
PowerBI can connect to GA4, for example, but the data processing is more complicated and it takes longer to create dashboards. Azure is great once the data import has been configured, but that configuration is not as easy a task for small businesses as it is with BigQuery.
Greenplum is good at handling very large amounts of data, but concurrency in Greenplum was a major problem. Features available in SingleStore, like Pipelines and the in-memory capabilities, are not available in Greenplum. GemFire did not scale as well as SingleStore. Support for both Greenplum and GemFire was not good; their product teams did not help us much, unlike the ones at SingleStore, who helped us get started on our first cluster very fast.
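The Pipelines feature mentioned here defines continuous ingestion in SQL. Below is a hedged sketch of a Kafka pipeline created through the singlestoredb Python client; the broker, topic, table, and column names are all hypothetical.

```python
# Sketch: a SingleStore pipeline that continuously loads JSON events
# from a Kafka topic into a table. All names are placeholders.
import singlestoredb as s2

conn = s2.connect(host="svc-singlestore.example.com", port=3306,
                  user="app_user", password="secret", database="app_db")
cur = conn.cursor()
cur.execute("""
    CREATE PIPELINE events_pipeline AS
    LOAD DATA KAFKA 'kafka-broker.example.com:9092/events'
    INTO TABLE events
    FORMAT JSON (user_id <- user_id, event <- event)
""")
cur.execute("START PIPELINE events_pipeline")  # begins continuous ingestion
conn.close()
```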
Google Support has kindly provided individual support and consultants to assist with the integration work. When the consultants are not present to support the work, the Google Support Helpline is always available to answer queries without our having to wait more than 3 days.
Previously, running complex queries on our on-premises data warehouse could take hours. Google BigQuery processes the same queries in minutes. We estimate it saves our team at least 25% of their time.
We can target our marketing campaigns very easily and understand our customer behaviour. It lets us personalize marketing campaigns and product recommendations, and we see at least a 20% improvement in overall campaign performance.
Now, we only pay for the resources we use. We saved $1 million annually on data infrastructure and storage costs compared to our previous solution.
As overall performance and functionality expanded, we were able to deliver our data much faster than before, which has increased the demand for data.
Metadata is available in the platform by default, such as metadata on the pipelines. The information schema also has lots of metadata, making it easy to load our assets into the data catalog.
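As a sketch of that catalog-loading flow, table metadata can be pulled from BigQuery's INFORMATION_SCHEMA views; the project and region below are placeholders.

```python
# List table metadata from BigQuery's INFORMATION_SCHEMA, as a feed
# for a data catalog.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
sql = """
    SELECT table_catalog, table_schema, table_name, table_type
    FROM `region-us`.INFORMATION_SCHEMA.TABLES
"""
for row in client.query(sql).result():
    print(row.table_schema, row.table_name, row.table_type)
```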