In addition, it saves them time, and as a developer I receive responses from them faster than when we worked with MySQL alone.
I can also see that the DBAs' answers to developers contain great solutions most of the time.
Entry-level set up fee?
- Setup fee optional
- Free Trial
- Free/Freemium Version
- Premium Consulting / Integration Services
- Tech Details
- Deployment types: On-premise, Software as a Service (SaaS), Cloud, or Web-Based
- I would love an SDK
- A CLI would be cool
- Beyond that, the nodes work great! I was confused at first but then understood how each plan is different
However, if your database is relatively small (under 1 GB), the value that SingleStore provides may not be there for you. Even 1 GB can sometimes take 15 minutes to upload, though that is not awful.
- Mixing OLAP and OLTP in the same cluster
- Support and customer success team really is great
- OLAP performance
- Predictable performance scaling with cluster resources
- **Much** better monitoring
- Smaller cluster offerings
- More granular scaling
- Better search features
- Multi language support for search
- Visibility into and analysis of slow queries under real production workloads
- Data ingestion from object stores
- Storing data in row and columnar store
- Data ingestion from S3 into SingleStoreDB takes about 6 seconds for 550K records with 30 columns
- Data ingestion from JDBC sources
- More robust and easier-to-use integration with tools like Spark
- More options for setting up and running a PoC with SingleStoreDB
The most interesting part is their pipeline feature, which allows really fast and consistent ingestion of data. Setting up these pipelines is really easy, with configuration for source system credentials.
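The pipeline setup described above can be sketched in SQL. This is a minimal sketch: the bucket path, table name, and credential values are placeholders, and the exact options depend on your SingleStore version and source format.

```sql
-- Sketch of an S3 pipeline (placeholder bucket, table, and credentials)
CREATE PIPELINE orders_pipeline AS
  LOAD DATA S3 'my-bucket/orders/'
  CONFIG '{"region": "us-east-1"}'
  CREDENTIALS '{"aws_access_key_id": "...", "aws_secret_access_key": "..."}'
  INTO TABLE orders
  FIELDS TERMINATED BY ',';

-- Begin continuous, parallel ingestion
START PIPELINE orders_pipeline;
```

Once started, the pipeline ingests new files from the bucket continuously and in parallel across the cluster's leaf nodes.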
- UI design
- Perfect SQL editor and faster query execution
- Ease of pipeline creation for data loads
- There needs to be a place to create an admin user for the initial information_schema database, because when we log in we do not have admin access to the system by default.
- The state of a pipeline is not exposed once the current load finishes; it takes extra work to check whether all the files in the current load are complete.
- The dashboard could be simplified rather than showing so many details.
2. It is not well suited when the data load is very low, since there is not much difference achieved in the execution of data loads.
- I was amazed at the ability to connect to the cluster with third-party clients, including native vendor-supplied MySQL command line tools.
- The ease of exporting data from an existing MySQL database and loading it into SingleStore is impressive.
- Speed and ease of use. Everything is streamlined for you to hit the ground running with minimal learning curve.
- The mysqldump file required some manual massaging before it could be ingested. This was expected, and I was surprised how little manual modification was needed, but nonetheless, it could still be improved.
- Some GUI database tools (such as MySQL Workbench) have trouble connecting and need additional configuration.
- Faster query results than a traditional relational database.
- High Availability.
- Load data from one or more data sources.
- Loads external data in real-time.
- Distributed, partitioned architecture with master-slave replication.
- Load data from a file that is located on the filesystem.
- Powers up legacy databases and supports massive workloads.
- MySQL clients can access MemSQL with the same query experience.
- SingleStore has a unique ingest capability: it can ingest data in parallel from sources like S3, Azure Blob, GCS, and Kafka.
- Can be integrated with Tableau for data visualization.
- Built-in data visualization could be more powerful.
- The query builder UI could be better.
- No implicit ordering of results by primary key.
- Compatibility with S3 and data formats such as Parquet, JSON, and CSV
- Can be Deployed on a Kubernetes Cluster and can be scaled seamlessly.
- Great UI, and support documentation is available to consult and work from.
- Scale out capability
- Fast data ingestion and queries
- Can be used to lower the latencies of various services
- Did not find support for user-defined functions.
- It opens a new query result tab for every query, which I personally find irritating.
- A lot of RAM is required when running Developer instances locally.
- super fast
- executes complex queries
- supports several types of databases
- can open many SQL editors at the same time
- wizard for object creation (table, view, procedure...)
- can link tables from other databases
- data analysis
- migration preparation
- data processing
- Handles and processes JSON data types fast (this is the area I worked on in particular)
- It is fast and reliable
- Pay as you use. This is quite a good feature for customers who have load only during peak hours and no load during off-peak hours.
- It is easy to access and can be accessed from anywhere without VPN issues as it is a cloud-based solution.
- It is very user friendly and all options are easily navigable.
- Error messages are easy to understand if we go wrong anywhere during our operations.
- Fast retrieval and reading of data in JSON format.
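The JSON handling praised above can be sketched in SQL. The table and field names here are illustrative, and the extraction functions follow SingleStore's JSON built-ins; check the docs for your version.

```sql
-- Sketch: a table with a native JSON column (names are illustrative)
CREATE TABLE events (
  id BIGINT PRIMARY KEY,
  payload JSON NOT NULL
);

INSERT INTO events VALUES (1, '{"user": "alice", "action": "login"}');

-- Extract fields from the JSON payload in a query
SELECT JSON_EXTRACT_STRING(payload, 'user') AS user_name
FROM events
WHERE JSON_EXTRACT_STRING(payload, 'action') = 'login';
```

Because the column is typed as JSON, documents are validated on insert, and extraction can be pushed down into the distributed query plan.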
- super fast data ingestion and queries
- commonly used formats such as csv, json and parquet are well supported
- the MySQL-compatible engine also makes it easy for customers to work with
- minimum administration needed
- support team is quick and helpful
- every time I run a new query, it opens a new query result tab
- a lot of RAM is required when running Developer instances locally
- limited information on the running queries
- Responsive and intuitive UI.
- Query engine is very fast.
- Very easy to deploy clusters and use the database.
- The execution plan could be shown while a query is running, so users get information about their running queries and where to optimize them.
- Fast under intense data loads
- Environment setup is only a few clicks away
- Ease and speed of loading data from different pipelines
- Great speed on running complex queries
- The SQL editor opens a new result tab on every query run
- High browser RAM consumption if you have many open queries
- Very easy to configure the SingleStore database.
- Loading data was extremely easy and super fast. I loaded millions of records and it took a few seconds.
- SingleStore seems to be an ideal database for real-time dashboards.
- Not too many issues. Only one aggregate function I ran took more than a minute; otherwise, query performance on millions of records was great.
We offer end-to-end reporting & analytics services and use SingleStore to power our dashboards and reports.
- Relational online analytical processing (ROLAP).
- SQL completeness (e.g. triggers).
- Performance tuning insights.
- Data replication.
- Small data set processing in memory.
- User defined functions.
- Providing optimization options.
- Real-time calculation.
- Massive data storage.
- Real-time updates and integration.
- Scaling up (adding memory) takes a lot of time.
- Real-time information about lags is limited.
If you're looking only for storage, I would pass.
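User-defined functions, mentioned in the list above, can be written in SingleStore's procedural SQL dialect. The function below is hypothetical (the name and the 8% rate are made up for illustration), and the exact syntax should be checked against SingleStore's CREATE FUNCTION documentation.

```sql
-- Hypothetical scalar UDF; syntax per SingleStore's procedural SQL
DELIMITER //
CREATE FUNCTION with_tax(price DECIMAL(10,2)) RETURNS DECIMAL(10,2) AS
BEGIN
  RETURN price * 1.08;  -- illustrative flat 8% tax rate
END //
DELIMITER ;

SELECT with_tax(100.00);
```

Once created, the function can be used anywhere a scalar expression is allowed, and the engine compiles it along with the queries that call it.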
- Ingesting data from Kafka at very high rates.
- Easy to understand performance characteristics.
- Hosted solution is bulletproof and always up.
- Programming languages for ingest logic.
- Ability to run in our own cloud accounts to save transit costs.
Right now, MemSQL is our main DB and serves all of our customers.
We are transferring data with MemSQL pipelines.
In addition, we are consolidating multiple DBs into clusters, which is much more comfortable than MySQL.
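The Kafka-based pipelines mentioned above look similar to the S3 variety. This is a minimal sketch: the broker address, topic, table, and field names are placeholders, and the `FORMAT JSON` mapping assumes JSON-encoded messages.

```sql
-- Sketch of a Kafka pipeline (placeholder broker, topic, and columns)
CREATE PIPELINE clicks_pipeline AS
  LOAD DATA KAFKA 'kafka-broker:9092/clickstream'
  INTO TABLE clicks
  FORMAT JSON (
    user_id <- user_id,
    url     <- url
  );

START PIPELINE clicks_pipeline;
```

Each cluster partition consumes its share of the Kafka topic's partitions in parallel, which is what makes the high ingest rates described above possible.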
- Query time
- Very easy for MySQL customers to work with
- Easy to transfer data from other data sources
- IN operator in the WHERE clause
- Multi-key index on columnstore
- Ability to run cross-cluster queries
2. If you need to improve your queries' performance.
3. If you don't need to create a lot of tables in an online flow (i.e., creating a table for another query).
With SingleStore, we have been able to replace multiple other databases and have started migrating from on-premise to SaaS. As SingleStore provides both scenarios, the migration to the cloud will be smart and simple. We now use SingleStore for our data and analytics environment. Because we can use pipelines towards our data lakes and send data via procedures into our tables, we have reached the next step in our data management by adding additional metadata during ingestion. Using text search and the geospatial functions adds more functionality than we had in our previous databases. The performance and scalability are beyond our expectations.
- The scenario to migrate on-premise and scale up to the cloud as SaaS.
- Using HTAP means we can now use our database for transactions and analytics.
- The main language is similar to MySQL, so our organization already has the knowledge to master the system.
- The SingleStore academy is a good program to support our organization to master the environment.
- Performance on large sets of data for analytics.
- Rowstore and columnstore tables can be used and mixed.
- Time series, geo spatial and text search function to add to our platform.
- Scalability, so we can continue to deliver the same performance.
- Does not yet support foreign keys.
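The rowstore/columnstore mix mentioned above means both storage types can live in the same database. This sketch uses illustrative table names and keys; the `CREATE ROWSTORE TABLE`, `SORT KEY`, and `SHARD KEY` clauses follow recent SingleStore versions, where columnstore is the default table type.

```sql
-- In-memory rowstore for transactional point lookups (OLTP)
CREATE ROWSTORE TABLE sessions (
  session_id BIGINT PRIMARY KEY,
  user_id    BIGINT,
  started_at DATETIME
);

-- Columnstore (the default) for analytical scans (OLAP)
CREATE TABLE page_views (
  user_id   BIGINT,
  url       TEXT,
  viewed_at DATETIME,
  SORT KEY (viewed_at),   -- orders segments for time-range scans
  SHARD KEY (user_id)     -- distributes rows across partitions
);
```

Queries can join the two freely, which is what allows HTAP-style workloads in a single cluster.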
- OLAP workloads
- Fast query responses
- Multiple use cases in one single database
- Does not provide adequate support for data discovery apps such as Power BI.
- It would be great to have a native load-balancing component for dealing with aggregator failure. Otherwise, having a child aggregator is of limited value, since not all customers can afford an external load-balancing solution or feel comfortable switching between aggregators manually.
- They used to have certifications and training in development and administration. That is very important to have, since other competitors do provide access to those sorts of things, and although there are free tutorials/videos, those don't provide an in-depth understanding.
It's highly recommended for BI and analytics use cases, though.
- The fastest query speed compared with traditional relational databases
- Supports JSON and full-text search, which can be used by APIs
- Nearly zero admin tasks once it's running
- You can use its data streaming pipeline with Kafka
- It doesn't provide redistribution when you reach maximum node capacity
- The graphical interface could use a revamp; it's a little bit laggy
- You cannot run it locally, so development always has to be done against cloud instances
- Processes large amounts of data with very low latency.
- The support department is fantastic.
- The developer experience is lacking. Running developer instances locally requires a lot of RAM.
- Scaling - columnar storage makes it easy to scale without cost or significant performance hits
- Speed - where performance is an issue, we can keep the data in RAM
- Simplified management - love MemSQL's web UI for managing clusters
- I'm fine here - I think SingleStore is addressing needs (manageability, flexible maintenance windows, responsive tech support) nicely
- I suppose reliability is an issue. The few causes behind it get fixed quickly, and the platform becomes more dependable as it matures
- Ingesting high volume of data in real-time.
- Excellent query performance (response in milliseconds).
- Feature rich, json support, full-text search.
- Minimum administration is needed.
- Cross-cluster queries
- SingleStore DB (formerly MemSQL) procedural SQL syntax could be simplified
- Materialized views
- Implementing recursion in queries
- Tag cloud generator from a full-text index
It supports several deployment options: cloud (Helios), on-premise bare-metal, or virtualized. It has drivers for many programming languages like Java and Python, and for BI tools like Zoomdata and Tableau.
- High data volume processing.
- Operational data enablement.
- Redistribution when a node is full.
- Split processing data usage from data volume license.