The Snowflake Cloud Data Platform is the eponymous cloud- and SQL-based data warehouse from the company headquartered in San Mateo. It aims to let users unify, integrate, analyze, and share previously siloed data in secure, governed, and compliant ways. With it, users can securely access the Data Cloud to share live data with customers and business partners, and connect with other organizations as data consumers, data providers, and data service providers.
Pricing

                                           Apache Spark               Snowflake
Editions & Modules                         No answers on this topic   No answers on this topic

Pricing Offerings
Free Trial                                 No                         Yes
Free/Freemium Version                      No                         No
Premium Consulting/Integration Services    No                         No
Entry-level Setup Fee                      No setup fee               No setup fee
Additional Details                         —                          —
Community Pulse
Considered Both Products
Apache Spark
Verified User
Executive
Chose Apache Spark
Databricks uses Spark as a foundation, and is also a great platform. It does bring several add-ons, which we did not feel we needed at the time we evaluated it, and which we haven't needed since then. One interesting plus in our opinion was the engineering support, which is great depending …
Well suited: for most local runs of datasets and for non-prod systems, scalability is not a problem at all. Being able to include data from multiple types of data sources is an added advantage. MLlib is a decent built-in library that can be used for most ML tasks. Less appropriate: we had to work on a RecSys where the music dataset we used was around 300+ GB in size. We faced memory issues, and a few times we also got memory errors. The MLlib library also lacks support for advanced analytics and deep-learning frameworks. Understanding the internals of how Apache Spark works is very difficult for beginners.
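For illustration, here is a minimal PySpark sketch of the kind of MLlib recommender workload described above. The file path, column names, and model parameters are hypothetical, not taken from the review; it is a sketch of the technique, not the reviewer's actual pipeline.

```python
# Minimal MLlib recommender sketch, assuming a ratings CSV with
# hypothetical columns user_id, track_id, rating.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("recsys-sketch").getOrCreate()

ratings = (spark.read
           .option("header", True)
           .option("inferSchema", True)
           .csv("s3a://example-bucket/ratings.csv"))  # hypothetical path

train, test = ratings.randomSplit([0.8, 0.2], seed=42)

# Alternating Least Squares collaborative filtering.
als = ALS(userCol="user_id", itemCol="track_id", ratingCol="rating",
          rank=10, maxIter=10, regParam=0.1,
          coldStartStrategy="drop")  # drop predictions for unseen users/items
model = als.fit(train)

rmse = RegressionEvaluator(metricName="rmse", labelCol="rating",
                           predictionCol="prediction").evaluate(model.transform(test))
print(f"Test RMSE: {rmse:.3f}")

spark.stop()
```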
I am responsible for our HR data, and we use Workday for our HR management system. I have a script in place that runs reports in Workday and saves the results as CSVs. I can then use stages in Snowflake to load these CSVs into Snowflake, and then insert, or truncate and replace, these staged tables into a final schema. Once they are in a schema, I can reference them and build out my data models. In addition to ingesting CSVs, Snowflake can write a CSV file to our Amazon S3 bucket. Ingesting these CSVs, transforming the data, and then delivering it to a destination would have involved far more coding than my current process on any other platform.
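A rough sketch of that stage-based ingest pattern, using the snowflake-connector-python package. The connection parameters, stage, table, and schema names below are hypothetical placeholders, not the reviewer's actual objects.

```python
# Sketch of the CSV -> stage -> staging table -> final schema flow described above.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="HR", schema="STAGING",
)
cur = conn.cursor()

# Upload the Workday export into a named internal stage (PUT gzips the file by default).
cur.execute("PUT file:///tmp/workday_report.csv @hr_stage OVERWRITE = TRUE")

# Load the staged file into a staging table.
cur.execute("""
    COPY INTO STAGING.WORKDAY_REPORT
    FROM @hr_stage/workday_report.csv.gz
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

# Truncate-and-replace the final table that the data models reference.
cur.execute("TRUNCATE TABLE FINAL.WORKDAY_REPORT")
cur.execute("INSERT INTO FINAL.WORKDAY_REPORT SELECT * FROM STAGING.WORKDAY_REPORT")

cur.close()
conn.close()
```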
Snowflake scales appropriately, allowing you to manage expense across peak and off-peak times for data pulls, retrieval, and data-centric processing jobs
Snowflake offers a marketplace solution that allows you to sell and subscribe to different data sources
Snowflake manages concurrency better in our trials than other premium competitors
Snowflake has little to no setup and ramp up time
Snowflake offers online training for various employee types
This tool is quite technical and requires proper knowledge, so you mostly have to hire an IT team.
I wish various videos were available for basic queries, such as getting started; I think they would act as a guideline and help beginners a lot.
Snowflake is very cost-effective, and we also like that we can stop, start, and spin up additional processing engines as we need to. We also like that it's easy to connect our SQL IDEs to Snowflake and write our queries in the environment we are used to.
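As a sketch of the stop/start/spin-up pattern mentioned above, the snippet below suspends, resumes, and resizes a virtual warehouse from Python. The warehouse name, role, and connection parameters are hypothetical.

```python
# Sketch of suspending, resuming, and resizing a Snowflake virtual warehouse on demand.
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="admin_user",
                                    password="***", role="SYSADMIN")
cur = conn.cursor()

# Pause the warehouse during off-peak hours so it stops accruing credits.
cur.execute("ALTER WAREHOUSE REPORTING_WH SUSPEND")

# Bring it back and scale it up ahead of a heavy processing window;
# AUTO_SUSPEND (seconds) lets it pause itself again when idle.
cur.execute("ALTER WAREHOUSE REPORTING_WH RESUME")
cur.execute("ALTER WAREHOUSE REPORTING_WH SET WAREHOUSE_SIZE = 'LARGE' AUTO_SUSPEND = 300")

cur.close()
conn.close()
```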
The only thing I dislike about Spark's usability is the learning curve; there are many actions and transformations. However, its wide range of uses for ETL processing, its ease of integration, and its multi-language support make this library a powerhouse for your data science solutions. It has especially aided us with its lightning-fast processing times.
The interface is similar to other SQL query systems I've used and is fairly easy to use. My only complaint is the syntax issues. Another thing is that the error messages are not always the easiest thing to understand, especially when you incorporate temp tables. Some of that is to be expected with any new database.
1. It integrates very well with Scala or Python.
2. Its SQL interoperability is very easy to understand.
3. Apache Spark is much faster than competing technologies.
4. The Apache community's support for Spark is very large.
5. Execution times are faster compared to others.
6. There are a large number of forums available for Apache Spark.
7. Code for Apache Spark is simple and easy to gain access to.
8. Many organizations use Apache Spark, so many solutions are available for existing applications.
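The SQL interoperability mentioned in the list above can be sketched in a few lines of PySpark: register a DataFrame as a temporary view and mix SQL with the DataFrame API in the same job. The dataset path and column names are hypothetical.

```python
# Sketch of Spark SQL interoperability from Python.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-interop-sketch").getOrCreate()

orders = spark.read.parquet("/data/orders")   # hypothetical dataset
orders.createOrReplaceTempView("orders")

# Declarative SQL ...
top_customers = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spent
    FROM orders
    GROUP BY customer_id
    ORDER BY total_spent DESC
    LIMIT 10
""")

# ... and the equivalent DataFrame API, interchangeable in the same job.
top_customers_df = (orders.groupBy("customer_id")
                    .agg(F.sum("amount").alias("total_spent"))
                    .orderBy(F.desc("total_spent"))
                    .limit(10))

top_customers.show()
spark.stop()
```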
We have had terrific experiences with Snowflake support. They have drilled into queries and given us tremendous detail and helpful answers. In one case they even figured out how a particular product was interacting with Snowflake, via its queries, and gave us detail to go back to that product's vendor because the Snowflake support team identified a fault in its operation. We got it solved without lots of back-and-forth or finger-pointing because the Snowflake team gave such detailed information.
All the above systems work quite well on big data transformations, whereas Spark really shines with its broader API support and its ability to read from and write to multiple data sources. Using Spark, one can easily switch between declarative, imperative, and functional styles of programming based on the situation. It also doesn't need special data ingestion or indexing pre-processing the way Presto does. Combined with Jupyter Notebooks (https://github.com/jupyter-incubator/sparkmagic), one can develop Spark code interactively in Scala or Python.
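A minimal sketch of the multi-source pattern described above: one PySpark job reading JSON from object storage and a table over JDBC, then writing partitioned Parquet. The bucket, JDBC URL, credentials, and column names are hypothetical, and the appropriate JDBC driver is assumed to be on the classpath.

```python
# Sketch of reading from and writing to multiple data sources in one Spark job.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multi-source-sketch").getOrCreate()

# Read JSON event logs from S3 and a Postgres dimension table over JDBC.
events = spark.read.json("s3a://example-bucket/events/")
customers = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://db.example.com:5432/crm")
             .option("dbtable", "public.customers")
             .option("user", "reporter")
             .option("password", "***")
             .load())

# Join and write the result out as partitioned Parquet.
enriched = events.join(customers, on="customer_id", how="left")
(enriched.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3a://example-bucket/enriched/"))

spark.stop()
```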
I have had experience using another database management system at my previous workplace. What Snowflake provides is a more user-friendly console, suggestions while writing a query, easy connections to various BI platforms for analysis, and a more robust system for storing large amounts of data. All of these capabilities give Snowflake the edge.
Faster turnaround on feature development: we have seen a noticeable improvement in our agile development since using Spark.
Easy adoption: having multiple departments use the same underlying technology, even if the use cases are very different, allows for more commonality amongst applications, which definitely makes the operations team happy.
Performance: we have been able to make some applications run over 20x faster since switching to Spark. This has saved us time, headaches, and operating costs.
Positive impact: we use Snowflake to track our subscription and payment charges, which we use for internal and investor reporting
Positive impact: 3 times faster query speed compared to Treasure Data means that answers to stakeholders can be delivered quicker by analysts
Positive impact: recommender systems now source their data from Snowflake rather than Spark clusters, improving development speed and removing the need to maintain Spark clusters.