Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingesting, processing, storing, and exposing data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
Starting price: $0.07 per DBU

SAS Data Management
Score 8.0 out of 10
Starting price: N/A
A suite of solutions for data connectivity, enhanced transformations, and robust governance. Solutions provide a unified view of data with access to data across databases, data warehouses, and data lakes. Connects with cloud platforms, on-premises systems, and multicloud data sources.
Pricing

Editions & Modules

Databricks Data Intelligence Platform
  Standard: $0.07 per DBU
  Premium: $0.10 per DBU
  Enterprise: $0.13 per DBU

SAS Data Management
  No answers on this topic
Pricing Offerings (Databricks Data Intelligence Platform / SAS Data Management)
  Free Trial: No / No
  Free/Freemium Version: No / No
  Premium Consulting/Integration Services: No / No
  Entry-level Setup Fee: No setup fee / No setup fee
  Additional Details: — / —
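To make the per-DBU pricing concrete, an illustrative calculation (the workload size here is hypothetical): a job that consumes 100 DBUs on the Standard tier incurs 100 × $0.07 = $7.00 in platform fees, before the underlying cloud infrastructure charges.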
Features
Data Source Connection
Comparison of Data Source Connection features of Databricks Data Intelligence Platform and SAS Data Management.

  Databricks Data Intelligence Platform: not rated (0 Ratings on all features)
  SAS Data Management: 8.3 (10 Ratings), 1% below category average
    Connect to traditional data sources: 8.6 (10 Ratings)
    Connect to Big Data and NoSQL: 8.1 (9 Ratings)
Data Transformations
Comparison of Data Transformations features of Databricks Data Intelligence Platform and SAS Data Management.

  Databricks Data Intelligence Platform: not rated (0 Ratings on all features)
  SAS Data Management: 6.7 (8 Ratings), 20% below category average
    Simple transformations: 6.1 (8 Ratings)
    Complex transformations: 7.4 (8 Ratings)
Data Modeling
Comparison of Data Modeling features of Databricks Data Intelligence Platform and SAS Data Management.

  Databricks Data Intelligence Platform: not rated (0 Ratings on all features)
  SAS Data Management: 6.7 (8 Ratings), 17% below category average
    Data model creation: 5.5 (6 Ratings)
    Metadata management: 7.4 (7 Ratings)
    Business rules and workflow: 6.6 (7 Ratings)
    Collaboration: 7.0 (7 Ratings)
    Testing and debugging: 6.1 (7 Ratings)
Data Governance
Comparison of Data Governance features of Databricks Data Intelligence Platform and SAS Data Management.
Medium to large data-throughput shops will benefit the most from Databricks Spark processing. Smaller shops may find the barrier to entry a bit too high for casual use cases. The overhead of kicking off a Spark compute job can actually make small workloads take longer, but past a certain data volume the performance returns cannot be beat.
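A minimal PySpark sketch of the kind of aggregation job where this trade-off shows up; the table path /data/events, the event_ts column, and the app name are all hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession is already provided as `spark`;
# getOrCreate() returns it there and builds a local session elsewhere.
spark = SparkSession.builder.appName("throughput-demo").getOrCreate()

# Cluster spin-up and job scheduling add a fixed overhead, so tiny inputs
# can run slower than on a single machine, while large inputs amortize
# that cost across distributed executors.
events = spark.read.parquet("/data/events")  # hypothetical path

daily_totals = (
    events
    .groupBy(F.to_date("event_ts").alias("day"))  # assumes an event_ts column
    .agg(F.count("*").alias("n_events"))
    .orderBy("day")
)
daily_totals.show()
```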
SAS Data Management is well suited when data sits in a system that needs complex transformation to be usable by an average user, for example when data resides in systems with very different connection speeds. Such data can be integrated and used together after passing through SAS Data Integration Studio, which removes timing issues from the users' worries. Where it is perhaps less appropriate is getting users who are not familiar with the source data to set up the load processes.
SAS/Access is great for manipulating large and complex databases.
SAS/Access makes it easy to format reports and graphics from your data.
Data management and storage in the Hadoop environment through SAS/Access allows rapid analysis and a simple programming language for all your data needs.
Connecting my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run it on the cluster: the old databricks-connect approach has many bugs and is hard to set up, and the new Databricks extension for Visual Studio Code doesn't let developers debug their code line by line (we can only run it). A sketch of the connection flow follows these notes.
Maybe offer a dedicated Databricks Lakehouse Platform IDE that users can use to develop locally.
Visualization in MLflow experiments could be enhanced.
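As a sketch of the local-development flow mentioned above, this is roughly how the newer Databricks Connect client attaches a local script to a remote cluster; the "DEFAULT" profile name is an assumption, and the pattern follows the documented DatabricksSession API rather than this reviewer's exact setup:

```python
# Requires the databricks-connect package (v13+), the successor to the
# older databricks-connect client the review found buggy.
from databricks.connect import DatabricksSession

# Assumes a "DEFAULT" profile in ~/.databrickscfg containing host, token,
# and cluster_id; all of these values are placeholders.
spark = DatabricksSession.builder.profile("DEFAULT").getOrCreate()

# The DataFrame is defined locally but executed on the remote cluster.
df = spark.range(10).withColumnRenamed("id", "n")
print(df.count())  # runs on the cluster, prints 10
```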
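On the MLflow point, the experiment UI charts whatever metrics a run logs, so richer logging is one partial workaround; a minimal sketch using the standard MLflow tracking API, with made-up run name and metric values:

```python
import mlflow

# Log a parameter and a per-step metric series; the MLflow experiment UI
# can then plot loss against step for this run.
with mlflow.start_run(run_name="demo-run"):
    mlflow.log_param("learning_rate", 0.01)
    for step, loss in enumerate([0.9, 0.5, 0.3, 0.2]):
        mlflow.log_metric("loss", loss, step=step)
```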
Because it is an amazing platform for designing experiments and delivering deep-dive analyses that require executing highly complex queries, and it allows sharing information and insights across the company through shared workspaces while keeping them secure.
In terms of graph generation and interaction, the UI and UX could be improved.
The main negative points are the use of a non-standard language for customizations and the poor integration with non-SAS systems. However, there is no doubt that it is a high-performance, powerful product capable of responding optimally to certain requirements.
One of the best customer and technology support experiences I have had in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching some limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with its customers and helps them with any given challenge.
With SAS, you pay a license fee annually to use this product. Support is incredible. You get what you pay for, whether it's SAS forums on the SAS support site, technical support tickets via email or phone calls, or example documentation. It's not open source. It's documented thoroughly, and it works.
The most important factor differentiating Databricks Lakehouse Platform from these other platforms is support for ACID transactions and the time travel feature. Native integration with managed MLflow is also a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution, and those platforms must be self-managed, which is another huge hassle.
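The time travel feature mentioned here comes from Delta Lake and can be exercised directly; a minimal sketch, where the table path /delta/orders, the registered table name orders, and version 0 are all hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided as `spark` on Databricks

# Read the current state of a Delta table (hypothetical path).
current = spark.read.format("delta").load("/delta/orders")

# Time travel: read the same table as it existed at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/delta/orders")

# The equivalent query in SQL, assuming a registered table named `orders`:
spark.sql("SELECT COUNT(*) FROM orders VERSION AS OF 0").show()
```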
Because of the ease of use of SAS DI and its data processing speed. There were lots of issues with AWS Redshift in our cloud environment in terms of making connections to the data sources, and we had to write complex queries while fetching the data.