Databricks, headquartered in San Francisco, offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingesting, processing, storing, and exposing data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
$0.07 per DBU
Oracle Exadata
Score 9.4 out of 10
N/A
Oracle Exadata is an enterprise database platform that runs Oracle Database workloads of any scale and criticality with high performance, availability, and security. Exadata’s scale-out design employs optimizations that let transaction processing, analytics, machine learning, and mixed workloads run faster. Consolidating diverse Oracle Database workloads on Exadata platforms in enterprise data centers, Oracle Cloud Infrastructure (OCI), and multicloud environments helps organizations increase…
Medium- to large-throughput shops will benefit the most from Databricks Spark processing. Smaller shops may find the barrier to entry a bit too high for casual use cases. The overhead of kicking off a Spark compute job can actually make small workloads take longer, but past a certain scale the performance returns cannot be beat.
First, get the database onto Oracle. If you are in an Oracle stack, it is much better to use Oracle products. If you are driving a Ferrari, you wouldn't put a Mercedes engine in it. If you are writing a query, you cannot rely on other brands. As an architect, when I look for a product, I look for performance.
The installation is easy because it works out of the box and you just start using it.
Prior to Oracle Exadata, we were using a standard Oracle RAC service. We were just waiting for this product to come out.
I'm currently building a data warehouse on Exadata. Before this solution, we were aiming for our ETLs to finish by 8 a.m. With the help of Exadata's special features, this was reduced to 3 a.m. This solution allows us to bring in more data within the same time period, which gives us more subject areas and therefore more reports for our users. Our ETL times dropped to 65% of what they were, and then to 50%.
I want to connect my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run the code on the cluster. The old databricks-connect approach has many bugs and is hard to set up, and the new Databricks Lakehouse Platform extension for VS Code doesn't allow developers to debug their code line by line (we can only run the code).
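For context, the newer Databricks Connect workflow that replaces the old databricks-connect setup typically looks something like the sketch below. The environment-variable setup and the DataFrame code are illustrative assumptions, not details taken from this review.

```python
# Minimal sketch: running local Python code against a remote Databricks cluster
# with Databricks Connect (databricks-connect >= 13).
# Assumes `pip install databricks-connect` and that DATABRICKS_HOST,
# DATABRICKS_TOKEN, and DATABRICKS_CLUSTER_ID are set in the environment
# (or defined in a ~/.databrickscfg profile); these values are placeholders.
from databricks.connect import DatabricksSession

# The builder picks up host/token/cluster id from the environment or profile.
spark = DatabricksSession.builder.getOrCreate()

# The DataFrame is defined locally but executed on the remote cluster.
df = spark.range(100).selectExpr("id", "id * 2 AS doubled")
print(df.limit(5).toPandas())
```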
Maybe provide a dedicated Databricks Lakehouse Platform IDE that users can use to develop locally.
Visualization in MLflow experiments could be enhanced.
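For readers unfamiliar with the MLflow experiment tracking mentioned above, the sketch below shows roughly what it looks like; a common workaround for the limited built-in charts is to log custom figures as artifacts. The experiment path, metric names, and values here are illustrative placeholders, not data from this review.

```python
# Minimal sketch of logging metrics and a custom chart to an MLflow experiment.
# Assumes `pip install mlflow matplotlib`; names and values are placeholders.
import mlflow
import matplotlib.pyplot as plt

mlflow.set_experiment("/Shared/demo-experiment")  # hypothetical experiment path

with mlflow.start_run(run_name="baseline"):
    # Per-step metrics appear as line charts in the MLflow experiment UI.
    losses = [0.9, 0.6, 0.45, 0.4]
    for step, loss in enumerate(losses):
        mlflow.log_metric("loss", loss, step=step)

    # Custom visualizations can be logged as figure artifacts to supplement
    # the built-in experiment charts.
    fig, ax = plt.subplots()
    ax.plot(losses, marker="o")
    ax.set_xlabel("step")
    ax.set_ylabel("loss")
    mlflow.log_figure(fig, "loss_curve.png")
```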
It is an amazing platform for designing experiments and delivering deep-dive analyses that require highly complex queries. It also lets you share information and insights across the company through its shared workspaces while keeping everything secure.
In terms of graph generation and interaction, the UI and UX could be improved.
Some of the best customer and technology support that I have ever experienced in my career. You get what you pay for, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching their limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with its customers and helps them with any given challenge.
The most important factor differentiating the Databricks Lakehouse Platform from these other platforms is its support for ACID transactions and the time-travel feature. Native integration with managed MLflow is also a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution, and those other platforms need to be self-managed, which is another huge hassle.
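For readers unfamiliar with the time-travel feature mentioned above, the sketch below shows roughly how it is used against a Delta table; the table path, version number, and timestamp are placeholders, not details from this review.

```python
# Minimal sketch of Delta Lake time travel from a Databricks notebook or job.
# Assumes an existing Delta table; the path, version, and timestamp below are
# placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided on Databricks

path = "/mnt/datalake/events"  # hypothetical Delta table location

# Current state of the table.
current_df = spark.read.format("delta").load(path)

# The same table as of an earlier version, or as of an earlier timestamp.
v0_df = spark.read.format("delta").option("versionAsOf", 0).load(path)
old_df = (
    spark.read.format("delta")
    .option("timestampAsOf", "2023-01-01")
    .load(path)
)

# ACID guarantees mean readers always see a consistent snapshot, even while
# concurrent writes (appends, merges) are committing to the same table.
print(current_df.count(), v0_df.count())
```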
Oracle Exadata Database Machine had the best performance overall, hands down. It clearly beat the competition: we were seeing a 1,000x improvement on SAP HANA, and Oracle Exadata Database Machine beat that without us refactoring our code. To achieve that in HANA, we had to refactor the code somewhat. This was for our limited POC of five use cases. Given the large number of stored procedures we had in Sybase, we need to capture more production metrics, but we are seeing incredible performance.
Single point of support from a single vendor, with both the machine and the database from Oracle, which costs us less.
With Exadata, we need less technical manpower and less technical support. Running business transactions on an integrated, centralized database helps us focus on other business needs.
We don't need to buy additional licenses and hardware for the next three to five years.