Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingesting, processing, storing, and exposing data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
$0.07 per DBU
H2O.ai
Score 6.3 out of 10
An open-source, end-to-end GenAI platform for air-gapped, on-premises, or cloud VPC deployments. Users can query and summarize documents or just chat with local private GPT LLMs using h2oGPT, an Apache V2 open-source project. The commercially available Enterprise h2oGPTe provides information retrieval on internal data, privately hosts LLMs, and secures data.
Pricing
Editions & Modules

Databricks Data Intelligence Platform:
Standard: $0.07 per DBU
Premium: $0.10 per DBU
Enterprise: $0.13 per DBU

H2O.ai:
No answers on this topic
Pricing Offerings (Databricks Data Intelligence Platform / H2O.ai)
Free Trial: No / No
Free/Freemium Version: No / Yes
Premium Consulting/Integration Services: No / No
Entry-level Setup Fee: No setup fee / No setup fee
Additional Details: — / —
Community Pulse
Databricks Data Intelligence Platform
H2O.ai
Considered Both Products
Databricks Data Intelligence Platform
Verified User
Team Lead
Chose Databricks Data Intelligence Platform
Databricks was picked among other competitors. The closest competition in our organization was H2O.ai, and Databricks came out to be more useful for ROI and time to market in our internal research. We could have used AWS products; however, Databricks notebooks and the ability to launch …
Medium to large data-throughput shops will benefit the most from Databricks Spark processing. Smaller use cases may find the barrier to entry a bit too high for casual use. Some of the overhead of kicking off a Spark compute job can actually lead to workloads taking longer, but past a certain point the performance returns cannot be beat.
H2O is most suited if you want to build and train a model in little time; it makes life very simple. It has support for R, Python, and Java, so there is no dependency on a single programming language. It's very simple to use. If you want to modify or tweak your ML algorithm, then H2O is not suitable; you can't develop a model from scratch.
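To illustrate the kind of quick model building the reviewer describes, here is a minimal sketch of an H2O AutoML run from Python; the CSV path "train.csv" and the target column "response" are placeholders, not taken from the review.

    import h2o
    from h2o.automl import H2OAutoML

    h2o.init()                                        # start or attach to a local H2O cluster
    frame = h2o.import_file("train.csv")              # placeholder dataset
    frame["response"] = frame["response"].asfactor()  # treat the target as categorical

    aml = H2OAutoML(max_runtime_secs=120, seed=1)     # time-boxed automatic model search
    aml.train(y="response", training_frame=frame)
    print(aml.leaderboard.head())                     # models ranked within the time budget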
Connect my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run the code on the cluster. The old databricks-connect approach has many bugs and is hard to set up. The new Databricks Lakehouse Platform extension for Visual Studio Code doesn't allow developers to debug their code line by line (we can only run the code). See the sketch after this list.
Maybe have a dedicated Databricks Lakehouse Platform IDE that users can use to develop locally.
Visualization in MLflow experiments could be enhanced.
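As a reference for the local-development workflow mentioned above, here is a minimal sketch of connecting local Python code to a remote cluster with the newer Databricks Connect; the workspace URL, token, and cluster ID are placeholders, and the sample table name is an assumption.

    from databricks.connect import DatabricksSession   # databricks-connect >= 13

    # Build a Spark session that runs against a remote Databricks cluster.
    spark = DatabricksSession.builder.remote(
        host="https://<workspace-url>",
        token="<personal-access-token>",
        cluster_id="<cluster-id>",
    ).getOrCreate()

    # DataFrame code below executes on the cluster, not on the laptop.
    df = spark.read.table("samples.nyctaxi.trips")      # assumed sample table
    print(df.limit(5).toPandas())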
Because it is an amazing platform for designing experiments and delivering deep-dive analyses that require the execution of highly complex queries. It also allows information and insights to be shared across the company through shared workspaces, while keeping the data secured.
In terms of graph generation and interaction, it could improve its UI and UX.
Some of the best customer and technology support that I have ever experienced in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching their limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with its customers and helps them with any given challenge.
The most important factor differentiating the Databricks Lakehouse Platform from these other platforms is its support for ACID transactions and the time travel feature. Also, native integration with managed MLflow is a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution. Other platforms need to be self-managed, which is another huge hassle.
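The time travel feature mentioned here refers to Delta Lake's versioned tables. A minimal sketch, assuming an existing Delta table at the placeholder path "/delta/events" and an active SparkSession named spark (as in a Databricks notebook):

    # Read the current state of the table.
    current = spark.read.format("delta").load("/delta/events")

    # Read an earlier snapshot by version number (time travel).
    as_of_v5 = spark.read.format("delta").option("versionAsOf", 5).load("/delta/events")

    # The same kind of snapshot query expressed in SQL, by timestamp.
    snapshot = spark.sql(
        "SELECT * FROM delta.`/delta/events` TIMESTAMP AS OF '2024-01-01'"
    )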
Both are open source (though H2O only up to some level). Both include deep learning, but H2O is not focused solely on deep learning, while TensorFlow has a "laser" focus on it. H2O is also more focused on scalability. H2O should be looked at not as a competitor but rather as a complementary tool. The use case is usually not only about the algorithms, but also about the data model, data logistics, and accessibility. H2O is more accessible due to its UI. Also, both can be accessed from Python. The community around TensorFlow seems larger than that of H2O.
Positive impact: savings in infrastructure expenses; compared to other bulky tools, this costs a fraction.
Positive impact: the ability to get quick fixes from H2O when problems arise, compared to waiting months or years for new releases from other vendors.
Positive impact: access to the H2O core team and the ability to get features needed for our business quickly added to the core H2O product.