Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingesting, processing, storing, and exposing data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
$0.07 per DBU
Hortonworks Data Platform
Score 5.0 out of 10
N/A
Hortonworks Data Platform (HDP) is an open source framework for distributed storage and processing of large, multi-source data sets. HDP modernizes IT infrastructure and keeps data secure—in the cloud or on-premises—while helping to drive new revenue streams, improve customer experience, and control costs.
Hortonworks merged with Cloudera in early 2019.
N/A
Pricing

Editions & Modules

Databricks Data Intelligence Platform:
  Standard: $0.07 per DBU
  Premium: $0.10 per DBU
  Enterprise: $0.13 per DBU

Hortonworks Data Platform:
  No answers on this topic
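To make the per-DBU rates above concrete, here is a minimal sketch of estimating a monthly Databricks bill. The usage figures are hypothetical assumptions, and actual bills also include the cloud provider's separate compute charges.

# Minimal sketch: estimate monthly Databricks cost from the per-DBU list rates above.
# The DBU consumption and hours are hypothetical; real DBU burn depends on the
# workload type and cluster size, and cloud VM charges are billed separately.
RATES_PER_DBU = {"Standard": 0.07, "Premium": 0.10, "Enterprise": 0.13}

dbus_per_hour = 4        # assumed DBU consumption of one modest cluster
hours_per_month = 160    # assumed usage: 8 hours/day, 20 days/month

for edition, rate in RATES_PER_DBU.items():
    monthly = rate * dbus_per_hour * hours_per_month
    print(f"{edition}: ${monthly:.2f}/month for {dbus_per_hour * hours_per_month} DBUs")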
Offerings

Pricing Offerings (Databricks Data Intelligence Platform / Hortonworks Data Platform)
Free Trial: No / No
Free/Freemium Version: No / No
Premium Consulting/Integration Services: No / No
Entry-level Setup Fee: No setup fee / No setup fee
Additional Details: none provided / none provided
More Pricing Information
Community Pulse
Databricks Data Intelligence Platform / Hortonworks Data Platform

Considered Both Products

Databricks Data Intelligence Platform
A Verified User (Engineer) chose Databricks Data Intelligence Platform:
The most important differentiating factor for Databricks Lakehouse Platform from these other platforms is support for ACID transactions and the time travel feature. Also, native integration with managed MLflow is a plus. EMR, Cloudera, and Hortonworks are not as optimized when …
Medium-to-large data-throughput shops will benefit the most from Databricks Spark processing. Smaller shops may find the barrier to entry a bit too high for casual use cases. Some of the overhead of kicking off a Spark compute job can actually make workloads take longer, but past a certain point the performance returns can't be beat.
I find HDP easy to use, and it solves most of the problems for people looking to manage their big data. Evaluating the Hortonworks Data Platform is easy since it is free to download and install on your cluster. The single-node cluster available as a Sandbox also makes POCs easy.
It does a good job of packaging many big data components into bundles and lets you use the ones you are interested in or need. It supports an extensive list of components, which lets us solve many problems.
It provides the ability to manage installations and maintenance using Apache Ambari. Management packs help us install and upgrade components easily, and we can add or remove components and hosts and perform upgrades in a convenient manner. It also provides alerts and notifications and monitors the environment (a sketch of the Ambari API this relies on follows after these points).
What they excel at is packaging relevant open source components that solve problems and complement each other, as well as contributing enhancements back to those components. They do a great job in the community of staying on top of what would be useful to users, fixing bugs, and working with other companies and individuals to make the platform better.
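To illustrate the Ambari-based management workflow described above, here is a minimal sketch that lists a cluster's hosts and services through Ambari's REST API. The Ambari URL, cluster name, and credentials are placeholder assumptions, not details taken from these reviews.

# Minimal sketch: query Apache Ambari's REST API for the hosts and services it manages.
# The Ambari server URL, cluster name, and credentials are hypothetical placeholders.
import requests

AMBARI = "http://ambari.example.com:8080/api/v1"   # assumed Ambari server
CLUSTER = "my_hdp_cluster"                          # assumed cluster name
AUTH = ("admin", "admin")                           # default credentials; change in production
HEADERS = {"X-Requested-By": "ambari"}              # header Ambari expects on write calls

def list_hosts():
    # GET /clusters/{name}/hosts returns the hosts registered to the cluster
    r = requests.get(f"{AMBARI}/clusters/{CLUSTER}/hosts", auth=AUTH, headers=HEADERS)
    r.raise_for_status()
    return [item["Hosts"]["host_name"] for item in r.json()["items"]]

def list_services():
    # GET /clusters/{name}/services returns installed services (HDFS, YARN, Hive, ...)
    r = requests.get(f"{AMBARI}/clusters/{CLUSTER}/services", auth=AUTH, headers=HEADERS)
    r.raise_for_status()
    return [item["ServiceInfo"]["service_name"] for item in r.json()["items"]]

print("Hosts:", list_hosts())
print("Services:", list_services())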
Connecting my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run the code on the cluster: the old databricks-connect approach has many bugs and is hard to set up, and the new Databricks extension for Visual Studio Code doesn't let developers debug their code line by line (we can only run the code). (A connection sketch follows after these points.)
Maybe have a dedicated Databricks Lakehouse Platform IDE that Databricks Lakehouse Platform users can use to develop locally.
Visualization in MLflow experiments could be enhanced.
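As a reference point for the local-development complaint above, here is a minimal sketch of connecting local Python code to a remote Databricks cluster with the newer Databricks Connect package (databricks-connect for Databricks Runtime 13+). The workspace URL, token, and cluster ID are placeholders, and this is one possible setup rather than the reviewer's exact configuration.

# Minimal sketch: run local PySpark code against a remote Databricks cluster
# via Databricks Connect. All connection values below are hypothetical placeholders.
from databricks.connect import DatabricksSession

spark = (
    DatabricksSession.builder.remote(
        host="https://my-workspace.cloud.databricks.com",  # assumed workspace URL
        token="dapiXXXXXXXXXXXXXXXX",                       # personal access token
        cluster_id="0123-456789-abcdefgh",                  # assumed cluster ID
    ).getOrCreate()
)

# The returned session behaves like a normal SparkSession, so code written
# locally (e.g., in VS Code) executes on the remote cluster:
df = spark.range(10).toDF("n")
print(df.selectExpr("sum(n) AS total").collect())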
Since it doesn't come with proprietary tools for big data management, additional integration is needed (for query handling, search, etc.).
It was very straightforward to store clinical data without relations, such as data from the sensors of a medical device, but it has limitations when that data needs to be combined with other clinical data in structured formats (e.g., lab results, diagnoses).
The overall look and feel of the front-end management tools (e.g., monitoring) is not good. It is not bad, but it doesn't look professional.
Because it is an amazing platform for designing experiments and delivering deep-dive analyses that require executing highly complex queries, and it allows sharing information and insights across the company through shared workspaces while keeping everything secured.
In terms of graph generation and interaction, it could improve its UI and UX.
Some of the best customer and technology support that I have ever experienced in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching their limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with their customers and helps them with any given challenge.
The most important factors differentiating the Databricks Lakehouse Platform from these other platforms are support for ACID transactions and the time travel feature (a time-travel sketch appears at the end of this section). Also, native integration with managed MLflow is a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution, and those other platforms need to be self-managed, which is another huge hassle.
We chose [Hortonworks Data Platform] because it's free and because [Hortonworks] was an IBM partner, suggested as the big data platform after the BigInsights platform.
You can install it on multiple physical computers without high specs, and then use it to learn how to deploy and configure a complete big data cluster.
We also installed it on a cloud infrastructure of 5 virtual machines.
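To make the ACID-transaction and time-travel point from the comparison above concrete, here is a minimal sketch of Delta Lake time travel as it works on Databricks. The table path and version number are placeholder assumptions for illustration; on Databricks the Delta Lake libraries are built in, while running this locally requires the delta-spark package.

# Minimal sketch: ACID writes to a Delta table, then time travel to an older version.
# The table path is a hypothetical placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # on Databricks this session already exists

path = "/tmp/demo_delta_table"               # assumed table location

# Two transactional writes create versions 0 and 1 of the table.
spark.range(5).write.format("delta").mode("overwrite").save(path)
spark.range(5, 10).write.format("delta").mode("append").save(path)

# Time travel: read the table as it was at an earlier version (or use timestampAsOf).
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
print(v0.count())   # 5 rows: the state before the append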