Confluent Cloud is a cloud-native service for Apache Kafka used to connect and process data in real time with a fully managed data streaming platform. Confluent Platform is the self-managed version.
$385 per month
Databricks Data Intelligence Platform
Score 8.7 out of 10
Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey to ingest, process, store, and expose data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
$0.07 per DBU
Pricing

Editions & Modules

Confluent
Basic: $0
Standard: starting at ~$385 per month
Enterprise: starting at ~$1,150 per month

Databricks Data Intelligence Platform
Standard: $0.07 per DBU
Premium: $0.10 per DBU
Enterprise: $0.13 per DBU
Pricing Offerings (Confluent / Databricks Data Intelligence Platform)

Free Trial: No / No
Free/Freemium Version: Yes / No
Premium Consulting/Integration Services: No / No
Entry-level Setup Fee: No setup fee / No setup fee
Additional Details
Confluent monthly bills are based upon resource consumption, i.e., you are only charged for the resources you use when you actually use them:
Stream: Kafka clusters are billed for eCKUs/CKUs ($/hour), networking ($/GB), and storage ($/GB-hour).
Connect: Use of connectors is billed based on throughput ($/GB) and a task base price ($/task/hour).
Process: Use of stream processing with Confluent Cloud for Apache Flink is calculated based on CFUs ($/minute).
Govern: Use of Stream Governance is billed based on environment ($/hour).
Confluent storage and throughput are calculated in binary gigabytes (GB), where 1 GB is 2^30 bytes. This unit of measurement is also known as a gibibyte (GiB). Please also note that all prices are stated in United States dollars unless specifically stated otherwise.
All billing computations are conducted in Coordinated Universal Time (UTC).
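As a rough illustration of how these dimensions combine into a monthly bill, the sketch below adds up Stream, Connect, Process, and Govern charges for a hypothetical month; every rate and usage figure in it is an assumed placeholder for illustration, not a published Confluent price.

```python
# Hypothetical illustration of Confluent Cloud's consumption-based billing.
# All rates and usage numbers below are made-up assumptions, not real prices.

HOURS_PER_MONTH = 730            # average hours in a month
GIB = 2**30                      # billed "GB" means a binary gigabyte (GiB)

# Assumed usage for one month
cku_hours         = 2 * HOURS_PER_MONTH        # two CKUs running all month
networking_gib    = 500                        # data transferred
storage_gib_hours = 1_000 * HOURS_PER_MONTH    # 1,000 GiB retained all month
connector_gib     = 200                        # connector throughput
task_hours        = 4 * HOURS_PER_MONTH        # four connector tasks
cfu_minutes       = 10_000                     # Flink stream processing
env_hours         = HOURS_PER_MONTH            # Stream Governance environment

# Assumed unit rates in USD (placeholders only)
rates = {
    "cku_hour": 1.50, "network_gib": 0.04, "storage_gib_hour": 0.0001,
    "connector_gib": 0.025, "task_hour": 0.05, "cfu_minute": 0.004,
    "env_hour": 0.10,
}

stream  = (cku_hours * rates["cku_hour"]
           + networking_gib * rates["network_gib"]
           + storage_gib_hours * rates["storage_gib_hour"])
connect = connector_gib * rates["connector_gib"] + task_hours * rates["task_hour"]
process = cfu_minutes * rates["cfu_minute"]
govern  = env_hours * rates["env_hour"]

total = stream + connect + process + govern
print(f"Estimated monthly bill: ${total:,.2f}")
```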
Community Pulse
Confluent
Databricks Data Intelligence Platform
Considered Both Products
Confluent
Verified User
Director
Chose Confluent
If you need to stream data, whether real-time or segmented structured data, then Confluent is a great platform to do so with. You won't run into the packet transfer size limitations that other platforms have. On-prem, cloud, and managed cloud offerings make it very flexible no matter how you choose to implement.
Medium to large data-throughput shops will benefit the most from Databricks' Spark processing. Smaller shops may find the barrier to entry a bit too high for casual use cases. Some of the overhead of kicking off a Spark compute job can actually lead to your workloads taking longer, but past a certain point the performance returns cannot be beat.
Connecting my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run the code on the cluster. The old databricks-connect approach has many bugs and is hard to set up. The new Databricks Lakehouse Platform extension for Visual Studio Code doesn't allow developers to debug their code line by line (we can only run the code).
Maybe provide a dedicated Databricks Lakehouse Platform IDE that users can use to develop locally.
Visualization in MLflow experiments could be enhanced.
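For context, here is a minimal sketch of the local-development workflow the first point above refers to, assuming the newer Spark Connect-based databricks-connect package; the workspace host, access token, cluster ID, and table name are placeholders.

```python
# Minimal sketch: run locally written PySpark code on a remote Databricks
# cluster via Databricks Connect (the newer, Spark Connect-based version).
# Workspace host, access token, cluster ID, and table name are placeholders.
from databricks.connect import DatabricksSession

spark = (
    DatabricksSession.builder
    .remote(
        host="https://<workspace>.cloud.databricks.com",
        token="<personal-access-token>",
        cluster_id="<cluster-id>",
    )
    .getOrCreate()
)

# The query below is defined locally but executed on the remote cluster.
df = spark.read.table("samples.nyctaxi.trips").limit(10)
df.show()
```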
Because it is an amazing platform for designing experiments and delivering deep-dive analyses that require the execution of highly complex queries, and it allows sharing information and insights across the company through shared workspaces while keeping them secure.
In terms of graph generation and interaction, it could improve its UI and UX.
The support from the Confluent platform is great and satisfying. We have been working with Confluent for more than a year now. They sent out resident architects to help us set up a Confluent cluster on our cloud and helped us troubleshoot problems we encountered. Overall, it has been a great experience working with the Confluent Platform.
Some of the best customer and technology support that I have ever experienced in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching some limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with their customers and helps them with any given challenge.
For our use case it was very important that the technology we were working with fit into our Azure architecture and met our data processing size requirements to stream data within certain SLAs. Confluent more than met our performance requirements, and compared to the other options' scale and cost to run, it was more than financially viable as a platform solution for our global operations.
The most important factor differentiating the Databricks Lakehouse Platform from these other platforms is its support for ACID transactions and the time travel feature. Also, native integration with managed MLflow is a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution. Other platforms need to be self-managed, which is another huge hassle.
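For readers unfamiliar with the time travel feature mentioned here, a minimal sketch of querying earlier versions of a Delta table on Databricks follows; the table name, version number, and timestamp are hypothetical, and `spark` is the SparkSession a Databricks notebook already provides.

```python
# Minimal sketch of Delta Lake time travel (PySpark on Databricks).
# Table name, version number, and timestamp are hypothetical; `spark` is
# the SparkSession that a Databricks notebook provides by default.

# Current state of the table
current_df = spark.read.table("main.sales.orders")

# The same table as it looked at an earlier version...
v3_df = spark.read.option("versionAsOf", 3).table("main.sales.orders")

# ...or as of a point in time
jan_df = spark.read.option("timestampAsOf", "2024-01-15").table("main.sales.orders")

# Compare row counts across versions
print(current_df.count(), v3_df.count(), jan_df.count())
```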