Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingest, process, store, and expose data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
$0.07 per DBU
Treasure Data
Score 8.9 out of 10
Mid-Size Companies (51-1,000 employees)
Treasure Data is an enterprise customer data platform (CDP) that reclaims customer-centricity in the age of the digital customer. It does this by connecting all data and uniting teams and systems into one customer data platform to power purposeful engagements.
N/A
Pricing
Editions & Modules
Databricks Data Intelligence Platform:
Standard: $0.07 per DBU
Premium: $0.10 per DBU
Enterprise: $0.13 per DBU
Treasure Data:
No answers on this topic
Pricing Offerings
Databricks Data Intelligence Platform | Treasure Data
Free Trial: No | No
Free/Freemium Version: No | No
Premium Consulting/Integration Services: No | No
Entry-level Setup Fee: No setup fee | Optional
Best Alternatives (Databricks Data Intelligence Platform | Treasure Data)
Small Businesses: No answers on this topic | Klaviyo (Score 8.8 out of 10)
Medium-sized Companies: Amazon Athena (Score 8.9 out of 10) | Klaviyo (Score 8.8 out of 10)
Enterprises: Amazon Athena (Score 8.9 out of 10) | Bloomreach - The Agentic Platform for Personalization
Medium- to large-throughput data shops will benefit the most from Databricks Spark processing. Smaller use cases may find the barrier to entry a bit too high for casual work. Some of the overhead of kicking off a Spark compute job can actually make workloads take longer, but past a certain scale the performance returns cannot be beat.
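For illustration, a minimal PySpark sketch of the overhead effect this reviewer describes, with hypothetical sizes: on a tiny input, job scheduling and task setup dominate wall-clock time, while on a large input the same fixed cost is amortized away.

```python
import time

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("overhead-demo").getOrCreate()

# Tiny job: Spark's scheduling and task-setup overhead dominates the runtime.
t0 = time.time()
spark.range(1_000).agg(F.sum("id")).collect()
print(f"tiny job:  {time.time() - t0:.2f}s")

# Large job: the same fixed overhead is spread across many tasks, which is
# where the "performance returns cannot be beat" effect shows up.
t0 = time.time()
spark.range(1_000_000_000).agg(F.sum("id")).collect()
print(f"large job: {time.time() - t0:.2f}s")
```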
Treasure Data is well suited to integrating multiple data sources, including online and digital sources. It is also well suited to triggering audience activations for known customers based on their online activity, integrating third-party data, and activating target audiences to ad platforms.
The CDP provides a unified view of data from every touchpoint in the customer journey, down to the level of a single customer using the service. This is very helpful in making decisions about the service and its direction.
It provides a variety of extensions to bring your data together in one place and helps you do this easily.
The kits provided by Treasure Box offer basic but helpful methods for further development of services.
Connecting local code in Visual Studio Code to a Databricks Lakehouse Platform cluster so the code runs on the cluster: the old databricks-connect approach has many bugs and is hard to set up, and the new Databricks Lakehouse Platform extension for Visual Studio Code doesn't allow developers to debug their code line by line (we can only run it). A sketch of the newer connection approach follows this list.
Maybe offer a dedicated Databricks Lakehouse Platform IDE that users can use to develop locally.
Visualization in MLflow experiments could be enhanced.
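As referenced above, a minimal sketch of attaching a local script to a remote cluster with the newer Databricks Connect (Databricks Runtime 13+); the host, token, and cluster ID are placeholders, not real values.

```python
from databricks.connect import DatabricksSession

# Placeholders only; fill in real workspace values or use a configured profile.
spark = DatabricksSession.builder.remote(
    host="https://<workspace-url>",
    token="<personal-access-token>",
    cluster_id="<cluster-id>",
).getOrCreate()

# The DataFrame logic executes on the remote cluster while you edit and
# step through the driver-side code locally in VS Code.
print(spark.range(10).count())
```

Note that only the local driver-side code is steppable this way; the distributed execution itself still happens on the cluster, which is consistent with the reviewer's line-by-line debugging complaint.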
I do think that we will definitely be renewing. We are putting major resources, time, and effort into making Treasure Data an extension of our organization in many ways. We are working toward complete synergy with this product, and leadership is very excited about the direction we are heading as we become completely customer-centric.
Because it is an amazing platform for designing experiments and delivering deep-dive analyses that require highly complex queries, and it allows sharing information and insights across the company via shared workspaces while keeping everything secure.
In terms of graph generation and interaction, the UI and UX could be improved.
It's an easy platform to use, and it gives the user detailed logs about what is going on in the workflows, so someone who does not have a lot of experience can start working with it. The master segment usability is also awesome, as we can filter a lot of data exactly the way we want.
As Treasure Data has 24-hour support, every time we have big issues that impact the zones, we get immediate support from the Treasure Data team, so I would say that we do not have any issues with availability.
Since Treasure Data started holding a huge amount of data, we sometimes have problems with the workflow logs because we generate so many of them. But I have no complaints about integrations; it's really easy to integrate with other platforms.
One of the best customer and technology support experiences I have had in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching their limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with its customers and helps them with any given challenge.
The technical team has a good hold on the nuances of the data related to our organization. I have found the online technical support on their site quite responsive, including the L1 support. In cases where the L1 team isn't able to resolve an issue, they are prompt in getting the product team's input for a quick resolution.
I wasn't here for the training at the start, but I had a few training sessions with Treasure Data for a few functionalities, and they provided good explanations and great documentation, even when the project was in beta.
The most important factors differentiating the Databricks Lakehouse Platform from these other platforms are support for ACID transactions and the time travel feature. Native integration with managed MLflow is also a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution, and those platforms need to be self-managed, which is another huge hassle.
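For context on the time travel feature mentioned above, a minimal Delta Lake sketch (the table path is hypothetical): every write produces a new table version, and earlier versions stay queryable.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is predefined

# Read the current state of a Delta table (hypothetical path).
current = spark.read.format("delta").load("/mnt/demo/events")

# Time travel: read the same table as of an earlier version...
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/demo/events")

# ...or as of a timestamp, via SQL.
spark.sql("SELECT * FROM delta.`/mnt/demo/events` TIMESTAMP AS OF '2024-01-01'")
```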
We chose Treasure Data for the supreme customer service and the lack of hidden costs. We don't need to manage any infrastructure or scale anything to meet customer demand; Treasure Data handles everything and makes it easy for us to integrate and focus on the tasks at hand. There may be cheaper options, but we do not regret our decision to go with Treasure Data one bit.
We have built and supported our source-of-truth data tables using Treasure. This forms the foundation of our decision making.
Most of our Tableau data sources are created using a Treasure Data export executed by workflows on a daily basis, which gives us visibility into day-to-day performance and lets us communicate it to a wide variety of roles (a programmatic sketch follows this list).
We load custom data into our Salesforce instance, which allows us to trigger certain workflows and build accountability; i.e., a "Sale" will only count once a certain product-driven event occurs, based on data we pipe into Treasure and then into Salesforce.
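As referenced above, a minimal sketch of pulling data out of Treasure Data programmatically with the pytd client; the API key is a placeholder, and the query runs against Treasure Data's public sample_datasets database. A scheduled Treasure Workflow export like the Tableau one described here works similarly but is configured in the product.

```python
import pytd

client = pytd.Client(
    apikey="<td-api-key>",           # placeholder; use your own TD API key
    endpoint="https://api.treasuredata.com/",
    database="sample_datasets",      # Treasure Data's public sample database
)

# Run a Presto query; the result is a dict with "columns" and "data" keys.
res = client.query("SELECT symbol, COUNT(1) AS cnt FROM nasdaq GROUP BY symbol LIMIT 5")
for row in res["data"]:
    print(dict(zip(res["columns"], row)))
```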