Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingesting, processing, storing, and exposing data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
$0.07 per DBU
Matillion
Score 7.1 out of 10
N/A
Matillion is a data pipeline platform used to build and manage pipelines. Matillion empowers data teams with no-code and AI capabilities to be more productive, integrating data wherever it lives and delivering data that’s ready for AI and analytics.
$2.50 per credit, pay as you go per user
Pricing

Editions & Modules

Databricks Data Intelligence Platform:
- Standard: $0.07 per DBU
- Premium: $0.10 per DBU
- Enterprise: $0.13 per DBU

Matillion:
- Developer (For Individuals): $2.50/credit, pay as you go per user
- Basic: $1,000 per month, 500 prepaid credits (additional credits: $2.18/credit)
- Advanced: $2,000 per month, 750 prepaid credits (additional credits: $2.73/credit)
- Enterprise: Request a Quote
Pricing Offerings

Databricks Data Intelligence Platform:
- Free Trial: No
- Free/Freemium Version: No
- Premium Consulting/Integration Services: No
- Entry-level Setup Fee: No setup fee
- Additional Details: —

Matillion:
- Free Trial: Yes
- Free/Freemium Version: No
- Premium Consulting/Integration Services: Yes
- Entry-level Setup Fee: No setup fee
- Additional Details: Billed directly via cloud marketplace on an hourly basis, with annual subscriptions available depending on the customer's cloud data warehouse provider.
Both the Databricks platform and dbt Cloud are more powerful in terms of the development lifecycle and the data use cases they cover. They are also more complex and require specialized data engineering skills to use. Matillion has a lower barrier to entry for small …
Overall, Matillion is an excellent choice for businesses that need a powerful and easy-to-use data integration and transformation platform that can handle large volumes of data. While it may be a bit pricey for some organizations, the platform's reliability, scalability, and …
Medium to large data-throughput shops will benefit the most from Databricks Spark processing. Smaller shops may find the barrier to entry a bit too high for casual use cases. The overhead of kicking off a Spark compute job can actually make your workloads take longer, but past a certain scale the performance returns cannot be beaten.
In a fast-growing, startup-like environment, you'd want a graphical representation of all your data work instead of using tools like Airflow. It's good for dealing with many ad hoc tasks, including in-house and external APIs, data lakes, and data warehouses. It's also cheaper.
Connecting my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run the code on the cluster. The old databricks-connect approach has many bugs and is hard to set up, and the new Databricks Lakehouse Platform extension for Visual Studio Code doesn't allow developers to debug their code line by line (we can only run the code); a rough sketch of the newer Databricks Connect approach follows this list.
Maybe have a dedicated Databricks Lakehouse Platform IDE that Databricks Lakehouse Platform users can use to develop locally.
Visualization in MLflow experiments could be enhanced.
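For readers hitting the same local-development friction described above, the sketch below shows roughly what connecting a local script to a remote cluster looks like with the newer, Spark Connect-based Databricks Connect (DatabricksSession) API. This is only a minimal sketch: the workspace URL, token, and cluster ID are placeholders, and the exact configuration options can vary by Databricks Connect version.

# Minimal sketch: run locally written PySpark code against a remote Databricks cluster
# using Databricks Connect (successor to the old databricks-connect setup).
# The host, token, and cluster_id values below are placeholders, not real credentials.
from databricks.connect import DatabricksSession

spark = (
    DatabricksSession.builder
    .remote(
        host="https://<your-workspace>.cloud.databricks.com",  # workspace URL (placeholder)
        token="<personal-access-token>",                        # PAT (placeholder)
        cluster_id="<cluster-id>",                              # target cluster (placeholder)
    )
    .getOrCreate()
)

# This DataFrame is defined in the local script, but the work runs on the remote cluster.
df = spark.range(10).withColumnRenamed("id", "n")
print(df.count())

Because the driver-side Python runs locally while Spark operations execute on the cluster, breakpoints can generally be set in a local IDE on the Python code itself, which addresses part of the debugging complaint above.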
Matillion is brilliant at importing data; it would be amazing to have more ways to export data, from emailed exports to API pushes.
Any Python that takes more than a few lines of code requires an external server to run it. It would be great to have more integration (perhaps in a connected virtual environment) to easily integrate customized code.
Troubleshooting server logs requires quite a bit of technical expertise. More human readable detailed error handling would be greatly appreciated.
Based on our current experience with Matillion, we are likely to renew with the current feature set, but we will also look for improvement in several areas, including scalability and dependability. 1. Connectors: It offers various connector options, but the coverage isn't foolproof, which we will be watching as we grow. 2. Scalability: As usage increases, we want the Matillion system to be more stable.
Because it is an amazing platform for designing experiments and delivering deep-dive analyses that require executing highly complex queries. It also allows sharing information and insights across the company through shared workspaces, while keeping everything secure.
In terms of graph generation and interaction, it could improve its UI and UX.
We are able to bring on new resources and teach them how to use Matillion without having to invest a significant amount of time. We prefer looking for resources with any type of ETL skill set and feel that they can learn Matillion without a problem. In addition, the prebuilt objects cover more than 95% of our use cases, and we do not have to build much from scratch.
Some of the best customer and technology support I have ever experienced in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of the customer support of SAS in the 2000s, when the tools were reaching their limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with their customers and helps them with any given challenge.
Overall, I've found Matillion to be responsive and considerate. I feel like they value us as a customer even when I know they have customers who spend more on the product than we do. That speaks to a motive higher than money. They want to make a good product and a good experience for their customers. If I have any complaint, it's that support sometimes feels community-oriented. It isn't always immediately clear to me that my support requests are going to a support engineer and not to the community at large. Usually, though, after a bit of conversation, it's clear that Matillion is watching and responding. And responses are generally quick in coming.
The most important factor differentiating the Databricks Lakehouse Platform from these other platforms is its support for ACID transactions and the time travel feature. Also, native integration with managed MLflow is a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution. Other platforms need to be self-managed, which is another huge hassle.
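As a rough illustration of the time travel feature mentioned above, here is a minimal PySpark sketch of reading earlier states of a Delta table. The table path, version number, and timestamp are placeholders, and it assumes a Spark session with Delta Lake available (as on a Databricks cluster).

# Minimal sketch of Delta Lake time travel from PySpark.
# "/mnt/delta/events" is a placeholder path, not a real table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-time-travel-demo").getOrCreate()

# Current state of the Delta table.
current_df = spark.read.format("delta").load("/mnt/delta/events")

# The same table as of an earlier version; the ACID transaction log makes this possible.
v0_df = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/delta/events")

# Or the table as it was at a specific point in time.
snapshot_df = (
    spark.read.format("delta")
    .option("timestampAsOf", "2024-01-01")
    .load("/mnt/delta/events")
)

print(current_df.count(), v0_df.count())

The available versions and timestamps depend on the table's retention settings and transaction history, so treat the values above as examples only.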
Fivetran offers a managed service and pre-configured schemas/models for data loading, which means much less administrative work for initial setup and ongoing maintenance, but it comes at a much higher price tag. So, knowing where your sweet spot is on the build-vs-buy spectrum is essential to deciding which tool fits better. For the transformation part, dbt is purely (SQL) code-based, so it mainly comes down to whether your developers prefer a GUI or a code-based approach.
We're using Matillion on EC2 instances, and we have about 20 projects for our clients in the same instance. Sometimes we struggle to manage schedules across all projects because thread management is not visible, and we can't see processes at the instance level.