Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingesting, processing, storing, and exposing data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
$0.07 per DBU
erwin Data Modeler
Score 9.9 out of 10
N/A
erwin Data Modeler by Quest is a data modeling tool used to find, visualize, design, deploy, and standardize high-quality enterprise data assets. It can discover and document any data from anywhere for consistency, clarity, and artifact reuse across large-scale data integration, master data management, metadata management, Big Data, business intelligence, and analytics initiatives, accomplishing this while supporting data governance and intelligence efforts.
Medium- to large-throughput shops will benefit the most from Databricks Spark processing. Smaller, more casual use cases may find the barrier to entry a bit too high. The overhead of kicking off a Spark compute job can actually make small workloads take longer, but past a certain scale the performance returns can't be beaten.
I have had a chance to use a few other data modeling tools from Quest and Oracle, but I am most comfortable using erwin Data Modeler. They understand your data modeling needs and have designed the software to give you a feeling of completeness when you are designing a data model.
Reverse Engineering: I love the way we can import a SQL file containing schema metadata and generate an ER diagram from it. This is especially useful if you are implementing erwin Data Modeler for an existing database.
Forward Engineering: We use this feature very frequently: we make database changes in our physical and logical data models and then generate deployment scripts for those changes.
Physical vs. Logical Models: I like having my database model split into physical and logical models that are still linked to each other. Any change you make to the logical or physical model shows up in the other.
Connecting my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run the code on the cluster. The old databricks-connect approach has many bugs and is hard to set up. The new Databricks Lakehouse Platform extension for Visual Studio Code doesn't let developers debug their code line by line; we can only run it (there's a minimal sketch after this list).
Maybe have a specific Databricks Lakehouse Platform IDE that users could use to develop locally.
Visualization in MLflow experiments could be enhanced.
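As a rough illustration of the local-development workflow flagged in the first point above (not the reviewer's exact setup), here is a minimal Databricks Connect sketch; it assumes databricks-connect v13 or later, a host, token, and cluster ID configured in ~/.databrickscfg, and Databricks' public samples catalog:

    # Minimal Databricks Connect (v13+) sketch: pip install databricks-connect,
    # with host/token/cluster_id configured in ~/.databrickscfg.
    from databricks.connect import DatabricksSession

    # Spark session whose work executes remotely on the configured cluster.
    spark = DatabricksSession.builder.getOrCreate()

    # DataFrame operations run on the cluster; only results come back locally,
    # which is what lets you drive cluster jobs from a local editor.
    df = spark.read.table("samples.nyctaxi.trips")
    print(df.limit(5).toPandas())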
Because it is an amazing platform for designing experiments and delivering deep-dive analyses that require highly complex queries. It also lets you share information and insights across the company through shared workspaces while keeping them secure.
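For a sense of what such a deep-dive query looks like in a notebook, here is a hedged sketch; the sales table and its columns are hypothetical, and display() is the Databricks notebook helper for rendering shareable tables and charts:

    # Hypothetical analytical query in a Databricks notebook; assumes an
    # active `spark` session and a `sales` table in your workspace.
    result = spark.sql("""
        SELECT region,
               SUM(amount)                 AS revenue,
               COUNT(DISTINCT customer_id) AS customers
        FROM sales
        WHERE order_date >= '2024-01-01'
        GROUP BY region
        ORDER BY revenue DESC
    """)
    display(result)  # renders a table/chart that can be shared in the workspace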
In terms of graph generation and interaction, the UI and UX could be improved.
I have had a lot of experience using erwin Data Modeler for designing data models. I think it's pretty intuitive and easy to use, with enough features to represent your database requirements in the form of a model.
One of the best customer and technology support experiences I have had in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching their limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with its customers and helps them with any given challenge.
CA customer support and our account manager have been able to help us with any issues we have had, from managing our serial keys to the tickets we logged. Some aspects of key management have been difficult over the years, but support has usually worked through them with us.
The most important factor differentiating the Databricks Lakehouse Platform from these other platforms is support for ACID transactions and the time travel feature (see the sketch below). Also, native integration with managed MLflow is a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution. The other platforms also need to be self-managed, which is another huge hassle.
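A hedged sketch of the time travel feature mentioned above, assuming an active Delta-enabled spark session; the table path is hypothetical:

    # Delta Lake time travel: read an earlier snapshot of a table.
    path = "/mnt/data/events"  # hypothetical Delta table location

    # By version number...
    as_of_v5 = spark.read.format("delta").option("versionAsOf", 5).load(path)

    # ...or by timestamp.
    as_of_ts = (
        spark.read.format("delta")
        .option("timestampAsOf", "2024-01-01")
        .load(path)
    )

Each successful write to a Delta table is an ACID transaction that commits a new version to the table's transaction log, which is what makes both features possible.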
Not listed, but I've only used alternatives built into something like the SQuirreL SQL editor. That one is semi-functional but lacks many features and, in some instances, is just plain wrong. The only pro is that it's freely available and works over ODBC. I've tried some of the other free ones, like Creately, but didn't have much success.