Databricks, headquartered in San Francisco, offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingesting, processing, storing, and exposing data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
$0.07 per DBU
Toad Data Point
Score 7.9 out of 10
Pricing: N/A
Toad Data Point is a cross-platform, self-service data-integration tool that simplifies data access, preparation, and provisioning. It provides data connectivity and desktop data integration, and its Workbook interface gives business users simple-to-use visual query building and workflow automation.
Shops with medium-to-large data throughput will benefit the most from Databricks' Spark processing. Smaller teams may find the barrier to entry a bit too high for casual use. The overhead of kicking off a Spark compute job can actually make small workloads take longer, but past a certain scale the performance returns can't be beat.
Appropriate for general querying and some DBA work. It's the universal least-offensive solution for most environments - not best of breed, but not subject to unusual/extensive requirements. It just works. On the other hand, some functionality (e.g., data import/export, snippets) is perfunctory and minimal and seems to be either difficult or impossible to automate. If you need to streamline those operations, you'll be forced to rely on third-party solutions that mostly work on top of (instead of with) TOAD.
Connecting my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run the code on the cluster: the old databricks-connect approach has many bugs and is hard to set up, and the new Databricks Lakehouse Platform extension for VS Code doesn't allow developers to debug their code line by line (we can only run it). A session-setup sketch follows this list.
Maybe offer a dedicated Databricks Lakehouse Platform IDE that users can rely on to develop locally.
Visualization in MLflow experiments could be enhanced.
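For readers weighing the same local-development setup, here is a minimal sketch of connecting a local script to a cluster with the newer databricks-connect (the "v2" flavor for Databricks Runtime 13+); the workspace URL, token, and cluster ID are placeholders, and the available builder options can vary by databricks-connect version.

    # Minimal databricks-connect (v2) session sketch.
    # The host, token, and cluster_id values are placeholders.
    from databricks.connect import DatabricksSession

    spark = (
        DatabricksSession.builder
        .remote(
            host="https://<workspace>.cloud.databricks.com",
            token="<personal-access-token>",
            cluster_id="<cluster-id>",
        )
        .getOrCreate()
    )

    # This executes on the remote cluster, not on the laptop.
    spark.range(10).show()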
Workflows are a relatively new feature; Quest keeps adding functionality, and they are already useful.
Would be nice if the 'Automate' feature was a bit easier to use.
Would be nice if some of the SQL Editor features in the traditional interface worked better in the new workflow interface (although, these are being fixed with each release).
It is an amazing platform for designing experiments and delivering deep-dive analyses that require highly complex queries, and its shared workspaces let you share information and insights across the company while keeping everything secure.
In terms of graph generation and interaction, it could improve its UI and UX.
I find Toad Data Point easy to use for both the novice and the experienced business analyst. If all you desire is to access data and create spreadsheets, this is a snap. Toad Data Point actually has cool data-analysis features built into it, and the newer workflow interface makes automating steps simple.
Some of the best customer and technology support that I have ever experienced in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching some limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with its customers and helps them with any given challenge.
The most important factor differentiating the Databricks Lakehouse Platform from these other platforms is its support for ACID transactions and the time travel feature. Native integration with managed MLflow is also a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution, and those platforms need to be self-managed, which is another huge hassle.
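To make the time-travel point concrete, here is a hedged PySpark sketch of reading a Delta table as of an earlier version or timestamp; the table path and timestamp are placeholders, and it assumes a Spark session with Delta Lake available and a table that already has prior versions in its history.

    # Delta Lake time travel sketch -- '/tmp/events' is a placeholder path.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Read the table as of a specific version number...
    v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events")

    # ...or as of a timestamp, to audit or reproduce an earlier state.
    snapshot = (
        spark.read.format("delta")
        .option("timestampAsOf", "2023-01-01")
        .load("/tmp/events")
    )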
It is the least common denominator - not particularly optimized for our environment or workflows.
Hangs or slowdowns add anywhere from 5% to 7% to projects utilizing large/complicated data sets. (This could be due to other IT-imposed constraints and not entirely due to TOAD.)
Trying to perform some operations requires reading documentation and experimenting in order to figure out the TOAD-specific approaches and commands.
It just works (when we understand it). Updates don't break things and things don't suddenly start behaving differently. Best of all, we don't mysteriously lose functionality.