Likelihood to Recommend

Sqoop is great for moving data between a JDBC-compliant database and a Hadoop environment. Sqoop is built for those who need a few simple CLI options to import a selection of database tables into Hadoop, run large-dataset analysis that could not easily be done on that database system due to resource constraints, and then export the results back into that database (or another one). Sqoop falls short when some extra, customized processing is needed between the database extract and the Hadoop load, in which case Apache Spark's JDBC utilities might be preferred.
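To make that last point concrete, here is a minimal PySpark sketch of the pattern the review describes: extract over JDBC, land the raw data in Hadoop, apply custom processing, and export a result back to a database. The connection URL, table names, paths, and transformation are hypothetical placeholders, not details from the review.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("jdbc-extract-transform-export").getOrCreate()

    # Hypothetical JDBC source; with Sqoop this step would be a plain `sqoop import`.
    jdbc_url = "jdbc:postgresql://db.example.com:5432/sales"
    props = {"user": "etl_user", "password": "***", "driver": "org.postgresql.Driver"}
    orders = spark.read.jdbc(jdbc_url, table="orders", properties=props)

    # Land the raw extract in the Hadoop environment for later analysis.
    orders.write.mode("overwrite").parquet("hdfs:///data/raw/orders")

    # The "extra, customized processing" step that Sqoop does not offer:
    # arbitrary DataFrame transformations between extract and load.
    summary = (
        orders.filter(F.col("status") == "shipped")
              .groupBy("region")
              .agg(F.sum("amount").alias("shipped_amount"))
    )

    # Export the result back to a relational database (the `sqoop export` analogue).
    summary.write.jdbc(jdbc_url, table="order_summary", mode="overwrite", properties=props)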
If you need a managed big data platform with native integration with the highly optimized Apache Spark engine and with MLflow, go for the Databricks Lakehouse Platform. The Databricks Lakehouse Platform is a breeze to use, and analytics capabilities are supported out of the box. You will find it a bit difficult to manage code in notebooks at first, but you will get used to it soon.
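As a rough illustration of the MLflow integration mentioned above, a notebook cell can log parameters, metrics, and a model to a tracked experiment. The experiment path, model, and metric below are assumed examples for the sketch, not details from the review.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical workspace path for the experiment.
    mlflow.set_experiment("/Shared/demo-experiments")

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    with mlflow.start_run(run_name="baseline-logreg"):
        model = LogisticRegression(max_iter=200).fit(X_train, y_train)
        mlflow.log_param("max_iter", 200)
        mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
        mlflow.sklearn.log_model(model, "model")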
Pros

Apache Sqoop
- Provides generalized JDBC extensions to migrate data between most database systems.
- Generates Java classes upon reading database records, for use in other code that utilizes Hadoop's client libraries.
- Allows for both import and export.

Databricks Lakehouse Platform
- Processes raw data in our One Lake (S3) environment into relational tables and views.
- Lets us share notebooks with our business analysts so that they can use the queries and generate value from the data.
- Lets us try out PySpark and Spark SQL queries on raw data before using them in our Spark jobs (a sketch of this appears after the Usability section below).
- Makes modern ETL operations easy.
- Provides access mechanisms for different sets of customers.

Cons

Apache Sqoop
- Sqoop2 development seems to have stalled. I have set it up outside of a Cloudera CDH installation, and I actually prefer its "Sqoop Server" model to the CLI-client-only model of Sqoop1. It works especially well in a microservices environment, where there is only one place to maintain the JDBC drivers that Sqoop uses.

Databricks Lakehouse Platform
- Connecting local code in Visual Studio Code to a Databricks Lakehouse Platform cluster so the code runs on the cluster: the old databricks-connect approach has many bugs and is hard to set up, and the new Databricks extension for Visual Studio Code doesn't let developers debug their code line by line (we can only run it). Maybe offer a dedicated Databricks IDE for local development.
- Visualization in MLflow experiments could be enhanced.

Usability

It is an amazing platform for designing experiments and delivering deep-dive analyses that require highly complex queries, and its shared workspaces let you share information and insights across the company while keeping them secure. In terms of graph generation and interaction, the UI and UX could be improved.
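As an illustration of the raw-data-to-tables workflow listed under the Databricks pros, here is a minimal PySpark sketch. The S3 path, view, table, and column names are hypothetical placeholders, not details from the review.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks a SparkSession named `spark` already exists; building one here
    # keeps the sketch self-contained.
    spark = SparkSession.builder.appName("raw-to-relational").getOrCreate()

    # Hypothetical raw JSON events landed in S3 (the "One Lake" environment).
    raw = spark.read.json("s3://example-one-lake/raw/events/")

    # Light cleanup before exposing the data to analysts.
    cleaned = (
        raw.withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])
    )

    # Register a temporary view so the same data can be explored with Spark SQL
    # before the logic is promoted into a scheduled Spark job.
    cleaned.createOrReplaceTempView("events_clean")
    daily = spark.sql(
        "SELECT event_date, COUNT(*) AS events FROM events_clean GROUP BY event_date"
    )

    # Persist as a managed table that analyst notebooks can query directly.
    daily.write.mode("overwrite").saveAsTable("analytics.daily_event_counts")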
Support Rating

Some of the best customer and technology support that I have ever experienced in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching some limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with their customers and helps them with any given challenge.
Alternatives Considered

Sqoop comes preinstalled on the major Hadoop vendor distributions as the recommended product for importing data from relational databases. The ability to extend it with additional JDBC drivers makes it very flexible for the environment it is installed in. Spark also has a useful JDBC reader, can manipulate data in more ways than Sqoop, and can load into many systems other than Hadoop. Kafka Connect JDBC is more for streaming database updates, using tools such as Oracle GoldenGate or Debezium. StreamSets and Apache NiFi both provide a more "flow-based programming" approach to graphically laying out connectors between various systems, including JDBC and Hadoop.

Compared to Synapse and Snowflake, Databricks provides a much better development experience and deeper configuration capabilities. It works out of the box but still allows intricate customisation of the environment. I find Databricks very flexible and resilient at the same time, while Synapse and Snowflake feel more limited in terms of configuration and connectivity to external tools.
Return on Investment

Apache Sqoop
- When combined with Cloudera's HUE, it can enable non-technical users to easily import relational data into Hadoop.
- Being able to manipulate large datasets in Hadoop and then load them into a kind of "materialized view" in an external database system has yielded great insights into the Hadoop data lake without continuously running large batch jobs.
- Sqoop isn't very user-friendly for those uncomfortable with a CLI.

Databricks Lakehouse Platform
- The ability to spin up a big data platform with little infrastructure overhead lets us focus on business value rather than administration.
- Databricks can terminate or time out idle instances, which helps manage cost.
- The ability to quickly and easily handle data scenarios that are typically hard to build is a strength.