Apache Sqoop is a tool used with Hadoop to transfer data between Apache Hadoop and other structured data stores, such as relational databases.
Azure HDInsight
Score 4.0 out of 10
HDInsight is an implementation of the Apache Hadoop technology stack on the Microsoft Azure cloud platform; it is based on the Hortonworks Hadoop distribution. Microsoft Azure HDInsight includes implementations of Apache Spark, HBase, Storm, Pig, Hive, Sqoop, Oozie, Ambari, etc. It also integrates with business intelligence (BI) tools such as Power BI, Excel, SQL Server Analysis Services, and SQL Server Reporting Services.
Sqoop is great for sending data between a JDBC-compliant database and a Hadoop environment. Sqoop is built for those who need a few simple CLI options to import a selection of database tables into Hadoop, do large-dataset analysis that could not commonly be done on that database system due to resource constraints, and then export the results back into that database (or another). Sqoop falls short when there needs to be some extra, customized processing between the database extract and the Hadoop load, in which case Apache Spark's JDBC utilities might be preferred, as in the sketch below.
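To make the "custom processing between extract and load" case concrete, here is a minimal PySpark sketch of the pattern the reviewer is pointing at. The connection URL, table name, credentials, partition column, and output path are hypothetical placeholders, not anything prescribed by Sqoop or this reviewer.

```python
# Minimal PySpark sketch: extract over JDBC, apply a custom transformation,
# then land the result in Hadoop. All connection details are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("jdbc-extract-with-processing")
         .getOrCreate())

# Extract: read a table over JDBC (the part Sqoop would do with `sqoop import`).
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://db-host:5432/sales")  # hypothetical
          .option("dbtable", "public.orders")                     # hypothetical
          .option("user", "etl_user")
          .option("password", "etl_password")
          .option("numPartitions", 8)             # parallel reads, similar in spirit to Sqoop's -m flag
          .option("partitionColumn", "order_id")
          .option("lowerBound", 1)
          .option("upperBound", 10_000_000)
          .load())

# Custom processing step that plain Sqoop has no hook for.
cleaned = (orders
           .filter(F.col("status") != "CANCELLED")
           .withColumn("order_date", F.to_date("order_ts")))

# Load: write the result into the Hadoop environment (path is hypothetical).
cleaned.write.mode("overwrite").parquet("hdfs:///data/landing/orders")
```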
Well suited: a small-to-mid-sized company with no immediate plans to grow the volume of its data processing, and that can afford long response times from support. It also helps if you prefer not to get hands-on with Linux and Spark configuration. In fact, things can go much faster if you also work with the bundled Jupyter. And if you need to perform diagnostics and/or administrative tasks, it is full of tools for finding and understanding the root cause. Ideal for non-experts. Less appropriate: a big data company, intensive on-demand cluster creation, mission-critical workloads, cost reduction, the latest versions of libraries, or sophisticated customizations.
Sqoop2 development seems to have stalled. I have set it up outside of a Cloudera CDH installation, and I actually prefer its "Sqoop Server" model to the CLI-only client model of Sqoop1. This works especially well in a microservices environment, where there would be only one place to maintain the JDBC drivers used for Sqoop.
The only problem I have come across is that, when loading large volumes of data, I sometimes get an error message; I assume this means something is corrupt from within. I would love a way for this to be resolved without having to start over.
Azure HDInsight is usable on top of Azure Data Lake and gives us the benefit of analyzing large-scale data workloads in Hadoop. Usability and support from Microsoft are outstanding.
Inexpert, isolated teams... not a good fit for supporting an excessively complex platform. Troubleshooting a complex problem can take weeks or months. A lot of time was lost stuck with MindTree before the case was finally escalated to Microsoft!
Sqoop comes preinstalled on the major Hadoop vendor distributions as the recommended product to import data from relational databases. The ability to extend it with additional JDBC drivers makes it very flexible for the environment it is installed within.
Spark also has a useful JDBC reader, can manipulate data in more ways than Sqoop, and can load data into many systems other than just Hadoop.
Kafka Connect JDBC is more for streaming database updates using tools such as Oracle GoldenGate or Debezium.
StreamSets and Apache NiFi both provide a more "flow-based programming" approach to graphically laying out connectors between various systems, including JDBC and Hadoop.
At this time I have not used any other similar products... I am open to it, but Azure HDInsight and its components really work well for our organization.
When combined with Cloudera's HUE, it can enable non-technical users to easily import relational data into Hadoop.
Being able to manipulate large datasets in Hadoop, and then load them into a type of "materialized view" in an external database system, has yielded great insights into the Hadoop data lake without continuously running large batch jobs.
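As an illustration of the round trip this reviewer describes: aggregate in the data lake, then push a small summary table back to an external database as a "materialized view". The reviewer does this with Sqoop's export; the hedged sketch below expresses the same idea with Spark's JDBC writer to stay in one example language, and every path, table, and credential shown is hypothetical.

```python
# Hedged sketch of the "materialized view" pattern: heavy aggregation in the
# data lake, small result pushed to an external reporting database.
# Paths, tables, and credentials are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("export-materialized-view").getOrCreate()

# Read the already-landed data from the data lake (hypothetical path).
orders = spark.read.parquet("hdfs:///data/landing/orders")

# Aggregation that would strain the source database if run there.
daily_revenue = (orders
                 .groupBy("order_date")
                 .agg(F.sum("amount").alias("revenue"),
                      F.countDistinct("customer_id").alias("customers")))

# Write the small summary back out as a queryable "materialized view" table.
(daily_revenue.write.format("jdbc")
 .option("url", "jdbc:postgresql://db-host:5432/reporting")  # hypothetical
 .option("dbtable", "public.daily_revenue_mv")               # hypothetical
 .option("user", "etl_user")
 .option("password", "etl_password")
 .mode("overwrite")
 .save())
```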
Sqoop isn't very user-friendly for those uncomfortable with a CLI.