Apache Sqoop is a tool used with Hadoop to transfer bulk data between Apache Hadoop and structured data stores such as relational databases.
N/A
Cloudera Distribution Hadoop (CDH)
Score 6.9 out of 10
N/A
CDH is Cloudera’s 100% open source platform distribution, including Apache Hadoop and built specifically to meet enterprise demands. CDH delivers everything needed for enterprise use right out of the box. By integrating Hadoop with more than a dozen other critical open source projects, Cloudera has created a functionally advanced system that helps you perform end-to-end Big Data workflows.
Sqoop is great for sending data between a JDBC-compliant database and a Hadoop environment. Sqoop is built for those who need a few simple CLI options to import a selection of database tables into Hadoop, run large-dataset analysis that could not commonly be done on that database system due to resource constraints, and then export the results back into that database (or another one). Sqoop falls short when some extra, customized processing is needed between the database extract and the Hadoop load, in which case Apache Spark's JDBC utilities might be preferred. (A rough sketch of the import/analyze/export round trip is shown below.)
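The workflow described above boils down to a couple of Sqoop commands. The sketch below wraps them in Python purely for illustration; the connection string, credentials, table names, and HDFS paths are hypothetical placeholders, not details from the review.

    # Minimal sketch of the import -> analyze in Hadoop -> export round trip.
    import subprocess

    jdbc_url = "jdbc:mysql://db.example.com:3306/sales"  # hypothetical source database

    # Pull one table from the database into HDFS.
    subprocess.run([
        "sqoop", "import",
        "--connect", jdbc_url,
        "--username", "etl_user", "--password-file", "/user/etl/.db_password",
        "--table", "orders",
        "--target-dir", "/data/raw/orders",
        "--num-mappers", "4",
    ], check=True)

    # ...large-scale analysis runs in Hadoop (Hive, MapReduce, Spark)...

    # Push the aggregated results back out to a relational table.
    subprocess.run([
        "sqoop", "export",
        "--connect", jdbc_url,
        "--username", "etl_user", "--password-file", "/user/etl/.db_password",
        "--table", "order_summaries",
        "--export-dir", "/data/results/order_summaries",
    ], check=True)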
Cloudera Distribution Hadoop (CDH) does a lot of things really well - especially on the analytical front. That being said, the product is quite expensive. There are seemingly numerous applications that do the same thing at the functional level and are much more cost-efficient for enterprise teams. If I were recommending this to a colleague, I would let them know the product will absolutely be able to get the job done for their use case, but there are more efficient options.
Sqoop2 development seems to have stalled. I have set it up outside of a Cloudera CDH installation, and I actually prefer its "Sqoop Server" model to the CLI-only client that is Sqoop1. It works especially well in a microservices environment, where there is only one place to maintain the JDBC drivers that Sqoop uses.
Sqoop comes preinstalled on the major Hadoop vendor distributions as the recommended tool for importing data from relational databases. The ability to extend it with additional JDBC drivers makes it very flexible for whatever environment it is installed in (see the sketch below).
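As a hedged illustration of that extensibility, the sketch below shows the common pattern of dropping an extra JDBC driver jar where Sqoop can find it and naming the driver class explicitly with --driver. The jar path, lib directory, driver class, and connection string are assumptions for the example, not details from the review.

    # Sketch: point Sqoop at a JDBC driver that is not bundled with the distribution.
    import shutil
    import subprocess

    # The location of Sqoop's lib directory varies by distribution; this path is illustrative.
    shutil.copy("/tmp/custom-jdbc-driver.jar", "/var/lib/sqoop/")

    subprocess.run([
        "sqoop", "import",
        "--driver", "com.example.jdbc.Driver",                 # explicit driver class
        "--connect", "jdbc:example://warehouse.example.com/db",
        "--table", "inventory",
        "--target-dir", "/data/raw/inventory",
    ], check=True)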
Spark also has a useful JDBC reader, can manipulate data in more ways than Sqoop, and can load the results into many systems other than Hadoop (a brief example follows).
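For comparison, here is a brief PySpark sketch of that alternative: read a table over JDBC, apply a custom transformation (the step Sqoop cannot do in the middle), and write the result out. The connection details, table, and output path are made up for the example, and the appropriate JDBC driver jar would need to be on Spark's classpath (e.g. via spark.jars).

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("jdbc-example").getOrCreate()

    # Extract: read a table straight from the database over JDBC.
    orders = (spark.read.format("jdbc")
              .option("url", "jdbc:postgresql://db.example.com:5432/sales")
              .option("dbtable", "orders")
              .option("user", "etl_user")
              .option("password", "secret")
              .load())

    # Transform: arbitrary custom processing between extract and load.
    summary = orders.groupBy("region").count()

    # Load: write to HDFS here, or swap in another sink (S3, another JDBC database, etc.).
    summary.write.mode("overwrite").parquet("hdfs:///data/summaries/orders_by_region")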
Kafka Connect JDBC is more for streaming database updates using tools such as Oracle GoldenGate or Debezium.
Streamsets and Apache NiFi both provide a more "flow based programming" approach to graphically laying out connectors between various systems, including JDBC and Hadoop.
In terms of functionality there's not much difference; both get the job done. Amazon was more cost-efficient for our team, but this could vary depending on the size of the business. One thing I did notice was that Cloudera seemed to manage and spin up our deployments faster than AWS.
When combined with Cloudera's HUE, it can enable non-technical users to easily import relational data into Hadoop.
Being able to manipulate large datasets in Hadoop and then load them into a kind of "materialized view" in an external database system has yielded great insights into the Hadoop data lake without continuously running large batch jobs (a sketch of that export step is shown below).
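A hedged sketch of that "materialized view" pattern: once a batch job in Hadoop has produced a summary dataset, a Sqoop export with an update key keeps an external reporting table refreshed in place, so downstream tools can query it without touching the data lake. The connection string, table, columns, and paths below are placeholders, not details from the review.

    import subprocess

    # Export the Hadoop-side summary into an external table that acts as a
    # "materialized view"; --update-key with allowinsert makes reruns upsert rows.
    subprocess.run([
        "sqoop", "export",
        "--connect", "jdbc:postgresql://reporting.example.com:5432/marts",
        "--username", "reporter", "--password-file", "/user/etl/.reporting_password",
        "--table", "daily_sales_summary",
        "--export-dir", "/data/marts/daily_sales_summary",
        "--update-key", "sale_date,region",
        "--update-mode", "allowinsert",
    ], check=True)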
Sqoop isn't very user-friendly for those uncomfortable with a CLI.