Apache Sqoop is a tool for transferring data between Apache Hadoop and structured data stores such as relational databases.
Score: N/A
Pricing: N/A

Cloudera Manager
Score 9.9 out of 10
Cloudera Manager is a management application for Apache Hadoop and the enterprise data hub, from Cloudera. Its automated wizards let users quickly deploy a cluster, no matter what the scale or the deployment environment, complete with intelligent, system-based default settings.
Pricing: $0.07 per hour, per CCU
Pricing

Apache Sqoop
Editions & Modules: No answers on this topic

Cloudera Manager
Editions & Modules (hourly rate per CCU):
Data Hub: $0.04/CCU
Data Engineering: $0.07/CCU
Data Warehouse: $0.07/CCU
Operational Database: $0.08/CCU
Flow Management on Data Hub: $0.15/CCU
Machine Learning: $0.17/CCU
DataFlow: $0.30/CCU
Pricing Offerings (Apache Sqoop / Cloudera Manager)
Free Trial: No / No
Free/Freemium Version: No / No
Premium Consulting/Integration Services: No / No
Entry-level Setup Fee: No setup fee / No setup fee
Additional Details: Apache Sqoop: none. Cloudera Manager: Pricing is per Cloudera Compute Unit (CCU), which is a combination of Core and Memory. CCU prices shown for each service are estimates and may vary depending on actual instance types. The prices reflected do not include infrastructure, networking, and other related costs, which will vary depending on the services you choose and your cloud service provider.
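As a rough illustration using the estimates above (the usage figures are hypothetical): a Data Engineering workload averaging 10 CCUs for 100 hours in a month would incur about 10 × $0.07 × 100 = $70 in CCU charges, before infrastructure and networking costs.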
Sqoop is great for sending data between a JDBC-compliant database and a Hadoop environment. Sqoop is built for those who need a few simple CLI options to import a selection of database tables into Hadoop, do large dataset analysis that could not commonly be done on that database system due to resource constraints, and then export the results back into that database (or another). Sqoop falls short when extra, customized processing is needed between the database extract and the Hadoop load, in which case Apache Spark's JDBC utilities might be preferred.
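A minimal sketch of that import step, assuming a MySQL source; the connection string, credentials, table, and directory names below are placeholders, and exact options vary by Sqoop version:

sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl -P \
  --table orders \
  --where "order_date >= '2023-01-01'" \
  --target-dir /data/raw/orders \
  --num-mappers 4
# The imported files can then be analyzed with Hive, MapReduce, or Spark,
# and the results pushed back to a database with sqoop export.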
It is well suited for customers who feel more comfortable using a GUI. It is less appropriate for developers or engineers who are comfortable with the command line.
Sqoop2 development seems to have stalled. I have set it up outside of a Cloudera CDH installation, and I actually prefer its "Sqoop Server" model to the CLI-only client model of Sqoop1. This works especially well in a microservices environment, where there is only one place to maintain the JDBC drivers used by Sqoop.
Sqoop comes preinstalled on the major Hadoop vendor distributions as the recommended product to import data from relational databases. The ability to extend it with additional JDBC drivers makes it very flexible for the environment it is installed within.
Spark also has a useful JDBC reader, can manipulate data in more ways than Sqoop, and can load the results into many more systems than just Hadoop.
Kafka Connect JDBC is more for streaming database updates using tools such as Oracle GoldenGate or Debezium.
StreamSets and Apache NiFi both provide a more "flow-based programming" approach to graphically laying out connectors between various systems, including JDBC and Hadoop.
I have not used any competitors, such as Hortonworks, because Cloudera Manager just works and meets all my customers' needs. I have only deployed Hadoop using the command line otherwise, which is not easy to use and manage.
When combined with Cloudera's HUE, it can enable non-technical users to easily import relational data into Hadoop.
Being able to manipulate large datasets in Hadoop, and then load them into a type of "materialized view" in an external database system, has yielded great insights into the Hadoop data lake without continuously running large batch jobs.
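A minimal sketch of that export pattern (the JDBC URL, table, and HDFS path are hypothetical): once a batch job has written its summarized output to HDFS, a single Sqoop command can load it into the external table that acts as the "materialized view":

sqoop export \
  --connect jdbc:postgresql://reports.example.com/analytics \
  --username etl -P \
  --table daily_sales_mv \
  --export-dir /data/marts/daily_sales \
  --input-fields-terminated-by '\t'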
Sqoop isn't very user-friendly for those uncomfortable with a CLI.
Cloudera Manager has allowed our organization to deploy Apache Hadoop to operations more quickly and with less training than using the command line exclusively.