Apache Sqoop is a tool for use with Hadoop, used to transfer data between Apache Hadoop and other, structured data stores.
Oracle Autonomous Data Warehouse
Score 8.3 out of 10
Oracle Autonomous Data Warehouse is optimized for analytic workloads, including data marts, data warehouses, data lakes, and data lakehouses. With Autonomous Data Warehouse, data scientists, business analysts, and nonexperts can discover business insights using data of any size and type. The solution is built for the cloud and optimized using Oracle Exadata.
VMware Tanzu Data Services
Score 6.0 out of 10
Tanzu Data Services is a family of data-driven solutions built to store, process, and query critical data resources in real-time and at massive scale, both on-premises and in the multi-cloud world.
Pricing
Editions & Modules: No answers on this topic for Apache Sqoop, Oracle Autonomous Data Warehouse, or Tanzu Data Services (Greenplum, GemFire, RabbitMQ, Tanzu SQL).
Pricing Offerings                          Apache Sqoop    Oracle Autonomous Data Warehouse    VMware Tanzu Data Services
Free Trial                                 No              No                                  No
Free/Freemium Version                      No              No                                  No
Premium Consulting/Integration Services    No              No                                  No
Entry-level Setup Fee                      No setup fee    No setup fee                        No setup fee
Additional Details                         —               —                                   —
More Pricing Information
Community Pulse
Apache Sqoop
Oracle Autonomous Data Warehouse
Tanzu Data Services (Greenplum, GemFire, RabbitMQ, Tanzu SQL)
Best Alternatives
Apache Sqoop
Oracle Autonomous Data Warehouse
Tanzu Data Services (Greenplum, GemFire, RabbitMQ, Tanzu SQL)
Likelihood to Recommend
Apache
Sqoop is great for sending data between a JDBC-compliant database and a Hadoop environment. Sqoop is built for those who need a few simple CLI options to import a selection of database tables into Hadoop, run large-dataset analysis that could not commonly be done on that database system due to resource constraints, and then export the results back into that database (or another one). Sqoop falls short when some extra, customized processing is needed between database extraction and Hadoop loading, in which case Apache Spark's JDBC utilities might be preferred.
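As a rough, hedged sketch of that import-analyze-export workflow (the JDBC URL, credentials, table names, and HDFS paths below are placeholder assumptions, not details from this page), a typical round trip can be driven from Python like this:

    # Sketch of a Sqoop round trip: import a table into HDFS, then export results
    # back to the database. All connection details, tables, and paths are
    # hypothetical placeholders.
    import subprocess

    JDBC_URL = "jdbc:mysql://db.example.com/sales"  # assumed source database

    # Import the "orders" table into HDFS, splitting the work across 4 mappers.
    subprocess.run([
        "sqoop", "import",
        "--connect", JDBC_URL,
        "--username", "etl_user",
        "--password-file", "/user/etl/.db_password",
        "--table", "orders",
        "--target-dir", "/data/raw/orders",
        "--num-mappers", "4",
    ], check=True)

    # After analysis in Hadoop, push the result set back out to a database table.
    subprocess.run([
        "sqoop", "export",
        "--connect", JDBC_URL,
        "--username", "etl_user",
        "--password-file", "/user/etl/.db_password",
        "--table", "order_summaries",
        "--export-dir", "/data/results/order_summaries",
    ], check=True)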
I would recommend Oracle Autonomous Data Warehouse to someone looking to fully automate the transfer of data, especially in a warehouse scenario, though the elasticity of the suite on offer makes it applicable in other scenarios, not just warehouses.
If you need to run ML algorithms, learning techniques, or mathematical calculations on large amounts of heterogeneous data, VMware Tanzu Data Services will be ideal. It will be really simple to set up, particularly if you choose AWS as your integrated cloud provider. However, if you're working with smaller data volumes, on the order of gigabytes, it can be overkill.
Very easy and fast to load data into the Oracle Autonomous Data Warehouse (a minimal loading sketch follows this list).
Exceptionally fast retrieval of data when joining a 100-million-row table with a billion-row table; in addition, the size of the database was reduced by a factor of 10 thanks to how Oracle stores and organises data and indexes.
Flexibility to scale CPU up and down on the fly when needed, and to simply stop the instance when it is not needed so you don't get charged while it is not running.
It is always patched and always available, and you can add storage dynamically as you need it.
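On the loading point in the first item above, here is a minimal sketch of one way to bulk-insert rows into an Autonomous Data Warehouse table with the python-oracledb driver; the credentials, wallet paths, DSN alias, and target table are assumptions for illustration, not details from these reviews.

    # Sketch: bulk-loading rows into an Autonomous Data Warehouse table using the
    # python-oracledb thin driver with a downloaded wallet. Credentials, the DSN
    # alias, wallet paths, and the target table are hypothetical placeholders.
    import oracledb

    conn = oracledb.connect(
        user="etl_user",
        password="********",
        dsn="mydw_high",                      # TNS alias from the wallet's tnsnames.ora
        config_dir="/opt/oracle/wallet",      # directory containing the unzipped wallet
        wallet_location="/opt/oracle/wallet",
        wallet_password="********",
    )

    rows = [(1, "EMEA", 1200.50), (2, "APAC", 980.00)]
    with conn.cursor() as cur:
        # executemany sends the whole batch of binds in one call.
        cur.executemany(
            "INSERT INTO sales_fact (id, region, amount) VALUES (:1, :2, :3)",
            rows,
        )
    conn.commit()
    conn.close()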
Sqoop2 development seems to have stalled. I have set it up outside of a Cloudera CDH installation, and I actually prefer its "Sqoop Server" model over the CLI-only client that is Sqoop1. This works especially well in a microservices environment, where there would be only one place to maintain the JDBC drivers to use for Sqoop.
It is a very expensive product, though there are good reasons why it is expensive.
The product should support more cloud-based services. When we made the decision to buy the product (which was 20 years ago), there was no such thing to consider, but moving to a cloud-based data warehouse may promise more scalability, agility, and cost reduction. A new version of the Data Warehouse has come out along the way, but it looks a bit behind compared to other competitors.
Our healthcare data consists of 30% coded data (such as ICD-10 / SNOMED CT), but the rest is narrative (such as clinical notes). Oracle is the best for warehousing standardized data, but not a good choice when considering unstructured data, or a mix of the two.
Does not require continuous attention from the DBA; autonomous features allow the database to perform most of the regular admin tasks without the need for human intervention.
Allows you to integrate multiple data sources into a central data warehouse and explore the stored information with different analytic and reporting tools.
Understanding Oracle Cloud Infrastructure is really simple, and Autonomous databases are even simpler. Whether to use shared or dedicated infrastructure is one of the few things you need to consider when you start provisioning your Oracle Autonomous Data Warehouse.
Sqoop comes preinstalled on the major Hadoop vendor distributions as the recommended product to import data from relational databases. The ability to extend it with additional JDBC drivers makes it very flexible for the environment it is installed within.
Spark also has a useful JDBC reader; it can manipulate data in more ways than Sqoop and can write to many more systems than just Hadoop (see the sketch after this list).
Kafka Connect JDBC is more for streaming database updates using tools such as Oracle GoldenGate or Debezium.
StreamSets and Apache NiFi both provide a more "flow-based programming" approach to graphically laying out connectors between various systems, including JDBC and Hadoop.
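To make the Spark point above concrete, here is a minimal sketch of reading a table over JDBC with PySpark, applying a transformation, and landing it as Parquet; the JDBC URL, credentials, table, partitioning bounds, and output path are assumed placeholders.

    # Sketch: PySpark reading a relational table over JDBC in parallel, applying a
    # transformation, and writing Parquet to HDFS. The URL, credentials, table,
    # bounds, and output path are hypothetical placeholders; a matching JDBC
    # driver jar must be on the Spark classpath.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("jdbc-import-sketch").getOrCreate()

    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:mysql://db.example.com/sales")
        .option("dbtable", "orders")
        .option("user", "etl_user")
        .option("password", "********")
        .option("partitionColumn", "id")      # split the read across executors
        .option("lowerBound", "1")
        .option("upperBound", "1000000")
        .option("numPartitions", "4")
        .load()
    )

    # Custom processing between extract and load, something plain Sqoop lacks.
    orders.filter("amount > 0").write.mode("overwrite").parquet("/data/raw/orders")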
As I mentioned, I have also worked with Amazon Redshift, but it is not as versatile as Oracle Autonomous Data Warehouse and does not provide as large a variety of products. Oracle Autonomous Data Warehouse is also more reliable than Amazon Redshift, which is why I have chosen it.
When combined with Cloudera's HUE, it can enable non-technical users to easily import relational data into Hadoop.
Being able to manipulate large datasets in Hadoop, and then load them into a type of "materialized view" in an external database system, has yielded great insights into the Hadoop data lake without continuously running large batch jobs.
Sqoop isn't very user-friendly for those uncomfortable with a CLI.
Overall, the business objectives of all of our clients have been met with Oracle Data Warehouse. Users were able to successfully carry out all of the required analysis using the warehouse data.
Using a 3-tier architecture with the Oracle Data Warehouse at the back end, the mid-tier has been integrated well. This is a big plus in providing the necessary tools for end users of the data warehouse to carry out their analysis.
All of the various BI products (OBIEE, Cognos, etc.) are able to use and exploit the built-in analytic functionality of the Oracle Data Warehouse.