What users are saying about Amazon EMR and Apache Sqoop

Amazon EMR: Score 8.2 out of 10 (25 Ratings)
Apache Sqoop: Score 8.7 out of 10 (4 Ratings)
(trScore algorithm: https://www.trustradius.com/static/about-trustradius-scoring)


Likelihood to Recommend

Amazon EMR

Amazon Elastic MapReduce is useful when two conditions are met: first, you plan to use multiple big data tools simultaneously to analyze big data sets; and second, you need a tool that simplifies managing those big data tools. If these two conditions are met, MapReduce does a great job. The user interface is simple, the program eliminates some programming requirements, and the software makes setting up big data analyses much easier. With these benefits acknowledged, MapReduce is not a good tool for "small" data analyses, given that there are other tools that do the job quicker and produce much more professional output. If you're on the fence, try out MapReduce alongside competing "small" data tools and see if you really need big data software.
Thomas Young
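
As an illustration of what "setting up a big data analysis" on EMR can look like, here is a minimal sketch of launching a cluster that bundles several tools at once. It assumes boto3, and the cluster name, region, release label, and instance types are placeholders rather than values from the review.

    import boto3

    # Hypothetical example: start an EMR cluster preloaded with Hadoop, Spark, and Hive.
    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="example-analysis-cluster",      # placeholder cluster name
        ReleaseLabel="emr-6.15.0",            # pick a current EMR release label
        Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}, {"Name": "Hive"}],
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            # Shut the cluster down automatically once submitted steps finish.
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        JobFlowRole="EMR_EC2_DefaultRole",    # default EC2 instance profile for EMR
        ServiceRole="EMR_DefaultRole",        # default EMR service role
    )
    print("Started cluster:", response["JobFlowId"])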

Apache Sqoop

Sqoop is great for moving data between a JDBC-compliant database and a Hadoop environment. Sqoop is built for those who need a few simple CLI options to import a selection of database tables into Hadoop, run large-dataset analysis that could not commonly be done in that database system due to resource constraints, and then export the results back into that database (or another one). Sqoop falls short when there needs to be some extra, customized processing between the database extract and the Hadoop load, in which case Apache Spark's JDBC utilities might be preferred.
Jordan Moore
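
As a rough sketch of the import, analyze, export round trip described above, the following drives Sqoop's CLI from Python. The JDBC URL, credentials, table names, and HDFS paths are placeholders, not values from the review.

    import subprocess

    jdbc_url = "jdbc:mysql://db.example.com/sales"   # hypothetical source database

    # Import one table into HDFS with a handful of CLI options.
    subprocess.run([
        "sqoop", "import",
        "--connect", jdbc_url,
        "--username", "etl_user",
        "--password-file", "/user/etl/.db_password",
        "--table", "orders",
        "--target-dir", "/data/raw/orders",
        "--num-mappers", "4",
    ], check=True)

    # ... run the heavy analysis in Hadoop/Spark, writing results to /data/out/summary ...

    # Export the results back into a (possibly different) database table.
    subprocess.run([
        "sqoop", "export",
        "--connect", jdbc_url,
        "--username", "etl_user",
        "--password-file", "/user/etl/.db_password",
        "--table", "order_summary",
        "--export-dir", "/data/out/summary",
    ], check=True)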

Pros

  • Amazon Elastic MapReduce works well for managing analyses that use multiple tools, such as Hadoop and Spark. If it were not for the fact that we use multiple tools, there would be less need for MapReduce.
  • MapReduce is always on. I've never had a problem getting data analyses to run on the system. It's simple to set up data mining projects.
  • Amazon Elastic MapReduce has no problems dealing with very large data sets. It processes them just fine. With that said, the outputs don't come instantaneously. It takes time.
Thomas Young
  • Provides generalized JDBC extensions to migrate data between most database systems
  • Generates Java classes upon reading database records for use in other code utilizing Hadoop's client libraries (see the sketch after this list)
  • Allows for both import and export features
Jordan Moore
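
As a hedged sketch of the class-generation point in the list above: sqoop codegen emits a Java source file that maps one table's records, which other code built on Hadoop's client libraries can reuse. The connection details, table name, and output directory are placeholders.

    import subprocess

    # Generate orders.java (a record class for the "orders" table) without importing data.
    subprocess.run([
        "sqoop", "codegen",
        "--connect", "jdbc:mysql://db.example.com/sales",
        "--username", "etl_user",
        "--table", "orders",
        "--outdir", "generated-src/",   # where the generated Java source is written
    ], check=True)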

Cons

  • Cost overhead is a bit high
  • Limited versions of frameworks that can be used
Anonymous reviewer
  • Sqoop2 development seems to have stalled. I have set it up outside of a Cloudera CDH installation, and I actually prefer its "Sqoop Server" model over the CLI-only client model of Sqoop1. This works especially well in a microservices environment, where there would be only one place to maintain the JDBC drivers used by Sqoop.
Jordan Moore

Alternatives Considered

The alternatives to EMR are mainly the Hadoop distributions owned by the three companies above. I have not used the other distributions, so it is difficult to comment, but the general tradeoff is that, at the cost of a longer setup time and more infrastructure management, you get more flexible versioning and potentially faster access to newer versions of some frameworks such as Spark.
Anonymous reviewer
  • Sqoop comes preinstalled on the major Hadoop vendor distributions as the recommended product to import data from relational databases. The ability to extend it with additional JDBC drivers makes it very flexible for the environment it is installed within.
  • Spark also has a useful JDBC reader; it can manipulate data in more ways than Sqoop and can load it into many systems other than just Hadoop (see the sketch after this list).
  • Kafka Connect JDBC is more for streaming database updates using tools such as Oracle GoldenGate or Debezium.
  • StreamSets and Apache NiFi both provide a more "flow-based programming" approach to graphically laying out connectors between various systems, including JDBC and Hadoop.
Jordan Moore
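
For comparison, here is a minimal sketch of the Spark JDBC alternative mentioned in the list above, for cases where custom processing is needed between extract and load. The URL, credentials, partitioning bounds, and paths are placeholders, and it assumes the appropriate JDBC driver jar is already on the Spark classpath.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("jdbc-extract-example").getOrCreate()

    # Read the source table in parallel over JDBC.
    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:mysql://db.example.com/sales")
        .option("dbtable", "orders")
        .option("user", "etl_user")
        .option("password", "example-password")
        .option("partitionColumn", "id")
        .option("lowerBound", "0")
        .option("upperBound", "1000000")
        .option("numPartitions", "4")
        .load()
    )

    # An arbitrary transformation step that Sqoop alone could not express.
    cleaned = orders.filter("status = 'COMPLETE'").withColumnRenamed("amt", "amount")
    cleaned.write.mode("overwrite").parquet("/data/raw/orders_clean")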

Return on Investment

  • It was easy to set up initial versions of Spark on this
  • Still used as our compute platform, as it's easy to manage
  • At times we forgot to shut down clusters and were overcharged (see the sketch after this list)
Anonymous reviewer
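
One way to guard against the forgotten-cluster overcharge mentioned in the list above is to attach an idle auto-termination policy to the cluster. A minimal sketch with boto3 follows; the cluster ID and timeout are placeholders, and it assumes an EMR release that supports auto-termination policies.

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    # Ask EMR to terminate the cluster after it has been idle for one hour.
    emr.put_auto_termination_policy(
        ClusterId="j-EXAMPLE123",                     # placeholder cluster ID
        AutoTerminationPolicy={"IdleTimeout": 3600},  # seconds of idle time
    )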
  • When combined with Cloudera's HUE, it can enable non-technical users to easily import relational data into Hadoop.
  • Being able to manipulate large datasets in Hadoop, and then load them into a kind of "materialized view" in an external database system, has yielded great insights into the Hadoop data lake without continuously running large batch jobs.
  • Sqoop isn't very user-friendly for those uncomfortable with a CLI.
Jordan Moore

Pricing Details

Amazon EMR
Entry-level setup fee: No

Apache Sqoop
Entry-level setup fee: No