Apache Hive is database/data warehouse software that supports data querying and analysis of large datasets stored in the Hadoop distributed file system (HDFS) and other compatible systems, and is distributed under an open source license.
Apache Spark
Score 9.0 out of 10
Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.
Oracle Autonomous Data Warehouse
Score 8.3 out of 10
Oracle Autonomous Data Warehouse is optimized for analytic workloads, including data marts, data warehouses, data lakes, and data lakehouses. With Autonomous Data Warehouse, data scientists, business analysts, and nonexperts can discover business insights using data of any size and type. The solution is built for the cloud and optimized using Oracle Exadata.
Apache Hive is a query language developed by Facebook to query over a large distributed dataset. Apache Hive is a query engine that runs on top of HDFS, so it utilizes the resources of an HDFS Hadoop setup, while Apache Spark is an in-memory compute engine, and that's why [it is] much …
Apache Spark is similar in the sense that it too can be used to query and process large amounts of data through its DataFrame interface. Hive is better for short-term querying while Spark is better for persistent and long-term analysis. Another product is Impala. For our …
To query a huge, distributed dataset, Apache Hive was built by Facebook. Unlike Apache Hive, Apache Spark is an in-memory computation engine, which is why it is significantly quicker than Apache Hive at querying large amounts of data. In contrast to Apache HBase, Apache Hive is …
Verified User
Engineer
Chose Apache Hive
Hive and Spark both sit under the Apache umbrella, hence they share a lot of common features. Hive follows SQL syntax, while Spark has support for the RDD and DataFrame APIs. The DataFrame API supports SQL syntax and also has custom functions to perform the same functionality. Spark is faster and …
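To make that contrast concrete, here is a minimal, hypothetical PySpark sketch (the "sales" table and its columns are invented for illustration) showing the same aggregation written once as Hive-style SQL and once with the DataFrame API, which also leaves room for custom functions:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hive-vs-dataframe").enableHiveSupport().getOrCreate()

# Hive-style SQL over a (hypothetical) Hive table
sql_result = spark.sql("SELECT country, COUNT(*) AS orders FROM sales GROUP BY country")

# The same query expressed through the DataFrame API
df_result = (
    spark.table("sales")
         .groupBy("country")
         .agg(F.count("*").alias("orders"))
)

sql_result.show()
df_result.show()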
One of the major advantages of using Presto, or the main reason why people use Presto (Teradata), is the fact that it can support multiple data sources - something that is lacking in the case of Apache Hive. But still, most people who come from a structured-data background …
Easy to understand, well supported by the community, good documentation. However, it is possible that SAP Business Warehouse could be a good fit, too, even maybe better. I did not have the chance to try it though. We selected Apache Hive because it was far less expensive and …
Hive was one of the first SQL on Hadoop technologies, and it comes bundled with the main Hadoop distributions of HDP and CDH. Since its release, it has gained good improvements, but selecting the right SQL on Hadoop technology requires a good understanding of the strengths and …
For storing bulk amounts of data in a tabular manner where there is no need for a primary key, and where receiving redundant data will not cause a problem. For small amounts of data it still runs MapReduce jobs, so beware. If your intention is to use it as a …
Apache Pig is probably the most direct technology to compare to Hive, and it has several use cases that differ from Hive's. If you want to simplify processing tasks that run using MapReduce, then Apache Pig may be a better tool for the job. However, if you are going to be running many …
Apache Spark is a fast-processing in-memory computing framework. It is 10 times faster than Apache Hadoop. Earlier we were using Apache Hadoop for processing data on disk, but now we have shifted to Apache Spark because of its in-memory computation capability. Also in SAP …
Verified User
Engineer
Chose Apache Spark
Apache Spark has much better performance and features compared with Hive or map/reduce-style solutions. Spark also has many other features for machine learning and streaming.
All the above systems work quite well on big data transformations whereas Spark really shines with its bigger API support and its ability to read from and write to multiple data sources. Using Spark one can easily switch between declarative versus imperative versus functional …
Even with Python, MapReduce requires lengthy code. Combining Python with Apache Spark will not only shorten the code, it will effectively increase the speed of the algorithms. Occasionally I still use MapReduce, but Apache Spark will replace MapReduce very soon. It has many …
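As a rough illustration of the brevity this reviewer is pointing at, the classic word count (a full mapper/reducer pair in hand-written Hadoop MapReduce) fits in a few lines of PySpark; the input path below is a placeholder, not anything from the review:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

counts = (
    sc.textFile("hdfs:///tmp/input.txt")   # placeholder path
      .flatMap(lambda line: line.split())
      .map(lambda word: (word, 1))
      .reduceByKey(lambda a, b: a + b)
)
print(counts.take(10))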
Spark, in comparison to similar technologies, ends up being a one-stop shop. You can achieve so much with this one framework instead of having to stitch and weave multiple technologies from the Hadoop stack, all while getting incredible performance, minimal boilerplate, and …
Apache Pig and Apache Hive provide most of the things Spark provides, but Apache Spark has more features, such as actions and transformations that are easy to code. Spark uses optimization techniques: we can select the driver program and manipulate the DAG (Directed Acyclic Graph). Python …
Spark has primarily replaced my use of writing pure Hadoop MapReduce or Apache Pig jobs for processing data. I like the fact that I can alternate between the main programming languages that I know - Java and Python - and use those to learn the Scala API. Spark also can be …
Software work execution is on a large scale; it is good to use for new projects or organizational changes. Data lineage mapping has always been dubious, but this one has had good results. You can store and synchronize data from different departments; the storage process can be manual, but it is best automated.
Well suited: for most local runs of datasets and non-prod systems - scalability is not a problem at all. Including data from multiple types of data sources is an added advantage. MLlib is a decently nice built-in library that can be used for most ML tasks. Less appropriate: we had to work on a RecSys where the music dataset we used was around 300+ GB in size. We faced memory-based issues, and a few times we also got memory errors. The MLlib library also does not have support for advanced analytics or deep-learning frameworks. Understanding the internals of how Apache Spark works is very difficult for beginners.
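For context on the MLlib remark above, this is the kind of basic task it covers well; a minimal, made-up example (toy data and column names, nothing from the review):

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Toy data: two numeric features and a binary label
df = spark.createDataFrame(
    [(0.0, 1.2, 0.0), (1.5, 0.3, 1.0), (0.7, 0.9, 0.0), (2.1, 0.1, 1.0)],
    ["f1", "f2", "label"],
)

features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)
model = LogisticRegression(maxIter=10).fit(features)
print(model.coefficients)

Deep-learning workloads, as the reviewer notes, would need a separate framework on top of this.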
I would recommend Oracle Autonomous Data Warehouse to someone looking to fully automate the transferring of data, especially in a warehouse scenario, though given the elasticity of the suite that is offered, I can see it is applicable in other scenarios, not just warehouses.
Apache Hive allows us to write expressive solutions to complex problems thanks to its SQL-like syntax.
Relatively easy to set up and start using.
Very little ramp-up to start using the actual product: documentation is very thorough, there is an active community, and the code base is constantly being improved.
Very easy and fast to load data into the Oracle Autonomous Data Warehouse
Exceptionally fast retrieval of data when joining a 100-million-row table with a billion-row table, plus the size of the database was reduced by a factor of 10 due to how Oracle stores and organises data and indexes.
Flexibility with scaling CPU up and down on the fly when needed, and simply stopping it when not needed so you don't get charged while it is not running.
It is always patched and always available and you can add storage dynamically as you need it.
It is a very expensive product, but there are good reasons why it is expensive.
The product should support more cloud-based services. When we made the decision to buy the product (which was 20 years ago), there was no such thing to consider, but moving to a cloud-based data warehouse may promise more scalability, agility, and cost reduction. A new version of the Data Warehouse is on the way, but it looks a bit behind compared to other competitors.
Our healthcare data consists of 30% coded data (such as ICD-10 / SNOMED CT), but the rest is narrative (such as clinical notes). Oracle is the best for warehousing standardized data, but not a good choice when considering unstructured data, or a mix of the two.
Does not require continuous attention from the DBA; the autonomous features allow the database to perform most of the regular admin tasks without the need for human intervention.
Allows you to integrate multiple data sources into a central data warehouse, and to exploit the stored information with different analytic and reporting tools.
Hive is a very good big data analysis and ad-hoc query platform, and it supports scaling as well. BI processes can easily be integrated with Hadoop via Hive. It can deal with much larger data sets than a traditional RDBMS can. It is a "must-have" component of the big data domain.
If the team looking to use Apache Spark is not used to debugging and tweaking settings for jobs to ensure maximum optimization, it can be frustrating. However, the documentation and the support of the community on the internet can help resolve most issues. Moreover, it is highly configurable and it integrates with different tools (e.g. it can be used by dbt core), which increases the scenarios where it can be used.
Apache Hive is a FOSS project, and we hardly need to comment on the strength of open source and its developer community. It has tremendous developer support and awesome documentation, and a great deal of help can be gathered from the community.
1. It integrates very well with Scala and Python.
2. Its SQL interoperability is very easy to understand.
3. Apache Spark is way faster than the other competitive technologies.
4. The support from the Apache community for Spark is huge.
5. Execution times are faster compared to others.
6. There are a large number of forums available for Apache Spark.
7. The code for Apache Spark is readily available and easy to access.
8. Many organizations use Apache Spark, so many solutions are available for existing applications.
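A hedged sketch of the SQL interoperability mentioned in point 2 above, with made-up data and names: a DataFrame built in Python can be registered as a temporary view, queried in plain SQL, and the result picked back up with the DataFrame API.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-interop-sketch").getOrCreate()

events = spark.createDataFrame(
    [("login", 3), ("purchase", 1), ("login", 5)],
    ["event", "cnt"],
)
events.createOrReplaceTempView("events")

# Plain SQL over the Python-built DataFrame
totals = spark.sql("SELECT event, SUM(cnt) AS total FROM events GROUP BY event")

# Back to the DataFrame API in Python
totals.filter("total > 2").show()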
Understanding Oracle Cloud Infrastructure is really simple, and Autonomous databases are even more so. Using shared or dedicated infrastructure is one of the few things you need to consider when you start provisioning your Oracle Autonomous Data Warehouse.
Besides Hive, I have used Google BigQuery, which is costly but has very high computation speed. Amazon Redshift is another product I used at my most recent organisation. Both Redshift and BigQuery are managed solutions, whereas Hive needs to be managed by you.
Spark, in comparison to similar technologies, ends up being a one-stop shop. You can achieve so much with this one framework instead of having to stitch and weave multiple technologies from the Hadoop stack, all while getting incredible performance, minimal boilerplate, and the ability to write your application in the language of your choosing.
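As an illustration of that "one stop shop" point, reading from two different sources and writing to a third can live in one small Spark job; the paths, formats, and column names below are placeholders, not anything from the review:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multi-source-sketch").getOrCreate()

orders = spark.read.parquet("/data/orders")           # columnar files
customers = spark.read.json("/data/customers.json")   # semi-structured JSON

joined = orders.join(customers, on="customer_id", how="inner")
joined.write.mode("overwrite").format("orc").save("/data/joined_orc")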
As I mentioned, I have also worked with Amazon Redshift, but it is not as versatile as Oracle Autonomous Data Warehouse and does not provide as large a variety of products. Oracle Autonomous Data Warehouse is also more reliable than Amazon Redshift, which is why I have chosen it.
Overall, the business objectives of all of our clients have been met positively with Oracle Data Warehouse. Users were able to successfully carry out all of the required analysis using the warehouse data.
Using a 3-tier architecture with the Oracle Data Warehouse at the back end, the mid-tier has been integrated well. This is a big plus in providing the necessary tools for end users of the data warehouse to carry out their analysis.
All of the various BI products (OBIEE, Cognos, etc.) are able to use and exploit the various built-in analytic functionalities of the Oracle Data Warehouse.