Apache Pig is a programming tool for creating MapReduce programs that run on Hadoop.
PostgreSQL
Score 8.9 out of 10
PostgreSQL (alternately Postgres) is a free and open source object-relational database system with over 30 years of active development behind its reliability, feature robustness, and performance. It supports SQL and is designed to handle a variety of workloads flexibly.
Presto
Score 10.0 out of 10
Presto is an open source SQL query engine designed to run queries on data stored in Hadoop or in traditional databases.
Teradata's support for Presto development followed its acquisitions of Hadapt and Revelytix.
I think Presto is one of the best solutions out there today at the cutting edge for interactive query analysis. One of the challenges is that Presto is a niche tool for the interactive query use case and doesn't have as many knobs and whistles as Spark. In the foreseeable future …
Apache Pig is best suited for ETL-based data processes. It performs well in handling and analyzing large amounts of data and gives faster results than other similar tools. It is easy to implement, and any user with some initial training or some prior SQL knowledge can work with it. Apache Pig has a large community base globally.
PostgreSQL is best used for structured data, and works best when following relational database design principles. I would not use PostgreSQL for large unstructured data such as video, images, sound files, XML documents, or web pages, especially if these files have their own highly variable internal structure.
Presto is for simple interactive queries, whereas Hive is for reliable processing. If you have a fact-dimension join, Presto is great; however, for fact-fact joins Presto is not the solution. Presto is a great replacement for proprietary technology like Vertica.
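To make the fact-to-dimension use case concrete, here is a minimal sketch of an interactive Presto query issued from Python. It assumes the presto-python-client package and hypothetical `sales` (fact) and `stores` (dimension) tables in a `hive.analytics` schema; the host and credentials are placeholders, not details from the review.

```python
# Sketch: an interactive fact-to-dimension join against Presto.
# Assumes `pip install presto-python-client`; table names, host, and schema
# are illustrative placeholders.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",  # hypothetical coordinator
    port=8080,
    user="analyst",
    catalog="hive",
    schema="analytics",
)
cur = conn.cursor()
cur.execute("""
    SELECT st.region, SUM(s.amount) AS total_sales
    FROM sales s
    JOIN stores st ON s.store_id = st.store_id
    GROUP BY st.region
    ORDER BY total_sales DESC
""")
for region, total in cur.fetchall():
    print(region, total)
```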
Linking, embedding links and adding images is easy enough.
Once you have become familiar with the interface, Presto becomes very quick & easy to use (but, you have to practice & repeat to know what you are doing - it is not as intuitive as one would hope).
Organizing & design is fairly simple with click & drag parameters.
Presto was not designed for large fact-fact joins. This is by design: Presto does not leverage disk and uses memory for processing, which in turn makes it fast. However, this is a tradeoff; in an ideal world, people would like to use one system for all their use cases, and Presto should become more exhaustive by solving this problem.
Resource allocation is not like YARN's; Presto has priority-queue-based query resource allocation, so a query that takes long takes even longer. This might be alleviated by giving some more control back to the user to define priority or override it.
UDF support is not available in Presto, so you will have to write your own functions. While this is good for performance, it comes with the huge overhead of building exclusively for Presto and not being interoperable with other systems like Hive, Spark SQL, etc.
PostgreSQL is the best tool out there for relational data, so I have to give it a high rating when it comes to analytics, data availability, consistency, and so forth. SQL is also a relatively consistent language, so when it comes to building new tables and loading data in from the OLTP database, there are enough tools with which we can perform ETL on a scalable basis.
The data queries are relatively quick for small to medium-sized tables. With complex joins on a wide and deep table, however, query performance has room for improvement.
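When a complex join slows down like that, inspecting the plan is usually the first step. Below is a minimal sketch using psycopg2 and EXPLAIN ANALYZE; the connection details and the `orders`/`customers` tables are hypothetical, not taken from the review.

```python
# Sketch: print the execution plan of a complex join with EXPLAIN ANALYZE.
# Assumes `pip install psycopg2-binary`; all names here are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="analytics",
                        user="analyst", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT c.segment, COUNT(*) AS order_count
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        WHERE o.created_at >= NOW() - INTERVAL '30 days'
        GROUP BY c.segment
    """)
    for (plan_line,) in cur.fetchall():  # each row is one line of the plan
        print(plan_line)
conn.close()
```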
There are several companies that you can contract for technical support, like EnterpriseDB or Percona, both first-rate in expertise and commitment to the software.
But we do not have contracts with them; we have done everything from googling to forums, and we have never had a problem that we could not resolve or work around, across dozens of projects and more than 15 years now.
The online training is request based. Had there been recorded videos available online for potential users to benefit from, I would have rated it higher. The online documentation, however, is very helpful. The documentation PDF is downloadable and allows users to pace their own learning. With examples and code snippets, the documentation is a great starting point.
Apache Pig might help to get things started faster at first, and it was one of the best tools years back, but it lacks important features that are needed in the data engineering world right now. Pig also has a steeper learning curve, since it uses a proprietary language compared to Spark, which can be coded with Python or Java.
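To illustrate the point about Spark being approachable from Python, here is a minimal PySpark sketch of the kind of filter-and-aggregate ETL step a Pig script might otherwise express in Pig Latin. The input path and column names are made up for the example.

```python
# Sketch: a simple filter/group/aggregate ETL step in PySpark.
# Input path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

events = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)
daily_counts = (
    events.filter(F.col("status") == "ok")
          .groupBy("event_date")
          .agg(F.count("*").alias("events"))
)
daily_counts.write.mode("overwrite").parquet("hdfs:///data/daily_counts")
spark.stop()
```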
Although the competition between the different databases is increasingly aggressive, in the sense that they provide many improvements, new functionality, and compatibility with complementary components or environments, in some cases this requires staying within the same family of applications produced by the company that develops them. That is not all bad, but being able to adapt or configure different programs, applications, or other environments developed by third parties is what gives PostgreSQL a certain advantage, and this diversification in the components that can be combined with it is the reason it is a great option to choose.
Presto is good for a templated design appeal. You cannot be too creative via this interface - but, the layout and options make the finalized visual product appealing to customers. The other design products I use are for different purposes and not really comparable to Presto.
Higher learning curve than other similar technologies, so onboarding new engineers or changing ownership of Apache Pig code tends to be a bit of a headache.
Once the language is learned and understood, it can be relatively straightforward to write simple Pig scripts, so development can go relatively quickly with a skilled team.
As distributed technologies grow and improve, overall Apache Pig feels left in the dust and is more legacy code to support than something to actively develop with.
Easy to administer, so our DevOps team has only ever spent minimal time setting it up, tuning it, and maintaining it.
Easy to interface with, so our Engineering team has only ever spent minimal time querying or modifying the database. Getting the data is straightforward; what we do with it is the bigger concern.
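As a sketch of how little ceremony that querying or modifying typically involves, here is a short psycopg2 round trip; the table, column, and connection details are placeholders rather than anything from the review.

```python
# Sketch: a typical parameterized update with psycopg2; names are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="app", user="app", password="secret")
with conn, conn.cursor() as cur:  # `with conn` commits on success
    cur.execute(
        "UPDATE accounts SET status = %s WHERE last_login < NOW() - INTERVAL '1 year'",
        ("inactive",),
    )
    print(cur.rowcount, "rows updated")
conn.close()
```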