PostgreSQL (alternatively Postgres) is a free and open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. It supports SQL and is designed to handle a wide variety of workloads flexibly.
Presto
Score 5.3 out of 10
Presto is an open source SQL query engine designed to run queries on data stored in Hadoop or in traditional databases.
Teradata's support for Presto development followed its acquisition of Hadapt and Revelytix.
PostgreSQL, unlike many other databases, is user-friendly and fully open source. It is ideal for relational workloads where speed and efficiency are required. It enables high-availability and disaster-recovery replication from instance to instance, and it can store data in JSON format, including hashes, keys, and values. Multi-platform compatibility is also a big selling point. We could not, however, make use of all of the DBMS's cores. While it works well in fast environments, it can be problematic in slower ones and with multi-master replication.
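A minimal sketch of the JSON storage the reviewer describes, assuming a made-up events table with a jsonb payload column (the table and data are illustrative, not from the review):

    -- Illustrative only: a jsonb column stores nested keys and values,
    -- and a GIN index speeds up containment queries on it.
    CREATE TABLE events (
        id      bigserial PRIMARY KEY,
        payload jsonb NOT NULL
    );

    INSERT INTO events (payload)
    VALUES ('{"user": "alice", "action": "login", "meta": {"ip": "10.0.0.1"}}');

    CREATE INDEX events_payload_idx ON events USING gin (payload);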
Presto is for simple interactive queries, whereas Hive is for reliable processing. If you have a fact-dimension join, Presto is great; for fact-fact joins, however, Presto is not the solution. Presto is a great replacement for proprietary technology like Vertica.
Its stability, speed of response, and resource management are excellent, even in complex database environments and on low-resource machines.
The large number of resources available, along with the many compatible first- and third-party tools, greatly increases productivity.
It adapts to a variety of environments, whether distributed or not, and its complete set of configuration options allows the working configuration to be customized to whatever is required.
Its excellent handling of referential and transactional integrity, its internal security scheme, and the ease with which we can create backups are some of its notable strengths.
Linking, embedding links and adding images is easy enough.
Once you have become familiar with the interface, Presto becomes very quick & easy to use (but, you have to practice & repeat to know what you are doing - it is not as intuitive as one would hope).
Organizing & designing are fairly simple with click-and-drag parameters.
The query syntax for JSON fields is unwieldy when you start getting into complex queries with many joins.
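As a rough illustration of what the reviewer means, here is the kind of query that gets verbose; the orders and customers tables and their JSON layout are assumed for the example:

    -- The ->, ->>, and #>> operators extract JSON fields; casts and nested
    -- paths pile up quickly once joins are involved.
    SELECT o.id,
           o.payload ->> 'status'              AS status,
           o.payload -> 'customer' ->> 'email' AS customer_email,
           c.settings #>> '{billing,plan}'     AS plan
    FROM   orders o
    JOIN   customers c
      ON   c.id = (o.payload -> 'customer' ->> 'id')::int;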
I wish there were a distinction (a flag) you could set for automated scripts vs. working in the psql CLI, which would provide an 'Are you sure you want to do X?' type prompt if your query is likely to affect more than a certain number of rows, especially on updates/deletes. Setting the flag in the headless (scripted) flow would disable the prompt.
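As far as I know, psql offers no such flag; a common manual safeguard is to run the statement inside a transaction and check the reported row count before committing (the table and predicate below are hypothetical):

    BEGIN;
    UPDATE orders
       SET status = 'cancelled'
     WHERE created_at < '2020-01-01';
    -- psql reports "UPDATE <n>"; if n is not what you expected, roll back.
    ROLLBACK;   -- or COMMIT once the count looks right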
Better documentation around JSON and Array aggregation, with more examples of how the data is transformed.
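For reference, a small sketch of the aggregation functions in question, using an assumed orders table and columns:

    -- array_agg builds a SQL array, jsonb_agg a JSON array, and
    -- jsonb_object_agg a key/value object, one result row per group.
    SELECT customer_id,
           array_agg(order_id)               AS order_ids,
           jsonb_agg(payload)                AS raw_orders,
           jsonb_object_agg(order_id, total) AS totals_by_order
    FROM   orders
    GROUP  BY customer_id;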
Presto was not designed for large fact-fact joins. This is by design: Presto does not spill to disk and uses memory for processing, which in turn makes it fast. However, this is a tradeoff; in an ideal world, people would like to use one system for all their use cases, and Presto could become more comprehensive by solving this problem.
Resource allocation is not like YARN's; Presto uses priority-queue-based query resource allocation, so a query that takes long ends up taking even longer. This might be alleviated by giving the user more control to define or override priority.
UDF support is not available in Presto, so you have to write your own functions. While this is good for performance, it comes with the huge overhead of building exclusively for Presto and not being interoperable with other systems like Hive, SparkSQL, etc.
PostgreSQL is the best tool out there for relational data, so I have to give it a high rating for analytics, data availability, consistency, and so on. SQL is also a relatively consistent language, so when it comes to building new tables and loading data in from the OLTP database, there are enough tools for us to perform ETL at scale.
Data queries are relatively quick on small to medium-sized tables. With complex joins and a wide, deep table, however, query performance has room for improvement.
There are several companies you can contract for technical support, such as EnterpriseDB or Percona, both first-rate in expertise and commitment to the software.
But we do not have contracts with them; we have gotten by with everything from googling to forums, and have never hit a problem we could not resolve or work around, across dozens of projects and more than 15 years now.
The online training is request-based. Had there been recorded videos available online for potential users to benefit from, I could have rated it higher. The online documentation, however, is very helpful: the PDF is downloadable and lets users pace their own learning. With examples and code snippets, the documentation is a great starting point.
Postgres stacks up just fine alongside the other big players in the RDBMS world. It's very popular for a reason. It's very close to MySQL in terms of cost and features; I'd pick either solution and be just as happy. Compared to Oracle, it is a MUCH cheaper solution that is just as usable.
Presto is good for a templated design approach. You cannot be too creative via this interface, but the layout and options make the finished visual product appealing to customers. The other design products I use are for different purposes and not really comparable to Presto.
The user-role system has saved us tons of time and thus money. As I mentioned in the "Use Case" section, Postgres is not only used by engineering but also finance to measure how much to charge customers and customer support to debug customer issues. Sure, it's not easy for non-technical employees to psql in and view raw tables, but it has saved engineering hundreds of man-hours that would have had to be spent on building equivalent tools to serve finance or customer support.
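A rough sketch of the kind of role setup that makes this possible; the role and schema names are made up for the example, not taken from the review:

    -- A NOLOGIN group role with read-only access; individual logins inherit it.
    CREATE ROLE readonly_analysts NOLOGIN;
    GRANT USAGE ON SCHEMA public TO readonly_analysts;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_analysts;
    ALTER DEFAULT PRIVILEGES IN SCHEMA public
        GRANT SELECT ON TABLES TO readonly_analysts;

    CREATE ROLE finance_user LOGIN PASSWORD 'change-me' IN ROLE readonly_analysts;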
It provides incredibly trustworthy storage for whatever customer data is dumped into it. In our six years of running Postgres, we have not lost a byte of customer data, whether from Postgres mishandling a transaction or during the multiple times the hard drives failed (thanks to ACID compliance!).
This is less significant, but Postgres is also quite easy to manage (unless you are going above and beyond to squeeze out every last bit of performance). There's not much to configure, and the out of the box settings are quite sane. That has saved us engineers lots of time that would have gone into Postgres administration.