Battle-tested-never-data-loss alternative to MySQL
Updated February 14, 2020

Anonymous | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User

Overall Satisfaction with PostgreSQL

We in the software engineering department use Postgres to permanently store most of our customers' information that the app needs--everything from their settings and login information (users' passwords are encrypted and salted, of course) to the work they've created in the app. The web app writes to Postgres whenever users update their info or save their work, and reads from Postgres to render webpages. What's cool is that Postgres also has great user management, so we were able to give read-only access to just a few parts of the database to the finance department, who need to know how much users are using the app in order to bill them accurately, and to customer support, who need to see user data in order to help users debug issues.
  • As I mentioned before, Postgres has an incredibly flexible and simple-to-use user/role management system. First, there are users--login credentials that you can hand out to individual people. Then there are roles, which bundle read and/or write access to tables and can be granted to users (there's a small sketch of this after the list). Through this system you can easily control who can read and update which tables, and the system is very well-tested, so there's no concern about users accessing or writing data they shouldn't--unless your Postgres admin really messes up!
  • I could write pages on this and would need to reference the Postgres manual itself to do it justice, but Postgres is dang scalable! Postgres has undergone active development by some of the brightest engineers for over 30 years now, and the result is that there are many ways to scale it beyond just upgrading the SSD, CPU, and memory. You can scale reads horizontally with multiple read replicas that handle all the read traffic. You can add highly optimized indices to your tables. You can change columns to the JSONB type for super fast JSON queries. You can tune settings that batch writes so they don't overwhelm the disk. Between those options and the other tips and tricks experienced Postgres admins have, you can get a lot out of a single database. There's a reason Yahoo stuck with Postgres as its main database for decades, even past the point of 4 petabytes and 10k writes/second!
  • Postgres, simply put, has achieved super-wide industry adoption (around 6% market share), which means it's really easy to integrate into your stack and to hire knowledgeable developers to service it. All the major database libraries of the common web frameworks I know of (e.g. Rails' ActiveRecord, Spring's Hibernate, Play Scala's Slick) have deep out-of-the-box Postgres support, with no extra configuration needed to get your web app reading and writing to Postgres. Many universities in the US include Postgres in their curricula, too (e.g. UC Berkeley). It's really easy to hire either new grads or experienced software engineers for positions that require Postgres knowledge.
  • If you are comparing Postgres to MySQL and you want to use JSON, know that Postgres has better performance and features for indexing JSON blobs, simply because Postgres beat MySQL to the JSON game by several years (see the indexing sketch after this list). I haven't used MySQL's JSON support myself, but that's what my co-workers say--and it's true that Postgres started supporting JSON years before MySQL did.
  • If you are comparing Postgres to MySQL, MySQL does have superior write performance. I don't want to get too technical here because it involves deep database internals, but you should know that Uber actually switched from Postgres to MySQL for this exact reason and wrote a great article about why: https://eng.uber.com/MySQL-migration/.
  • Anecdotally, the Postgres replication process for keeping replicas up to date with the primary is a bit buggy. I say anecdotally because it happened to us at my company: a schema update made on the primary didn't reach one replica for almost a minute, and during that window probably 50% of the traffic to our website saw 500 Internal Server Error pages. We didn't know why until we dug deep into the Postgres logs on that replica (a lag check like the one sketched after this list would have surfaced it much sooner).
  • Postgres' migration from 9 to 10 was a disaster. If you want to be on the latest and greatest, which all tech companies should want, migrating an existing database from 9 to 10 was a real pain. Sure, there's a tool to do it for you, but it involved hours of downtime for our mere 4 TB of data. I wish the Postgres maintainers had put more thought into making the tool faster, or into letting it migrate bit-by-bit without downtime. And don't get me started on how confusing the migration configuration was...
  • The user/role system has saved us tons of time and thus money. As I mentioned in the "Use Case" section, Postgres is used not only by engineering but also by finance, to measure how much to charge customers, and by customer support, to debug customer issues. Sure, it's not easy for non-technical employees to psql in and view raw tables, but it has saved engineering hundreds of man-hours that would otherwise have been spent building equivalent tools for finance and customer support.
  • It provides incredibly trustworthy storage for whatever customer data we dump into it. In our six years of running Postgres, we have not lost a byte of customer data--not to Postgres messing up a transaction, and not during the multiple times our hard drives failed (thanks to ACID compliance!).
  • This is less significant, but Postgres is also quite easy to manage (unless you are going above and beyond to squeeze out every last bit of performance). There's not much to configure, and the out-of-the-box settings are quite sane. That has saved us engineers lots of time that would otherwise have gone into Postgres administration.
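To make the user/role setup from the first bullet concrete, here's a minimal sketch of how it might look in plain SQL. The table, role, and user names (usage_events, invoices, finance_readonly, alice_finance) are hypothetical, for illustration only:

```sql
-- A role that bundles read-only access to billing-related tables:
CREATE ROLE finance_readonly;
GRANT SELECT ON usage_events, invoices TO finance_readonly;

-- A login user for someone in finance, granted that role:
CREATE USER alice_finance WITH PASSWORD 'changeme';
GRANT finance_readonly TO alice_finance;

-- Customer support could get its own role covering a different set of tables.
```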
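And here's a hedged sketch of the JSONB indexing trick mentioned above; the user_documents table is made up, not our actual schema:

```sql
-- Hypothetical table storing each user's app state as a JSONB document:
CREATE TABLE user_documents (
    id      bigserial PRIMARY KEY,
    user_id bigint NOT NULL,
    doc     jsonb  NOT NULL
);

-- A GIN index makes containment queries on the JSONB column fast:
CREATE INDEX user_documents_doc_idx ON user_documents USING GIN (doc);

-- This query can use the index: find documents whose JSON contains
-- the key/value pair {"status": "draft"}.
SELECT id FROM user_documents WHERE doc @> '{"status": "draft"}';
```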
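Finally, for the replication issue: on Postgres 10 or newer, a quick look at pg_stat_replication on the primary would have shown us the lag much faster than digging through logs (this assumes standard streaming replication; the column names differ slightly on 9.x):

```sql
-- Run on the primary: shows each replica's connection state and how far
-- behind it is on receiving and applying changes.
SELECT client_addr, state, sent_lsn, replay_lsn, replay_lag
FROM pg_stat_replication;
```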
MySQL: As I mentioned before, MySQL has superior write performance. However, Postgres has superb read performance and safer ACID transactions, i.e. less potential for data loss.
Elasticsearch: we use Elasticsearch to store free-form customer data, but that's a different use case. Sometimes you'll want to use both, like we do.
AWS, Heroku, and DigitalOcean all provide Postgres-as-a-service, where you pretty much never need to administer the database yourself--they do it for you. The Postgres community has also developed great, reasonably priced extensions such as Citus, and there are Postgres-compatible databases like CockroachDB, in case you need additional help scaling. If you need documentation, Postgres' docs are super thorough, and the official forums are active.

Do you think PostgreSQL delivers good value for the price?

Yes

Are you happy with PostgreSQL's feature set?

Yes

Did PostgreSQL live up to sales and marketing promises?

Yes

Did implementation of PostgreSQL go as expected?

Yes

Would you buy PostgreSQL again?

Yes

Postgres is useful for perhaps 99% of apps that simply need to store user data somewhere and make it quickly retrievable later. If you want to do full-text or dynamic JSON searches across everything (e.g. you are building a search engine), perhaps one of the NoSQL databases will serve you better. But regardless, you will probably still need to store user data--even if you are building a search engine--and storing it in Postgres (or a similar relational database) is much simpler. Postgres is also really good for industries where you get audited regularly (e.g. legal or financial) and can never corrupt or lose user data. That's because Postgres is fully ACID compliant: once it confirms a transaction as committed, that data is safe even if lightning strikes the server.
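For what "fully ACID compliant" means in practice, here's a minimal sketch (the accounts table is hypothetical): either every statement inside the transaction takes effect or none do, and once COMMIT returns, the change is on durable storage.

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- If the server crashes mid-transaction, neither update is applied.
-- Once COMMIT returns, both updates survive crashes and power loss.
```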