PostgreSQL (also known as Postgres) is a free and open source object-relational database system with over 30 years of active development and a reputation for reliability, feature robustness, and performance. It supports SQL and is designed to handle a wide variety of workloads.
QuestDB
Score 10.0 out of 10
QuestDB is an open source time series database. It implements SQL, exposes a Postgres wire protocol and a REST API, and supports ingestion via the InfluxDB line protocol.
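To make that SQL interface concrete, here is a minimal sketch (the table and column names are hypothetical, not taken from the product documentation) of a time series table and a downsampling query; statements like these can be submitted over the Postgres wire protocol or the REST API:

    -- hypothetical table of sensor readings; the designated timestamp
    -- and partitioning clause are QuestDB extensions to standard SQL
    CREATE TABLE readings (
        device  SYMBOL,
        value   DOUBLE,
        ts      TIMESTAMP
    ) TIMESTAMP(ts) PARTITION BY DAY;

    -- downsample to 1-minute averages per device
    SELECT ts, device, avg(value) AS avg_value
    FROM readings
    SAMPLE BY 1m;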
PostgreSQL, unlike many other databases, is user-friendly and fully open source. It is ideal for relational data that needs to be accessed with speed and efficiency. It enables high-availability and disaster-recovery replication from instance to instance. PostgreSQL can store data in JSON format, including hashes, keys, and values. Multi-platform compatibility is also a big selling point. We could, however, use all the DBMS's cores. While it works well in fast environments, it can be problematic in slower ones or with multi-master replication.
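The JSON storage mentioned above typically means a jsonb column holding arbitrary keys and values alongside relational columns. A small sketch (the table here is hypothetical):

    -- hypothetical table mixing relational columns with a jsonb document
    CREATE TABLE events (
        id         bigserial PRIMARY KEY,
        created_at timestamptz NOT NULL DEFAULT now(),
        payload    jsonb NOT NULL
    );

    INSERT INTO events (payload)
    VALUES ('{"user": "alice", "action": "login", "meta": {"ip": "10.0.0.1"}}');

    -- pull a single key back out
    SELECT payload->>'user' AS username FROM events;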
QuestDB is well suited for any use case where you need to store large amounts of data and performance is the key factor - for both reads and writes. That includes use cases like market data storage in the financial industry, any kind of telemetry, etc.
The stability it offers, its speed of response, and its resource management are excellent even in complex database environments and on low-resource machines.
The large number of resources it offers, in addition to the many first-party and third-party tools that are compatible with it, greatly increases productivity.
Its adaptability to various environments, whether distributed or not, and its complete set of configuration options allow the working configuration to be customized extensively to whatever is required.
Its excellent handling of referential and transactional integrity, its internal security scheme, and the ease with which we can create backups are some of the strengths that can be mentioned.
The query syntax for JSON fields is unwieldy when you start getting into complex queries with many joins.
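A hypothetical example of the kind of operator chaining this refers to, once joins are involved (tables, columns, and JSON keys are made up for illustration):

    -- orders.details and customers.details are jsonb columns
    SELECT o.id,
           o.details->'shipping'->>'city'           AS ship_city,
           o.details#>>'{items,0,sku}'              AS first_sku,
           c.details->'preferences'->>'newsletter'  AS newsletter
    FROM orders o
    JOIN customers c
      ON c.id = (o.details->>'customer_id')::bigint
    WHERE o.details @> '{"status": "paid"}';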
I wish there were a distinction (a flag) you could set for automated scripts vs. working in the psql CLI, which would provide an 'Are you sure you want to do X?' type prompt if your query is likely to affect more than a certain number of rows, especially on updates/deletes. Setting the flag in the headless (scripted) flow would disable the prompt.
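psql does not have such a row-count prompt built in, but a common workaround - shown here only as a sketch, with a hypothetical accounts table - is to run destructive statements inside an explicit transaction and check the reported row count before committing:

    BEGIN;

    UPDATE accounts
       SET status = 'inactive'
     WHERE last_login < now() - interval '2 years';
    -- psql prints e.g. "UPDATE 1432"; if that number looks wrong...

    ROLLBACK;   -- ...back out safely
    -- COMMIT;  -- ...or commit once the row count matches expectations

In interactive psql sessions, setting "\set AUTOCOMMIT off" gives similar protection by keeping each statement inside an open transaction until you commit.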
Better documentation around JSON and Array aggregation, with more examples of how the data is transformed.
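For readers hitting the same gap, a small hypothetical example of the two aggregates and the shape of data they produce (tables and values are made up):

    -- roll child rows up into a JSON array per parent
    SELECT o.id,
           json_agg(json_build_object('sku', i.sku, 'qty', i.qty)) AS items
    FROM orders o
    JOIN order_items i ON i.order_id = o.id
    GROUP BY o.id;
    -- items => [{"sku":"A-1","qty":2},{"sku":"B-7","qty":1}]

    -- or collapse a column into a plain array
    SELECT o.id, array_agg(i.sku ORDER BY i.sku) AS skus
    FROM orders o
    JOIN order_items i ON i.order_id = o.id
    GROUP BY o.id;
    -- skus => {A-1,B-7}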
PostgreSQL is the best tool out there for relational data, so I have to give it a high rating when it comes to analytics, data availability and consistency, and so forth. SQL is also a relatively consistent language, so when it comes to building new tables and loading data in from the OLTP database, there are enough tools that we can perform ETL on a scalable basis.
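As a sketch of that kind of workflow (the schema and table names below are made up), a derived analytics table can often be built and refreshed with plain SQL:

    -- hypothetical daily rollup built from an OLTP orders table
    CREATE TABLE IF NOT EXISTS analytics_daily_orders (
        day          date PRIMARY KEY,
        order_count  bigint NOT NULL,
        revenue      numeric NOT NULL
    );

    INSERT INTO analytics_daily_orders (day, order_count, revenue)
    SELECT created_at::date, count(*), sum(total)
    FROM orders
    WHERE created_at >= current_date - 1
      AND created_at <  current_date
    GROUP BY created_at::date
    ON CONFLICT (day) DO UPDATE
      SET order_count = EXCLUDED.order_count,
          revenue     = EXCLUDED.revenue;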
Data queries are relatively quick for a small to medium-sized table. With complex joins and a wide, deep table, however, query performance has room for improvement.
There are several companies that you can contract for technical support, like EnterpriseDB or Percona, both first-rate in expertise and commitment to the software.
But we do not have contracts with them; we have relied on everything from googling to forums, and we have never had a problem that we could not resolve or work around. And that is across dozens of projects and more than 15 years now.
The online training is request-based. Had there been recorded videos available online for potential users to benefit from, I could have rated it higher. The online documentation, however, is very helpful. The documentation PDF is downloadable and allows users to pace their own learning. With examples and code snippets, the documentation is a great starting point.
Postgres stacks up just fine alongside the other big players in the RDBMS world. It's very popular for a reason. It's very close to MySQL in terms of cost and features - I'd pick either solution and be just as happy. Compared to Oracle, it is a MUCH cheaper solution that is just as usable.
We were looking for a time series database that would be able to handle L2 market data and came across QuestDB. From the beginning we were impressed by how well QuestDB performs and that it significantly outperforms the other open source TSDBs on the market, like InfluxDB, ClickHouse, Timescale, etc. Apart from the excellent performance, it is also super easy to use and deploy, which makes the experience of using the database very pleasant - we were able to be up and running and storing data within a few hours. A topic in itself is the QuestDB team, which is super responsive on their Slack channel and always ready to help with any query. They are constantly improving the product, and if there is some missing feature that is blocking you, they always try their best to implement it ASAP and release a new version - some of the best support I have ever seen so far in the open source community.
The user-role system has saved us tons of time and thus money. As I mentioned in the "Use Case" section, Postgres is not only used by engineering but also finance to measure how much to charge customers and customer support to debug customer issues. Sure, it's not easy for non-technical employees to psql in and view raw tables, but it has saved engineering hundreds of man-hours that would have had to be spent on building equivalent tools to serve finance or customer support.
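The kind of setup described here usually boils down to a few read-only roles plus per-user logins. A hypothetical sketch (role, schema, and table names are invented for illustration):

    -- read-only role for the finance team
    CREATE ROLE finance_ro NOLOGIN;
    GRANT USAGE ON SCHEMA public TO finance_ro;
    GRANT SELECT ON invoices, subscriptions, usage_metrics TO finance_ro;

    -- individual logins inherit the role's privileges
    CREATE ROLE alice LOGIN PASSWORD 'change-me' IN ROLE finance_ro;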
It provides incredibly trustworthy storage for whatever customer data is dumped in. In our 6 years of running Postgres, we have not lost a byte of customer data due to Postgres messing up a transaction, or during the multiple times the hard drives failed (thanks to ACID compliance!).
This is less significant, but Postgres is also quite easy to manage (unless you are going above and beyond to squeeze out every last bit of performance). There's not much to configure, and the out of the box settings are quite sane. That has saved us engineers lots of time that would have gone into Postgres administration.