Cyral
Score: N/A
Starting price: $50 per month
Cyral is a cloud-native Security-as-Code solution to protect the modern data layer. It allows engineering teams to observe, secure, and manage data endpoints in the cloud via a sidecar.

Db2
Score: 8.9 out of 10
Starting price: $0
Db2 is a family of relational database software solutions offered by IBM. It includes the standard Db2 and Db2 Warehouse editions, deployable either in the cloud or on-premises.
Pricing

Editions & Modules

Cyral
  Access: $50 per identity per month
  Access & Authorization: custom pricing, per identity per month
  Access & Authorization & Assurance: custom pricing, per identity per month

Db2
  Db2 on Cloud Lite: $0
  Db2 on Cloud Standard: $99 per month
  Db2 Warehouse on Cloud Flex One: $898 per month
  Db2 on Cloud Enterprise: $946 per month
  Db2 Warehouse on Cloud Flex for AWS: $2,957 per month
  Db2 Warehouse on Cloud Flex: $3,451 per month
  Db2 Warehouse on Cloud Flex Performance: $13,651 per month
  Db2 Warehouse on Cloud Flex Performance for AWS: $13,651 per month
  Db2 Standard Edition: Contact us
  Db2 Advanced Edition: Contact us
Pricing Offerings

Cyral
  Free Trial: Yes
  Free/Freemium Version: No
  Premium Consulting/Integration Services: No
  Entry-level Setup Fee: No setup fee

Db2
  Free Trial: Yes
  Free/Freemium Version: Yes
  Premium Consulting/Integration Services: Yes
  Entry-level Setup Fee: Optional
Additional Details
* Cyral defines an identity as any user, app, or other actor who uses or manages a data repository.
For any company engaged in data processing, this is excellent software for gaining confidence that all of your data, analyses, graphics, and business analytics are completely safe. The whole department is happy to work with it because it is very easy to use.
DB2 is well suited for high transaction databases and high availability databases. It is an excellent on-premise solution that requires very little administration. I have used it in so many businesses including telecommunications, warehousing, manufacturing, distribution centers, energy utilities and others. I honestly haven't found an area where it didn't do the job needed. The only area that it might have a weakness is large high-performance data warehouses. However, my understanding is that IBM is working to address this so that it can compete with Teradata and others.
DB2 maintains itself very well. The Task Scheduler component of DB2 allows for statistics gathering and reorganization of indexes and tables without user interaction and without specific knowledge of cron or Windows Task Scheduler / scheduled jobs.
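As a rough illustration of the kind of maintenance the reviewer describes, the following is a minimal sketch of the RUNSTATS and REORG commands such a scheduled task might issue. The table name SALES.ORDERS is a hypothetical placeholder, and the script would be run through the CLP (for example, with db2 -tvf maintenance.clp).

```
-- Minimal maintenance sketch; SALES.ORDERS is a hypothetical table name.

-- Refresh optimizer statistics, including distribution statistics and all indexes.
RUNSTATS ON TABLE SALES.ORDERS WITH DISTRIBUTION AND DETAILED INDEXES ALL;

-- Reorganize the table and its indexes to reclaim space and restore clustering.
REORG TABLE SALES.ORDERS;
REORG INDEXES ALL FOR TABLE SALES.ORDERS;
```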
Its use of the ASYNC, NEARSYNC, and SYNC HADR (High Availability Disaster Recovery) modes gives you a range of options for maintaining a very high uptime ratio. Failover from PRIMARY to SECONDARY becomes very easy with just a single command or a mouse click in the GUI.
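As a sketch of what this looks like in practice (the database name SAMPLE and the choice of NEARSYNC are assumptions for illustration, not details from the review), the synchronization mode is a database configuration parameter, and a planned failover is a single TAKEOVER command issued on the standby:

```
-- Pick one of the HADR synchronization modes mentioned above (SYNC, NEARSYNC, or ASYNC).
-- (Other HADR host/port configuration parameters must also be set; omitted here.)
UPDATE DB CFG FOR SAMPLE USING HADR_SYNCMODE NEARSYNC;

-- Start HADR on each server: this command on the standby...
START HADR ON DATABASE SAMPLE AS STANDBY;
-- ...and this one on the primary.
START HADR ON DATABASE SAMPLE AS PRIMARY;

-- Planned failover: run on the standby, it swaps the primary and standby roles.
TAKEOVER HADR ON DATABASE SAMPLE;
```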
Task Scheduler (DB2 9.7 and earlier) allows jobs to be run within other jobs, and exit and error codes can determine which other jobs are run. This allows for ease of maintenance without third-party software.
Tablespace usage and automatic storage help keep your data segmented while at rest, making partitioning easier.
Ability to run commands via the CLI (Command Line Interface) or via Control Center / Data Studio (DB2 10.x+) makes administration a breeze.
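To illustrate the last two points, here is a minimal sketch of creating an automatic-storage tablespace from the CLI and placing a table in it; the database, tablespace, and table names are hypothetical, not taken from the review.

```
-- SAMPLE, TS_APPDATA, and APP.CUSTOMER are hypothetical names used for illustration.
CONNECT TO SAMPLE;

-- Automatic storage: DB2 manages the containers, keeping data segmented by
-- tablespace without hand-managed container files.
CREATE TABLESPACE TS_APPDATA MANAGED BY AUTOMATIC STORAGE;

-- Place a table in that tablespace.
CREATE TABLE APP.CUSTOMER (
  ID   INTEGER NOT NULL PRIMARY KEY,
  NAME VARCHAR(100)
) IN TS_APPDATA;

CONNECT RESET;
```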
Since our services are running in IBM Kubernetes, using IBM Cloud Databases seems to be the best option. It may provide better performance than other vendors because everything is running in the same cloud. The overall experience so far has been good as well.
You have to be well versed in the technology, not only through the GUI but also from the command line, to use this software to its fullest.
Any issues related to DB2's availability are usually resolved easily and quickly. We also have a team of dedicated analysts and admins to support the database technically. Once in a while we do request support from IBM for complex issues that the on-premises team can't resolve; the response is usually very fast and the support is amazing!
Performance is exceptional if you take care to maintain the database. It is a very powerful tool and at the same time very easy to use. In our installation, the database runs on a DB machine on the mainframe and is accessed through ODBC connectors directly from the branch servers, with a fabulous end-user experience.
Easily the best product support team. :) Whenever we have questions, they have answered those in a timely manner and we like how they go above and beyond to help.
Db2 is one of the best relational databases I’ve used. It can maintain large amounts of data and execute millions of transactions in a fraction of a second. Used properly, an organization can build a database with thousands of tables, and it can still provide the exact information the applications need within a short amount of time.
Since I use DB2 only to support my IzPCA activities, my knowledge here is somewhat limited.
Anyway, from what I was able to understand, DB2 is extremely scalable.
Maybe the information below could serve as an example of scalability.
The customer has a huge mainframe environment: 13 z15 CECs, around 80 LPARs, and maybe more than 50 Sysplexes (I am not totally sure about this last figure...).
Today we have 7 IzPCA databases, each one in a distinct Sysplex.
Plans are underway to end up with a small LPAR containing only one DB2 subsystem and only one database, transmit the data from many other LPARs to it, and then process all the data in that single database.
The IzPCA collect process (reading the received data, manipulating it, and inserting rows into the tables) is today a huge job, demanding many elapsed hours and a lot of CPU.
Almost 100% of the tables are of the PBR (partition-by-range) type and the insert jobs run in parallel, but in 4 of the 7 databases it is still a really huge and long process.
Combining the INSERT loads from the 7 databases into only one would be impossible...
But IzPCA recently introduced a new feature called "Continuous Collector".
With that feature, small amounts of data are transmitted to the central LPAR every 5 minutes (or even less), processed immediately in a short period of time and with small CPU usage, instead of one or two transmissions per day of very large amounts of data, with the corresponding collect jobs running only once or twice a day, with long elapsed times and huge consumption of CPU.
I suspect the total CPU seconds consumed will be more or less the same in both cases, but with the new method it will occur in small bursts many times a day!
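For readers unfamiliar with the term, the PBR tables mentioned in this review are partition-by-range tables, where rows are spread across partitions by a key range, which makes it easier to run inserts against different partitions in parallel. Below is a minimal sketch of such a definition; the table, column, and limit values are hypothetical and are not taken from IzPCA.

```
-- Hypothetical partition-by-range (PBR) table; all names are invented for illustration.
CREATE TABLE IZPCA.METRICS
  (COLLECT_DATE DATE          NOT NULL,
   LPAR_NAME    CHAR(8)       NOT NULL,
   CPU_SECONDS  DECIMAL(15,3))
  PARTITION BY RANGE (COLLECT_DATE)
   (PARTITION 1 ENDING AT ('2024-03-31'),
    PARTITION 2 ENDING AT ('2024-06-30'),
    PARTITION 3 ENDING AT ('2024-09-30'),
    PARTITION 4 ENDING AT ('2024-12-31'));
```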