AWS Data Exchange is a data integration service whose subscribers can browse the AWS Data Exchange catalog to find relevant, up-to-date commercial data products covering a wide range of industries, including financial services, healthcare, life sciences, geospatial, consumer, media & entertainment, and more.
N/A
Db2
Score 8.6 out of 10
N/A
Db2 is a family of relational database software solutions offered by IBM. It includes the standard Db2 and Db2 Warehouse editions, deployable either on-cloud or on-premises.
$0
Pricing
AWS Data Exchange
Db2
Editions & Modules
No answers on this topic
Db2 on Cloud Lite
$0
Db2 on Cloud Standard
$99
per month
Db2 Warehouse on Cloud Flex One
$898
per month
Db2 on Cloud Enterprise
$946
per month
Db2 Warehouse on Cloud Flex for AWS
$2,957
per month
Db2 Warehouse on Cloud Flex
$3,451
per month
Db2 Warehouse on Cloud Flex Performance
$13,651
per month
Db2 Warehouse on Cloud Flex Performance for AWS
$13,651
per month
Db2 Standard Edition
Contact Sales
Db2 Advanced Edition
Contact Sales
Offerings
Pricing Offerings
AWS Data Exchange
Db2
Free Trial
No
Yes
Free/Freemium Version
No
Yes
Premium Consulting/Integration Services
No
Yes
Entry-level Setup Fee
No setup fee
Optional
Additional Details
—
—
More Pricing Information
Community Pulse
AWS Data Exchange
Db2
Features
AWS Data Exchange
Db2
Data Source Connection
Comparison of Data Source Connection features of Product A and Product B
AWS Data Exchange
8.0
2 Ratings
3% below category average
Db2
-
Ratings
Connect to traditional data sources
7.0 (2 Ratings)
0 Ratings
Connect to Big Data and NoSQL
9.0 (1 Rating)
0 Ratings
Data Modeling
Comparison of Data Modeling features of Product A and Product B
AWS Data Exchange
8.2
1 Rating
5% above category average
Db2
-
Ratings
Data model creation
9.0 (1 Rating)
0 Ratings
Metadata management
9.0 (1 Rating)
0 Ratings
Business rules and workflow
7.0 (1 Rating)
0 Ratings
Collaboration
9.0 (1 Rating)
0 Ratings
Testing and debugging
7.0 (1 Rating)
0 Ratings
Data Governance
Comparison of Data Governance features of Product A and Product B
AWS Data Exchange fits best for scenarios where you have datasets that you would like to sell and deliver to anyone who wants to purchase them. It really beats having to set up downloads via your own website or portal. However, it can get complicated to manage if you're trying to deliver a dataset a client has already paid for.
I have primarily used it as the basis for an SIS, but I have also migrated more than a few systems from other database systems (FileMaker, MySQL, etc.) to DB2. DB2 does have a better structural approach than FileMaker, which allows for more data consistency, but this can also lead to an inflexibility that is sometimes counterintuitive when trying to compensate for the flexibility of the work environment, as schools tend to take an all-in-one approach.
There have been a lot of problems with ADX. The entire system is incredibly clunky from beginning to end. By AWS's own admission, it is missing a lot of "table-stakes" functionality: the ability to see who is visiting your pages, more flexibility to edit and update your listings, and the ability to create a storefront or catalog that actually tries to sell your products. All in all, you're flying completely blind with AWS. In our conversations with other sellers, we came to strongly believe that very little organic traffic flows through the exchange. For the headache, it's not worth the time or the effort; it's very difficult to market or sell your products.

We've also hit a number of simple UX bugs where the listing just doesn't accurately reflect the attributes of your product. For instance, for an S3 bucket, "+metered costs" was displayed to one of our buyers as part of the price, which of course caused a lot of confusion. Another UX bug misrepresented the historical revisions that were available in our product sets. It's difficult to know what else in the UX is broken or incongruent.

We did have a purchase, but the seller is completely at the buyer's whim: a buyer can provide fake emails, fake company names, and fake use cases, because AWS hasn't thought through simple workflows. Why even have subscription confirmation if literally everything about a subscription request can be faked? As a result, we're now in an endless, time-wasting, unhelpful thread with AWS support trying to get payment; they're confused about what to do, and we feel completely lost.

Lastly, the AWS team has been abysmal in addressing our concerns. Conversations with them produce a laundry list of excuses for why simple functionality is so hard (including just having accurate documentation). The objective of our call was to come away seeing that ADX is a well-resourced and well-visioned product; instead it was very frustrating and unproductive. Ultimately, they couldn't clearly articulate who they built the exchange for, on either the seller side or the buyer side.

Don't waste your time. This is at best a very foggy experiment. Look at other sellers: they have a lot of free pages to try to get attention, but then use smart tactics to divert transactions away from ADX. Ultimately, that's the smart move. Why give an 8-10% cut to a product that is basically bare-bones infrastructure?
The DB2 database is a solid option for our school. We have been on this journey for 3-4 years now, so we are still adapting to what it can do. We will renew our use of DB2 because we don't see a major need to change. Also, changing a main database in a school environment is a major project, so we'll avoid that if possible.
You have to be well versed in the technology, not only through the GUI but also from the command line, to successfully use this software to its fullest.
I have never had DB2 go down unexpectedly. It just works solidly every day. When I look at the logs, I sometimes see that DB2 figured out an index was needed and, instead of waiting for me to do it, automatically created the index for me. At my current company, we have had zero issues for the past 8 years. We have upgraded the server 3 times, upgrading the OS each time, and the only thing we saw was that DB2 got better and faster. It is simply amazing.
Performance is exceptional if you take care to maintain the database. It is a very powerful tool and at the same time very easy to use. In our installation, we run a DB machine on the mainframe, with access to the database through ODBC connectors directly from the branch servers, giving a fabulous end-user experience.
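The branch-server setup described above typically uses a DSN-less ODBC connection string for the IBM Db2 CLI/ODBC driver. The sketch below only assembles such a string; the hostname, database name, and credentials are illustrative placeholders, not values from this review.

```python
# Hedged sketch: build a DSN-less ODBC connection string for IBM Db2.
# All host/database/credential values below are made-up placeholders.
def db2_connection_string(host, port, database, uid, pwd):
    """Assemble the key=value;... string the IBM Db2 ODBC/CLI driver expects."""
    parts = {
        "DRIVER": "{IBM DB2 ODBC DRIVER}",
        "DATABASE": database,
        "HOSTNAME": host,
        "PORT": str(port),
        "PROTOCOL": "TCPIP",
        "UID": uid,
        "PWD": pwd,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = db2_connection_string(
    "mainframe.example.org", 50000, "BRANCHDB", "appuser", "secret"
)
print(conn_str)
```

A branch server would then pass `conn_str` to something like `pyodbc.connect(conn_str)`; the pyodbc module and an installed Db2 CLI driver are assumed and not shown here.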
Easily the best product support team. :) Whenever we have questions, they answer them in a timely manner, and we like how they go above and beyond to help.
DB2 was more scalable and more easily configurable than the other products we evaluated and shortlisted, in terms of both functionality and pricing. IBM also gave a good on-premises demo and provided a sandbox for us to test and play with the product, and DB2 at that time came out better than the other similar products.
Since I use DB2 only to support my IzPCA activities, my knowledge here is somewhat limited.
Anyway, from what I was able to understand, DB2 is extremely scalable.
Maybe the information below can serve as an example of that scalability.
The customer has a huge mainframe environment: 13 z15 CECs, around 80 LPARs, and maybe more than 50 Sysplexes (I am not totally sure about this last figure).
Today we have 7 IzPCA databases, each one in a distinct Sysplex.
Plans are underway to end up with a small LPAR running only one DB2 subsystem and only one database, transmit the data from many other LPARs to it, and process all the data in that single database.
The IzPCA collect process (read the received data, manipulate it, and insert rows into the tables) is today a huge process, demanding many elapsed hours and lots of CPU.
Almost 100% of the tables are PBR type and the insert jobs run in parallel, but in 4 of the 7 databases it is a really huge and long process.
Combining the INSERT loads from the 7 databases into only one would be impossible.
But IzPCA recently introduced a new feature called "Continuous Collector".
With that feature, small amounts of data are transmitted to the central LPAR every 5 minutes (or even less) and processed immediately, in a short period of time and with small use of CPU, instead of one or two transmissions per day of very large amounts of data, with the corresponding collect jobs occurring only once or twice a day, with long elapsed times and huge consumption of CPU.
I suspect the total CPU seconds consumed will be more or less the same in both cases, but with the new method it occurs in small bursts many times a day!
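The tradeoff the reviewer describes — the same total collect work done either as one or two huge daily jobs or as many small five-minute bursts — can be sketched numerically. All figures below are invented for illustration (they are not IzPCA measurements); only the shape of the comparison matters.

```python
# Illustrative numbers only; assumed, not measured from IzPCA.
rows_per_day = 288_000_000        # total rows collected per day (assumption)
cpu_sec_per_million_rows = 2.0    # assumed constant CPU cost per unit of work

# Old style: two large daily collect jobs.
daily_jobs = 2
rows_per_daily_job = rows_per_day // daily_jobs
cpu_per_daily_job = rows_per_daily_job / 1_000_000 * cpu_sec_per_million_rows

# Continuous Collector style: one small job every 5 minutes.
bursts_per_day = 24 * 60 // 5     # 288 bursts per day
rows_per_burst = rows_per_day // bursts_per_day
cpu_per_burst = rows_per_burst / 1_000_000 * cpu_sec_per_million_rows

total_old = cpu_per_daily_job * daily_jobs
total_new = cpu_per_burst * bursts_per_day
print(f"old: {daily_jobs} jobs  x {cpu_per_daily_job:.0f}s CPU = {total_old:.0f}s/day")
print(f"new: {bursts_per_day} bursts x {cpu_per_burst:.0f}s CPU = {total_new:.0f}s/day")
# Total daily CPU is identical under this linear-cost assumption;
# it is just spread into many small bursts instead of two big spikes.
```

This matches the reviewer's suspicion: if CPU cost scales roughly linearly with rows processed, the Continuous Collector changes when the CPU is spent, not how much is spent overall.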