Azure Synapse Analytics is described as the evolution of Azure SQL Data Warehouse: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives users the freedom to query data at scale using either serverless or provisioned resources. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs.
Pricing: $4,700 per month for 5,000 Synapse Commit Units (SCUs)

Db2
Score 8.6 out of 10
Pricing: N/A
Db2 is a family of relational database software solutions offered by IBM. It includes the standard Db2 and Db2 Warehouse editions, deployable either on-cloud or on-premises.
It's well suited for large, fast-growing, and frequently changing data warehouses (e.g., in startups). It's also suited for companies that want a single, relatively easy-to-use, centralized cloud service for all their data needs. Larger, more structured organizations could still benefit from this service by using Synapse Dedicated SQL Pools, knowing that costs will be much higher than with other solutions. I think this product is not suited for smaller, simpler workloads (where an Azure SQL Database and a Data Factory could be enough) or for very large scenarios, where it may be better to build custom infrastructure.
I have primarily used it as the basis for a SIS, but I have also migrated more than a few systems from other database systems (FileMaker, MySQL, etc.) to DB2. DB2 does have a better structural approach than FileMaker, which allows for more data consistency, but this can also lead to an inflexibility that is sometimes counterintuitive when trying to accommodate the flexibility of the work environment, as schools tend to have an all-in-one approach.
Quick to return data. Queries in a SQL data warehouse architecture tend to return data much more quickly than in an OLTP setup, especially with columnar indexes (see the sketch after this list).
Ability to manage extremely large SQL tables. Our databases contain billions of records. This would be unwieldy without a proper SQL data warehouse.
Backup and replication. Because we're already using SQL, moving the data to a data warehouse makes it easier to manage, as our users are already familiar with SQL.
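To make the columnar-index point concrete, here is a minimal T-SQL sketch; the table and index names (dbo.FactOrders, cci_FactOrders) are hypothetical:

```sql
-- A clustered columnstore index stores the table column-by-column,
-- which compresses well and speeds up the large scans and aggregations
-- typical of data warehouse queries.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactOrders
    ON dbo.FactOrders;

-- An aggregation like this then reads only the columns it needs:
SELECT OrderDate, SUM(Amount) AS TotalAmount
FROM dbo.FactOrders
GROUP BY OrderDate;
```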
With Azure, it's always the same issue: too many moving parts doing similar things with no specialisation. ADF, Fabric Data Factory, and Synapse pipelines serve the same purpose. The same goes for Fabric Warehouse and Synapse SQL pools.
It could do better with serverless workloads, considering the competition from Databricks and Microsoft's own Fabric Warehouse.
Synapse pipelines is essentially a replica of Azure Data Factory, with no tight integration with Synapse and, surprisingly, with some ADF features missing. Warehouse integration could be improved with in-environment ETL tools.
The DB2 database is a solid option for our school. We have been on this journey for 3-4 years now, so we are still adapting to what it can do. We will renew our use of DB2 because we don't see a major need to change. Also, changing a main database in a school environment is a major project, so we'll avoid that if possible.
The data warehouse portion is very much like old-style on-prem SQL Server, so most SQL skills one has mastered carry over easily. Azure Data Factory has an easy drag-and-drop system which allows quick building of pipelines with minimal coding. The Spark portion is the only really complex part, but if there's an in-house Python expert, then the Spark portion is also quite usable.
You have to be well versed in the technology, not only through the GUI but also from the command line, to use this software to its fullest.
I have never had DB2 go down unexpectedly. It just works solidly every day. When I look at the logs, sometimes DB2 has figured out there was a need for an index; instead of waiting for me to do it, the database automatically created the index for me. At my current company, we have had zero issues for the past 8 years. We have upgraded the server 3 times, upgrading the OS each time, and the only thing we saw was that DB2 got better and faster. It is simply amazing.
The performance is exceptional if you take care to maintain the database. It is a very powerful tool and at the same time very easy to use. In our installation, we run a DB machine on the mainframe, with access to the database through ODBC connectors directly from the branch servers, with a fabulous end-user experience.
Microsoft does its best to support Synapse. More and more articles are being added to the documentation, providing more useful information on how best to utilize its features. The examples provided work well for basic knowledge, but more complex examples should be added to further assist in discovering the system's vast capabilities.
Easily the best product support team. :) Whenever we have questions, they have answered them in a timely manner, and we like how they go above and beyond to help.
In comparing Azure Synapse to Google BigQuery, the biggest highlight I'd like to bring forward is that Azure Synapse SQL leverages a scale-out architecture to distribute the computational processing of data across multiple nodes, whereas Google BigQuery only decouples computation from storage.
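To illustrate what that scale-out control looks like in practice, here is a minimal T-SQL sketch for a Synapse dedicated SQL pool; the table and column names (dbo.FactSales, CustomerKey) are hypothetical:

```sql
-- In a dedicated SQL pool, each table declares how its rows are spread
-- across the 60 underlying distributions, so large joins and
-- aggregations can run on many compute nodes in parallel.
CREATE TABLE dbo.FactSales
(
    SaleId      BIGINT        NOT NULL,
    CustomerKey INT           NOT NULL,
    Amount      DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerKey),  -- co-locate each customer's rows
    CLUSTERED COLUMNSTORE INDEX        -- default warehouse storage format
);
```

Choosing a hash column that is frequently joined on keeps related rows on the same node and avoids data movement between nodes at query time.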
DB2 was more scalable and more easily configurable than the other products we evaluated and shortlisted, in terms of both functionality and pricing. IBM also gave a good on-premises demo and provided us a sandbox to test out and play with the product, and DB2 at that time came out better than other similar products.
Since I use DB2 only to support my IzPCA activities, my knowledge here is somewhat limited.
Anyway, from what I was able to understand, DB2 is extremely scalable.
Maybe the information below could serve as an example of scalability.
The customer has a huge mainframe environment: 13 z15 CECs, around 80 LPARs, and maybe more than 50 Sysplexes (I am not totally sure about this last figure...).
Today we have 7 IzPCA databases, each one in a distinct Sysplex.
Plans are underway to end up with a small LPAR with only one DB2 subsystem and only one database, then transmit the data from many other LPARs and process all of it in that single database.
The IzPCA collect process (reading the received data, manipulating it, and inserting rows into the tables) is today a huge process, demanding many elapsed hours and lots of CPU.
Almost 100% of the tables are PBR type and the insert jobs run in parallel, but in 4 of the 7 databases it is still a really huge and long process.
Combining the INSERT loads from the 7 databases into only one would seem impossible...
But IzPCA recently introduced a new feature called "Continuous Collector".
With that feature, small amounts of data are transmitted to the central LPAR every 5 minutes (or even less) and processed immediately, in a short period of time and with small use of CPU, instead of one or two transmissions per day of very large amounts of data, with the corresponding collect jobs running only once or twice a day, with long elapsed times and huge consumption of CPU.
I suspect the total CPU seconds consumed will be more or less the same in both cases, but with the new method it will occur in small bursts many times a day!
Licensing fees are replaced by the Azure subscription fee. No big savings there.
More visibility into Azure usage and cost.
It can be used as hot storage, with old data archived to a data lake. Real-time data integration is possible via external tables and Microsoft Power BI (see the sketch below).
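As a rough illustration of querying archived data in the lake from Synapse, here is a minimal sketch using the serverless SQL pool's OPENROWSET; the storage URL and folder layout are hypothetical:

```sql
-- Query Parquet files archived in the data lake directly,
-- without loading them into the warehouse first.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/archive/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS archived_sales;
```

An external table or view defined over the same path gives the archived data a stable name that tools like Power BI can query alongside the hot data.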