Denodo is the eponymous data integration platform from Denodo Technologies, a global company headquartered in Silicon Valley.
IBM DataStage
Score 7.9 out of 10
IBM® DataStage® is a data integration tool that helps users to design, develop and run jobs that move and transform data. At its core, the DataStage tool supports extract, transform and load (ETL) and extract, load and transform (ELT) patterns. A basic version of the software is available for on-premises deployment, and the cloud-based DataStage for IBM Cloud Pak® for Data offers automated integration capabilities in a hybrid or multicloud environment.
Pricing

Editions & Modules: no answers on this topic for either product.

Pricing Offerings                           Denodo          IBM DataStage
Free Trial                                  No              Yes
Free/Freemium Version                       No              No
Premium Consulting/Integration Services     No              No
Entry-level Setup Fee                       No setup fee    No setup fee
Additional Details                          —               —
Features
Data Source Connection
Comparison of Data Source Connection features of Denodo and IBM DataStage

Overall: Denodo not rated; IBM DataStage 9.5 (10 Ratings), 13% above category average
Connect to traditional data sources: Denodo 0 Ratings; IBM DataStage 10.0 (10 Ratings)
Connect to Big Data and NoSQL: Denodo 0 Ratings; IBM DataStage 9.0 (9 Ratings)
Data Transformations
Comparison of Data Transformations features of Denodo and IBM DataStage

Overall: Denodo not rated; IBM DataStage 8.0 (10 Ratings), 3% below category average
Simple transformations: Denodo 0 Ratings; IBM DataStage 8.0 (10 Ratings)
Complex transformations: Denodo 0 Ratings; IBM DataStage 8.0 (10 Ratings)
Data Modeling
Comparison of Data Modeling features of Denodo and IBM DataStage

Overall: Denodo not rated; IBM DataStage 6.3 (10 Ratings), 23% below category average
Data model creation: Denodo 0 Ratings; IBM DataStage 5.0 (7 Ratings)
Metadata management: Denodo 0 Ratings; IBM DataStage 5.0 (9 Ratings)
Business rules and workflow: Denodo 0 Ratings; IBM DataStage 6.0 (9 Ratings)
Collaboration: Denodo 0 Ratings; IBM DataStage 6.0 (10 Ratings)
Testing and debugging: Denodo 0 Ratings; IBM DataStage 6.1 (10 Ratings)
Data Governance
Comparison of Data Governance features of Denodo and IBM DataStage
Denodo allows us to create and combine new views into a virtual repository and APIs without a single line of code. It is excellent because it can present sources to downstream consumers in a view format, for example by flattening a JSON file. Reading from or connecting to various sources and displaying a tabular view is an excellent feature. The product's technical data catalog is well-organized.
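The JSON-flattening idea is easy to illustrate outside of Denodo itself. Below is a minimal Python sketch using pandas; the payload and field names are invented, and this shows only the concept of turning nested JSON into a tabular view, not Denodo's own VQL.

    import pandas as pd

    # Hypothetical nested payload from a REST source.
    payload = {
        "orders": [
            {"id": 1, "customer": {"name": "Acme", "region": "EU"},
             "lines": [{"sku": "A-1", "qty": 2}, {"sku": "B-9", "qty": 1}]},
            {"id": 2, "customer": {"name": "Globex", "region": "US"},
             "lines": [{"sku": "A-1", "qty": 5}]},
        ]
    }

    # One row per order line, with the parent order and customer fields
    # repeated alongside it: the tabular view a downstream consumer sees.
    flat = pd.json_normalize(
        payload["orders"],
        record_path="lines",
        meta=["id", ["customer", "name"], ["customer", "region"]],
    )
    print(flat)  # columns: sku, qty, id, customer.name, customer.region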
An excellent cloud data mapping tool: it makes it easy to create multiple project data analytics in real time, and report distribution via this IBM product is excellent. It is an easy tool for data visualization, and the integration is effective and helpful for migrating huge amounts of data across other platforms and for gathering insights from different websites.
Caching, though I am sure it has been improved by now: there were times when we expected the cache to have been refreshed but it was stale.
Schema generation of endpoints from the API response was sometimes incomplete, as not all API calls returned all the fields. It would be good to have the ability to load the schema itself (XSD/JSON/SOAP XML, etc.); a sketch of the underlying sampling problem follows these points.
Denodo-exposed web services were at a preliminary stage when we used them; I'm sure they have been improved by now.
Export/import deployment, while helpful, had unexpected issues that surfaced no errors during deployment and were only identified during testing: some views were not created properly and did not work. If something works in the environment it was exported from, it should work in the environment it is imported into.
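The schema-generation issue above is generic to sampling-based inference: any single API response can miss fields that only appear in other responses, which is why loading an explicit schema (XSD/JSON Schema) is the safer route. A minimal Python sketch of the workaround of unioning fields across several sampled responses (the payloads here are invented for illustration):

    import json

    # Hypothetical sampled responses; each call returns a different subset of fields.
    samples = [
        '{"id": 1, "name": "a"}',
        '{"id": 2, "name": "b", "email": "b@example.com"}',
        '{"id": 3, "phone": "555-0100"}',
    ]

    def union_fields(raw_responses):
        """Collect every top-level field seen across all sampled responses."""
        fields = {}
        for raw in raw_responses:
            for key, value in json.loads(raw).items():
                # Remember the type of the first value seen for each field.
                fields.setdefault(key, type(value).__name__)
        return fields

    print(union_fields(samples))
    # {'id': 'int', 'name': 'str', 'email': 'str', 'phone': 'str'}

Even this only captures fields present in the sample; a supplied schema remains the only complete source of truth.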
Because it is robust and continuously improved. DataStage is one of the most used and recognized tools in the market. Large companies have implemented it in the first instance to develop their data warehouse, but on finding the advantages it offers, they have used it for other types of projects, such as migrations and application feeds.
Denodo is a tool for rapidly mashing data sources together and creating meaningful datasets. It does have its downfalls, though. When you create larger, more complex datasets, you will most likely need to cache them, regardless of how well your joins are set up. Since data virtualization pulls data from multiple environments, you are taxing the corporate network, so you need to be conscious of how much data you send through the network and truly understand how and when to join datasets.
It can load thousands of records in seconds, but in the parallel version you need to understand how to partition the data. If you choose the wrong partitioning algorithm, or misuse the data-parsing functionality it provides, performance can fall drastically, even with few records. You need experienced people who can determine which algorithm to use and understand why.
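The partitioning caveat generalizes beyond DataStage: for a parallel join, rows that share a join key must land in the same partition, which key-based (hash) partitioning guarantees and round-robin does not. A rough Python illustration of the concept (not DataStage's engine, and the data is made up):

    from collections import defaultdict

    def hash_partition(rows, key, n_partitions):
        """Route each row to a partition by hashing its join key, so matching
        keys from both inputs meet on the same partition."""
        parts = defaultdict(list)
        for row in rows:
            parts[hash(row[key]) % n_partitions].append(row)
        return parts

    orders    = [{"cust": "A", "amt": 10}, {"cust": "B", "amt": 7}, {"cust": "A", "amt": 3}]
    customers = [{"cust": "A", "region": "EU"}, {"cust": "B", "region": "US"}]

    left  = hash_partition(orders,    "cust", n_partitions=2)
    right = hash_partition(customers, "cust", n_partitions=2)

    # Each partition can now be joined independently, in parallel. With
    # round-robin partitioning, rows for "A" would scatter across partitions
    # and the join would miss matches unless the data were re-partitioned.
    for p in range(2):
        for o in left[p]:
            for c in right[p]:
                if o["cust"] == c["cust"]:
                    print({**o, **c})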
It's obvious, since they are both from the same vendor, which makes things easier and can get better rates for licensing. Also, the sales reps are very helpful in case of escalations and critical issues.