The vendor states that Informatica Data Quality empowers companies to take a holistic approach to managing data quality across the entire organization. With Informatica Data Quality, users can ensure the success of data-driven digital transformation initiatives and projects across users, data types, and scale, while also automating mission-critical tasks.
N/A
SAP Adaptive Server Enterprise (ASE)
Score 8.0 out of 10
N/A
SAP Adaptive Server Enterprise (ASE) is a transactional relational database, boasting fast, reliable online transaction processing (OLTP). SAP ASE is the company's transactional database within the SAP Business Technology Platform portfolio.
N/A
Teradata Vantage
Score 8.1 out of 10
N/A
Teradata Vantage is presented as a modern analytics cloud platform that unifies everything: data lakes, data warehouses, analytics, and new data sources and types. Supporting hybrid multi-cloud environments and priced for flexibility, Vantage is said to deliver unlimited intelligence to build the future of business.
Users can deploy Vantage on public clouds (such as AWS, Azure, and GCP), hybrid multi-cloud environments, on-premises with Teradata IntelliFlex, or on commodity hardware with VMware.
$4,800
per month
Pricing
Informatica Cloud Data Quality
SAP Adaptive Server Enterprise (ASE)
Teradata Vantage
Editions & Modules
No answers on this topic
No answers on this topic
Teradata VantageCloud Lake
from $4800
per month
Teradata VantageCloud Enterprise
from $9000
per month
Offerings
Pricing Offerings
Informatica Cloud Data Quality
SAP Adaptive Server Enterprise (ASE)
Teradata Vantage
Free Trial
No
No
Yes
Free/Freemium Version
No
No
No
Premium Consulting/Integration Services
No
No
Yes
Entry-level Setup Fee
No setup fee
No setup fee
Optional
Additional Details
—
—
—
Features
Informatica Cloud Data Quality
SAP Adaptive Server Enterprise (ASE)
Teradata Vantage
Data Quality
Comparison of Data Quality features of Product A and Product B
Informatica Cloud Data Quality
8.2
4 Ratings
3% below category average
SAP Adaptive Server Enterprise (ASE)
- (0 Ratings)
Teradata Vantage
- (0 Ratings)
Data source connectivity: 8.9 (4 Ratings) | - (0 Ratings) | - (0 Ratings)
Data profiling: 8.7 (4 Ratings) | - (0 Ratings) | - (0 Ratings)
Master data management (MDM) integration: 8.2 (4 Ratings) | - (0 Ratings) | - (0 Ratings)
Data element standardization: 7.1 (4 Ratings) | - (0 Ratings) | - (0 Ratings)
Match and merge: 7.9 (4 Ratings) | - (0 Ratings) | - (0 Ratings)
Address verification: 8.4 (4 Ratings) | - (0 Ratings) | - (0 Ratings)
Relational Databases
Comparison of Relational Databases features of Product A and Product B
Informatica Data Quality is a worthwhile application to consider for effective data collaboration and systematic verification of customer information, including addresses. It controls quality through a cleansing process, giving the company a professional view of candid data profiling and reputable analytics. Finally, Informatica Data Quality allows simple navigation of content, with a dashboard that supports predictability.
We use this for an inbuilt security management system, where it performs well in a scaled setup with a large volume of live data and high availability. The performance is also up to the mark for a large statement flow. From a DBA perspective, many parameters need to be fine-tuned for the specific environment's needs, which can cause overhead. Expertise is limited, and the learning curve for SAP ASE is steep.
Teradata Vantage is well suited for large-scale ETL pipelines like the ones we developed for anti-money-laundering risk matrices. It handles heavy joins, aggregations, and transformations on transactional data efficiently. We generate alert variables, adjust for inflation, and monitor establishments monthly with it, all integrated with Python and Control-M for centralised automation across the company. As for where it is less appropriate: heavy resource demands might slow down experimentation in iterative work.
The matching algorithms in IDQ are very powerful if you understand the different types that they offer (e.g., Hamming Distance, Jaro, Bigram, etc.). We had to play around with them to see which best suited our own needs of identifying and eliminating duplicate customers. Setting up the whole process (e.g., creating the Key Generator transformation, setting up the matching threshold, etc.) can be somewhat time-consuming and a challenge if you don't first standardize your data.
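To make the idea concrete, here is a minimal Python sketch of two of the similarity measures the reviewer names (a Hamming-style position match and a bigram/Dice overlap) combined with a match threshold, the way a duplicate-customer check might score candidate pairs. This is an illustration of the general technique only, not IDQ's actual implementation; the threshold value and helper names are invented.

```python
def hamming_similarity(a: str, b: str) -> float:
    """Fraction of character positions that agree.

    Strings are right-padded with spaces so lengths match, which is a
    simplifying assumption for this sketch (real tools handle length
    differences more carefully)."""
    n = max(len(a), len(b))
    if n == 0:
        return 1.0
    a, b = a.ljust(n), b.ljust(n)
    return sum(x == y for x, y in zip(a, b)) / n

def bigram_similarity(a: str, b: str) -> float:
    """Dice coefficient over the sets of character bigrams."""
    bigrams_a = {a[i:i + 2] for i in range(len(a) - 1)}
    bigrams_b = {b[i:i + 2] for i in range(len(b) - 1)}
    if not bigrams_a and not bigrams_b:
        return 1.0
    return 2 * len(bigrams_a & bigrams_b) / (len(bigrams_a) + len(bigrams_b))

def is_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """Flag a pair as a likely duplicate if either measure clears the
    threshold. The 0.8 threshold is arbitrary for this example; as the
    review notes, you tune it against your own data."""
    a, b = a.lower().strip(), b.lower().strip()
    return max(hamming_similarity(a, b), bigram_similarity(a, b)) >= threshold
```

The reviewer's point about standardizing first shows up directly here: "St." vs "Street" in an address drags both scores down, so cleansing before matching makes the threshold far easier to set.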
The integration with PowerCenter is great if you have both. You can either import your mappings directly to PowerCenter or to an XML file. The only downside is that some of the transformations are unique to IDQ, so you are not really able to edit them once in PowerCenter.
The standardizer transformation was key in helping us standardize our customer data (e.g., names, addresses, etc.). It was helpful because you create a reference table containing the standardized value and the associated unstandardized values. What was great was that if you used Informatica Analyst, a business analyst could log in and correct any of the values.
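The reference-table approach described above can be sketched in a few lines of Python: a table maps each standardized value to the raw variants seen in the data, and lookups replace known variants while passing unknown tokens through. The table entries below are invented examples, not IDQ content, and the real product manages these tables through its own UI.

```python
# Hypothetical reference table (example values invented for illustration):
# each standardized value maps to the unstandardized variants observed.
REFERENCE_TABLE = {
    "Street": ["St", "St.", "Str", "street"],
    "Avenue": ["Ave", "Ave.", "Av", "avenue"],
    "International Business Machines": ["IBM", "I.B.M.", "IBM Corp"],
}

# Invert the table into a case-insensitive lookup from variant to standard.
LOOKUP = {
    variant.lower(): standard
    for standard, variants in REFERENCE_TABLE.items()
    for variant in variants
}

def standardize(token: str) -> str:
    """Return the standardized form for a known variant,
    or the token unchanged when it is not in the table."""
    return LOOKUP.get(token.lower().strip(), token)
```

The appeal the reviewer describes is exactly this separation: the mapping lives in data rather than code, so an analyst can correct "St." to "Street" in the table without touching the pipeline.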
Teradata is an excellent option, but only for a massive amount of data warehousing or analysis. If your data is not that big, it could be a misfit for your company and cost you a lot. The associated cost is quite high compared to some other alternative RDBMS systems available on the market.
Migration of data from Teradata to other RDBMS systems is quite painful: the transition is not smooth, you need to follow many steps, and if even one of them fails, you almost have to start from the beginning.
Last but not least, the UI is pretty outdated and needs a revamp. Though it is simple, it should be presented in a much better way, and more advanced options should be available on the front page itself.
As pointed out earlier, thanks to all the robust features IDQ has, our use of the product is successful and stable. IDQ is being used with multiple sources (from a CRM application and in batch mode). As this is an iterative process, we are looking to improve our system efficiency using IDQ.
Teradata is a mature RDBMS that is expanding its functionality towards current cloud capabilities such as object storage and flexible compute scaling.
Well suited in the security domain, with high performance and low latency in the DBMS. From a DBA perspective, the dedicated monitoring tool (Cockpit) helps a lot in managing the database and in identifying bottlenecks during performance issues. It also lets us send custom alerts related to database activities.
Teradata Vantage allows us to create a scalable infrastructure to support our strategic initiatives. The dedicated compute power ensures reliable performance with isolated workloads and dedicated resources, optimizing workflows for faster, more efficient data transfers. The compute clusters support ETL processes and OSF’s developers and data science team with the flexibility to create self-service analytics, to spin up/down at any time, driving better performance and minimizing costs.
We had meetings at the beginning with the technical team to explain our requirements, and they put in a lot of effort to come up with a solution that would address all our needs. They implemented the software and also trained a few of our resources on it. We can still get in touch with them whenever we run into a roadblock, but that happens much less often now.
IDQ is used by a department at my organisation to ensure and enhance data quality. Usage started with address standardization and has since been taken to the next level of quality checking, where it fixes duplicates and junk characters and standardizes names, streets, and product descriptions. In the past we had issues mainly with duplicate customers and products, and these were affecting sales projections and estimates.
Teradata is way ahead of its competitors because of its unique features for ensuring data privacy, and data never gets corrupted even in the worst-case scenario. In most cases, data corruption is a major issue if left unaddressed, and it can lead to important data being wiped out that would ideally need to be stored for 3 years.
Moving to Teradata in the Cloud enabled a level of agility that previously didn't exist in the organization. It also enabled a level of analytic competency that was not achievable using other options on the aggressive timeline that was required. We didn't want to settle for reinventing the wheel when we had a super-tuned, performance-capable beast readily available in Teradata. Teradata lets us focus on our business rather than spending money and effort trying to design software or database foundation features on an open-source or lower-performance platform.