Microsoft's Azure Data Factory is a service built for all data integration needs and skill levels. It is designed to allow the user to easily construct ETL and ELT processes code-free within the intuitive visual environment, or write one's own code. Visually integrate data sources using more than 80 natively built and maintenance-free connectors at no added cost. Focus on data—the serverless integration service does the rest.
N/A
Informatica Cloud Data Quality
Score 6.7 out of 10
N/A
The vendor states that Informatica Data Quality empowers companies to take a holistic approach to managing data quality across the entire organization, and that with Informatica Data Quality, users can ensure the success of data-driven digital transformation initiatives and projects across users, data types, and scale, while also automating mission-critical tasks.
Informatica is a great product. However, given our Azure ecosystem and the cost advantage of the pay-as-you-go model, Azure Data Factory was our choice. It is also stronger on the data ingestion and orchestration side. For complex data transformation, we can consider technologies like …
The best scenario is for ETL processes. The flexibility and connectivity are outstanding. For our environment, SAP data connectivity with Azure Data Factory offers very limited features compared to SAP Data Sphere. Due to the tool's limited modelling capability, we use Databricks for data modelling and cleaning. The use of multiple tools could have been avoided if ADF had modelling capabilities.
For effective data collaboration and systematic verification of customer information, addresses, and other fields, Informatica Data Quality is a worthwhile application to consider. Informatica Data Quality also controls quality through a cleansing process, giving the company a clear picture through sound data profiling and reliable analytics. Finally, Informatica Data Quality allows simple navigation of content, with a dashboard that supports predictability.
The matching algorithms in IDQ are very powerful if you understand the different types they offer (e.g., Hamming Distance, Jaro, Bigram, etc.). We had to play around with them to see which best suited our own needs of identifying and eliminating duplicate customers. Setting up the whole process (e.g., creating the KeyGenerator Transformation, setting up the matching threshold, etc.) can be somewhat time-consuming and a challenge if you don't first standardize your data.
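To make the threshold idea above concrete, here is a minimal Python sketch. It is not IDQ itself (the Hamming/Jaro/Bigram comparisons are configured in the Developer tool); difflib's ratio() simply stands in for the similarity score, and the customer names and the 0.65 threshold are invented for illustration.

    # Stand-in for threshold-based duplicate matching. IDQ's match strategies
    # are configured in the Developer tool; difflib's ratio() plays the role
    # of the similarity score here.
    from difflib import SequenceMatcher
    from itertools import combinations

    customers = [
        "ACME Corporation",
        "Acme Corp.",
        "Globex Inc",
        "Globex Incorporated",
    ]

    MATCH_THRESHOLD = 0.65  # hypothetical value; in IDQ the threshold is tuned per match strategy

    def similarity(a, b):
        # Light standardization (case, whitespace) before comparing, echoing the
        # review's point that matching works poorly on unstandardized data.
        return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

    likely_duplicates = []
    for a, b in combinations(customers, 2):
        score = similarity(a, b)
        if score >= MATCH_THRESHOLD:
            likely_duplicates.append((a, b, round(score, 2)))

    print(likely_duplicates)  # pairs scoring above the threshold are flagged for review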
The integration with PowerCenter is great if you have both. You can export your mappings either directly to PowerCenter or to an XML file. The only downside is that some of the transformations are unique to IDQ, so you are not really able to edit them once in PowerCenter.
The standardizer transformation was key in helping us standardize our customer data (e.g., names, addresses, etc.). It was helpful because you create a reference table containing the standardized value and the associated unstandardized values. What was great was that if you used Informatica Analyst, a business analyst could log in and correct any of the values.
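The reference-table mechanism described above can be pictured with a short Python sketch. The lookup entries below are made up, and in IDQ the table itself is managed centrally (and correctable by analysts through Informatica Analyst) rather than hard-coded.

    # Toy version of a standardization reference table: each unstandardized
    # variant maps to the single standardized value. Entries are invented.
    REFERENCE_TABLE = {
        "st": "Street",
        "str": "Street",
        "ave": "Avenue",
        "av": "Avenue",
        "rd": "Road",
    }

    def standardize_token(token):
        # Fall back to the original token when it has no reference entry.
        return REFERENCE_TABLE.get(token.lower().strip(".,"), token)

    def standardize_address(address):
        return " ".join(standardize_token(t) for t in address.split())

    print(standardize_address("42 Main St."))        # -> 42 Main Street
    print(standardize_address("7 Fifth Ave Apt 3"))  # -> 7 Fifth Avenue Apt 3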
Granularity of Errors: Sometimes, Azure Data Factory provides error messages that are too generic or vague for us, making it challenging to pinpoint the exact cause of a pipeline failure. Enhanced error messages with more actionable details would greatly assist us in debugging our pipelines.
Pipeline Design UI: In my experience, the visual interface for designing pipelines, especially when dealing with complex workflows or numerous activities, can become cluttered. I think a more intuitive and scalable design interface would improve usability. In my opinion, features like zoom, better alignment tools, or grouping capabilities could make managing intricate designs more manageable.
Native Support: While Azure Data Factory does support incremental data loads, in my experience, the setup can be somewhat manual and complex. I think native and more straightforward support for Change Data Capture, especially from popular databases, would simplify the process of capturing and processing only the changed data, making regular data updates more efficient.
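For context on that last point, the "somewhat manual" incremental-load setup usually amounts to maintaining a watermark yourself. A rough Python/pyodbc sketch of that pattern follows; the watermark table, the LastModified column, and the connection string are assumptions for illustration, not a specific ADF feature.

    # Sketch of the manual watermark pattern behind incremental loads.
    # Table and column names (etl.watermark, dbo.Orders, LastModified) and the
    # DSN are hypothetical; ADF's copy-activity tutorials describe the same idea.
    import pyodbc

    conn = pyodbc.connect("DSN=source_db")  # placeholder connection
    cur = conn.cursor()

    # 1. Read the watermark recorded by the previous run.
    cur.execute(
        "SELECT watermark_value FROM etl.watermark WHERE table_name = ?",
        "dbo.Orders",
    )
    last_watermark = cur.fetchone()[0]

    # 2. Extract only the rows changed since that watermark.
    cur.execute("SELECT * FROM dbo.Orders WHERE LastModified > ?", last_watermark)
    changed_rows = cur.fetchall()

    # ... copy changed_rows to the destination here ...

    # 3. Advance the watermark only after the copy succeeds.
    cur.execute(
        "UPDATE etl.watermark SET watermark_value = "
        "(SELECT MAX(LastModified) FROM dbo.Orders) WHERE table_name = ?",
        "dbo.Orders",
    )
    conn.commit()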
As pointed out earlier, thanks to all the robust features IDQ has, our use of the product is successful and stable. IDQ is being used across multiple sources (from the CRM application and in batch mode). As this is an iterative process, we are looking to improve our system efficiency using IDQ.
So far the product has performed as expected. We noticed some performance issues, but they were largely Synapse-related. This led to a shift from Synapse to Databricks. Overall, this has delayed our analytics platform. Once Databricks becomes fully operational, Azure Data Factory will be critical to our environment and future success.
We have not needed to engage with Microsoft much on Azure Data Factory, but they have been responsive and helpful when needed. That said, we have not had a major emergency or outage requiring their intervention. The score of seven reflects that they have done well so far, but have not yet proven their support for a significant issue.
Azure Data Factory helps us automate and schedule jobs as per customer demands, triggering ETL when the need arises. Anyone can define the workflow with the Azure Data Factory UI designer tool and easily test the system. It also let us automate the same workflow with programming languages like Python or automation tools like Ansible. Numerous connectivity options, be it a database or a storage account, help us move data to the cloud or to on-premise systems.
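Since the review mentions driving the same workflow from Python, here is a short sketch using the azure-mgmt-datafactory SDK to trigger and monitor a pipeline run; the subscription, resource group, factory, pipeline, and parameter names are placeholders, not values from the review.

    # Triggering and polling an ADF pipeline run from Python with the
    # azure-mgmt-datafactory SDK. All names below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    credential = DefaultAzureCredential()
    adf = DataFactoryManagementClient(credential, "<subscription-id>")

    # Kick off the pipeline, optionally passing runtime parameters.
    run = adf.pipelines.create_run(
        resource_group_name="my-rg",
        factory_name="my-data-factory",
        pipeline_name="CopySalesData",
        parameters={"loadDate": "2024-01-01"},
    )

    # Check the run's status (Queued / InProgress / Succeeded / Failed).
    status = adf.pipeline_runs.get("my-rg", "my-data-factory", run.run_id)
    print(run.run_id, status.status)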
IDQ is used by a department at my organisation to ensure and enhance data quality. Usage started with address standardization and has since been taken to the next level of quality checking, where it fixes duplicates and junk characters and standardizes names, streets, and product descriptions. In the past we had issues mainly with duplicate customers and products, and these were affecting sales projections and estimates.