Microsoft's Azure Data Factory is a service built for all data integration needs and skill levels. It is designed to let users easily construct ETL and ELT processes code-free in an intuitive visual environment, or write their own code. Users can visually integrate data sources with more than 80 natively built, maintenance-free connectors at no added cost, and focus on the data while the serverless integration service does the rest.
IBM DataStage
Score 7.7 out of 10
IBM® DataStage® is a data integration tool that helps users to design, develop and run jobs that move and transform data. At its core, the DataStage tool supports extract, transform and load (ETL) and extract, load and transform (ELT) patterns. A basic version of the software is available for on-premises deployment, and the cloud-based DataStage for IBM Cloud Pak® for Data offers automated integration capabilities in a hybrid or multicloud environment.
Easy integration with other Microsoft software, high processing speed, very flexible cost, and the high level of security across the Microsoft Azure products and services stack up well against other similar products.
The best scenario is the ETL process. The flexibility and connectivity are outstanding. For our environment, SAP data connectivity with Azure Data Factory offers very limited features compared to SAP Datasphere. Due to the tool's limited modelling capabilities, we use Databricks for data modelling and cleaning. The use of multiple tools could have been avoided if ADF had modelling capabilities.
DataStage is somewhat outdated as an ETL tool, which I guess is what makes it lag a bit behind its competitors. It can be used for data processing, sure, but its performance seems slow given the server it is running on. I wouldn't depend on this application to handle a lot of mission-critical banking and business data.
Granularity of Errors: Sometimes Azure Data Factory provides error messages that are too generic or vague, making it challenging to pinpoint the exact cause of a pipeline failure. Enhanced error messages with more actionable details would greatly assist users in debugging their pipelines.
Pipeline Design UI: In my experience, the visual interface for designing pipelines can become cluttered, especially when dealing with complex workflows or numerous activities. A more intuitive and scalable design interface would improve usability; features like zoom, better alignment tools, or grouping capabilities could make intricate designs easier to manage.
Native CDC Support: While Azure Data Factory does support incremental data loads, in my experience the setup can be somewhat manual and complex. Native and more straightforward support for Change Data Capture, especially from popular databases, would simplify capturing and processing only the changed data, making regular data updates more efficient.
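For context, here is a minimal sketch of the kind of watermark-based incremental load the reviewer describes having to wire up manually. It assumes SQL sources reachable via pyodbc; the connection strings, table names (src.Orders, etl.Watermark, staging.Orders), and column names are hypothetical placeholders, not anything prescribed by Azure Data Factory.

```python
# Minimal watermark-based incremental load sketch (hypothetical tables/columns).
# Assumes a source table src.Orders with a LastModified column and a sink
# table etl.Watermark that stores the last value successfully loaded.
import pyodbc

SOURCE_CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src;DATABASE=sales;Trusted_Connection=yes"
SINK_CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwh;DATABASE=staging;Trusted_Connection=yes"

def incremental_copy():
    with pyodbc.connect(SOURCE_CONN) as src, pyodbc.connect(SINK_CONN) as sink:
        sink_cur = sink.cursor()

        # 1. Read the high-water mark recorded by the previous run.
        sink_cur.execute("SELECT LastLoaded FROM etl.Watermark WHERE TableName = 'Orders'")
        last_loaded = sink_cur.fetchone()[0]

        # 2. Pull only the rows changed since that mark.
        src_cur = src.cursor()
        src_cur.execute(
            "SELECT OrderId, Amount, LastModified FROM src.Orders WHERE LastModified > ?",
            last_loaded,
        )
        rows = src_cur.fetchall()

        # 3. Land the delta and advance the watermark before committing the sink.
        if rows:
            sink_cur.executemany(
                "INSERT INTO staging.Orders (OrderId, Amount, LastModified) VALUES (?, ?, ?)",
                [tuple(r) for r in rows],
            )
            new_mark = max(r.LastModified for r in rows)
            sink_cur.execute(
                "UPDATE etl.Watermark SET LastLoaded = ? WHERE TableName = 'Orders'",
                new_mark,
            )
        sink.commit()

if __name__ == "__main__":
    incremental_copy()
```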
Technical support is a key area IBM should improve for this product. Sometimes our case is assigned to a support engineer who has no idea about the product or services.
Provide custom reports for DataStage jobs and performance, such as job history reports, warning messages, or error messages.
Make it fully compatible with Oracle so that users can directly use Oracle ODBC drivers instead of the DataDirect driver. The same goes for SQL Server.
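As a rough point of reference for what the reviewer is asking for, this is what connecting through the vendors' own ODBC drivers looks like outside DataStage. The sketch is not DataStage-specific; driver names vary by installation (pyodbc.drivers() lists what is installed), and the connection details shown are placeholders.

```python
# Minimal sketch of connecting through vendor ODBC drivers directly,
# assuming the Oracle and Microsoft ODBC drivers are installed locally.
# Driver names and credentials are illustrative placeholders.
import pyodbc

# Oracle via its own ODBC driver (exact driver name depends on the install).
oracle = pyodbc.connect(
    "DRIVER={Oracle 19 ODBC driver};DBQ=ORCLPDB1;UID=etl_user;PWD=secret"
)

# SQL Server via Microsoft's ODBC driver.
sqlserver = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwh;DATABASE=staging;UID=etl_user;PWD=secret"
)

# Confirm which driver actually served each connection.
for name, conn in (("Oracle", oracle), ("SQL Server", sqlserver)):
    print(name, "connected via driver:", conn.getinfo(pyodbc.SQL_DRIVER_NAME))
    conn.close()
```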
So far the product has performed as expected. We were noticing some performance issues, but they were largely Synapse-related, which has led to a shift from Synapse to Databricks. Overall this has delayed our analytics platform. Once Databricks becomes fully operational, Azure Data Factory will be critical to our environment and future success.
Because it is robust and continuously improved. DataStage is one of the most used and recognized tools in the market. Large companies implemented it in the first instance to develop their data warehouses, but having found the advantages it offers, they have used it for other types of projects as well, such as migrations, application feeds, etc.
It can load thousands of records in seconds, but in the parallel version you need to understand how to partition the data. If you use the partitioning algorithms incorrectly, or misuse the data-parsing functionality it provides, performance can drop drastically, even with few records. It is necessary to have experienced people who can determine which algorithm to use and understand why.
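To illustrate the reviewer's point about partitioning choices, here is a small Python sketch with made-up data showing how hash-partitioning on a low-cardinality key skews the work across parallel partitions, while round-robin keeps it even; the data, key, and partition count are invented for the example and are not tied to DataStage.

```python
# Illustration of partition skew: hash partitioning on a low-cardinality key
# piles most rows onto a few partitions, while round-robin spreads them evenly.
from collections import Counter

N_PARTITIONS = 4
# 10,000 rows whose partition key takes only two values ("US" dominates).
rows = [{"id": i, "country": "US" if i % 10 else "CA"} for i in range(10_000)]

def hash_partition(rows, key):
    counts = Counter(hash(r[key]) % N_PARTITIONS for r in rows)
    return [counts.get(p, 0) for p in range(N_PARTITIONS)]

def round_robin_partition(rows):
    counts = Counter(i % N_PARTITIONS for i, _ in enumerate(rows))
    return [counts.get(p, 0) for p in range(N_PARTITIONS)]

print("hash on 'country':", hash_partition(rows, "country"))  # e.g. [9000, 1000, 0, 0] - skewed
print("round-robin:      ", round_robin_partition(rows))       # [2500, 2500, 2500, 2500] - even
```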
We have not had much need to engage with Microsoft on Azure Data Factory, but they have been responsive and helpful when needed. That said, we have not had a major emergency or outage requiring their intervention. The score of seven reflects that they have done well so far but have not yet proved out their support for a significant issue.
IBM offers different levels of support, but in my experience being an IBM shop helps to get direct support from more knowledgeable IBM technicians. I am not sure of the cost of this kind of support, but I know there is also general support, and community blogs and websites on the Internet make it easy to troubleshoot issues whenever the need arises.
Azure Data Factory helps us automate and schedule jobs as per customer demand, triggering ETL runs when the need arises. Anyone can define a workflow with the Azure Data Factory UI designer tool and easily test the system. It also helped us automate the same workflows with programming languages like Python or automation tools like Ansible. Numerous connectivity options, whether a database or a storage account, help us move data to the cloud or to on-premises systems.
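As a rough illustration of the Python automation the reviewer mentions, here is a minimal sketch using the azure-identity and azure-mgmt-datafactory packages to trigger a pipeline run on demand. The subscription ID, resource group, factory, pipeline, and parameter names are placeholders, not values from the reviewer's environment.

```python
# Minimal sketch: trigger an Azure Data Factory pipeline run from Python.
# Requires: pip install azure-identity azure-mgmt-datafactory
# Subscription, resource group, factory, and pipeline names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-data-platform"
FACTORY_NAME = "adf-prod"
PIPELINE_NAME = "pl_daily_etl"

def trigger_pipeline():
    credential = DefaultAzureCredential()  # env vars, managed identity, or az login
    client = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

    # Kick off the pipeline, optionally overriding pipeline parameters.
    run = client.pipelines.create_run(
        RESOURCE_GROUP,
        FACTORY_NAME,
        PIPELINE_NAME,
        parameters={"load_date": "2024-01-01"},
    )
    print("Started pipeline run:", run.run_id)

    # Check the run status once (a real script would poll until completion).
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    print("Current status:", status.status)

if __name__ == "__main__":
    trigger_pipeline()
```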
With effective capabilities, features that are easy to work with, cloud services automation, and the ability to produce accurate data analytics easily, this IBM platform is reliable and makes document management easy. The platform's features provide excellent big data management and make it easy to deliver accurate data analytics.
It's hard to say at this point; it delivers, but not quite as I expected. It takes a lot of resources (manpower, financial) to manage and sort this out.
Definitely, I don’t have the exact numbers, but given the data it processes, it is A LOT. So props to the developer of this application.
Again, based on my experience, I'd choose another ETL app if there were one that's more user-friendly.