AWS Glue is a managed extract, transform, and load (ETL) service designed to make it easy for customers to prepare and load data for analytics. Users can create and run an ETL job in the AWS Management Console. They point AWS Glue at data stored on AWS, and Glue discovers the data and stores the associated metadata (e.g., table definitions and schemas) in the AWS Glue Data Catalog. Once cataloged, the data is immediately searchable, queryable, and available for ETL; a short crawler sketch follows the product summaries below.
AWS Glue pricing: $0.44 per DPU-Hour, billed per second with a 1-minute minimum.

Informatica PowerCenter
Score 8.0 out of 10
Informatica PowerCenter is a metadata driven data integration technology designed to form the foundation for data integration initiatives, including analytics and data warehousing, application migration, or consolidation and data governance.
Pricing: N/A
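As a rough illustration of the workflow described in the AWS Glue summary above (point Glue at data in S3 and let a crawler populate the Data Catalog), the following boto3 sketch shows the idea. The crawler name, IAM role, database name, and S3 path are hypothetical placeholders, not values taken from this comparison.

```python
# Minimal sketch: register S3 data in the AWS Glue Data Catalog with a crawler.
# All names, paths, and the IAM role below are hypothetical placeholders.
import boto3

glue = boto3.client("glue")

# Create a crawler pointed at an S3 prefix; Glue infers the schema and
# writes table definitions into the named catalog database.
glue.create_crawler(
    Name="sales-data-crawler",                               # hypothetical crawler name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",   # hypothetical IAM role
    DatabaseName="sales_catalog",                            # catalog database to populate
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/sales/"}]},
)

# Run the crawler; once it finishes, the tables are searchable, queryable,
# and available to Glue ETL jobs.
glue.start_crawler(Name="sales-data-crawler")
```

The same setup can be done entirely in the AWS Management Console; the API calls simply make the sequence explicit.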
Pricing

Editions & Modules
AWS Glue: $0.44 per DPU-Hour, billed per second with a 1-minute minimum
Informatica PowerCenter: No answers on this topic

Pricing Offerings (AWS Glue / Informatica PowerCenter)
Free Trial: No / No
Free/Freemium Version: No / No
Premium Consulting/Integration Services: No / No
Entry-level Setup Fee: No setup fee / No setup fee
Additional Details: none provided for either product
Features

Data Source Connection
Comparison of Data Source Connection features of AWS Glue and Informatica PowerCenter
Overall: AWS Glue not rated / Informatica PowerCenter 8.5 (18 Ratings), 2% above category average
Connect to traditional data sources: AWS Glue 0 Ratings / Informatica PowerCenter 9.0 (18 Ratings)
Connect to Big Data and NoSQL: AWS Glue 0 Ratings / Informatica PowerCenter 8.0 (14 Ratings)
Data Transformations
Comparison of Data Transformations features of AWS Glue and Informatica PowerCenter
Overall: AWS Glue not rated / Informatica PowerCenter 7.5 (18 Ratings), 9% below category average
Simple transformations: AWS Glue 0 Ratings / Informatica PowerCenter 8.0 (18 Ratings)
Complex transformations: AWS Glue 0 Ratings / Informatica PowerCenter 7.0 (18 Ratings)
Data Modeling
Comparison of Data Modeling features of AWS Glue and Informatica PowerCenter
Overall: AWS Glue not rated / Informatica PowerCenter 8.2 (18 Ratings), 3% above category average
Data model creation: AWS Glue 0 Ratings / Informatica PowerCenter 9.0 (15 Ratings)
Metadata management: AWS Glue 0 Ratings / Informatica PowerCenter 8.0 (16 Ratings)
Business rules and workflow: AWS Glue 0 Ratings / Informatica PowerCenter 9.0 (18 Ratings)
Collaboration: AWS Glue 0 Ratings / Informatica PowerCenter 6.1 (16 Ratings)
Testing and debugging: AWS Glue 0 Ratings / Informatica PowerCenter 9.0 (17 Ratings)
Data Governance
Comparison of Data Governance features of AWS Glue and Informatica PowerCenter
One of AWS Glue's most notable features for creating and transforming data is its Data Catalog. Beyond that, its support, scheduling, and automated schema recognition make it superior to its competitors. It also integrates seamlessly with other AWS tools. The main restriction is integration with systems outside the AWS environment: it works flawlessly with existing AWS services but not with other products. Another potential restriction that comes to mind is that Glue runs on Spark, which means the engineer needs to be conversant with it.
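Since the reviewer points out that Glue runs on Spark and relies on the Data Catalog, here is a minimal sketch of what a Glue ETL job script typically looks like. The database, table, column names, and output path are assumptions for illustration only, not part of the review.

```python
# Minimal sketch of a Glue ETL job script (PySpark).
# Database, table, columns, and output path are hypothetical placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table that a crawler registered in the Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_catalog", table_name="raw_sales"
)

# A simple transformation: rename and retype columns.
cleaned = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write the result back to S3 as Parquet for downstream consumers.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/sales/"},
    format="parquet",
)
job.commit()
```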
1. Scenarios with few data sources are not recommended (very poor ROI); the solution is for medium-to-large enterprises with many data sources and users.
2. Banking and finance environments, to integrate different data from trading, regulatory reports, decision makers, fraud, and financial crime, because in this kind of scenario data quality is the foundation of the business.
3. Application development and test departments in enterprises, because you can design environments, outside of the production systems, in which to develop and test new APIs or updates.
It is extremely fast, easy, and self-intuitive. Though it is a suite of services, it takes relatively little time to get control over it.
As it is a managed service, one need not take care of many underlying details. The identification of data schemas, code generation, customization, and orchestration of the different job components allow developers to focus on the core business problem without worrying about infrastructure issues.
It is a pay-as-you-go service, so there is no need to provision any capacity in advance, which makes scheduling much easier.
Informatica Powercenter is an innovative software that works with ETL-type data integration. Connectivity to almost all the database systems.
Great documentation and customer support.
It has a various solution to address data quality issues. data masking, data virtualization. It has various supporting tools or MDM, IDQ, Analyst, BigData which can be used to analyze data and correct it.
There are too many ways to perform the same or similar functions, which makes it challenging to trace what a workflow is doing and at which point (e.g., sessions can be designed as static or reusable, and overrides can occur at the session, the workflow, or both, which can be counterproductive and confusing when troubleshooting).
The power of structured design is a double-edged sword, and simple tasks for a POC can become cumbersome. For example, if you want to move some data to test a process, you first have to create your sources by importing them, which means an ODBC connection or similar must be configured; you then have to develop your targets and all of the essential building blocks before actual development can begin. While on the subject of sources and targets, I think of a table definition as just that, and find it counterintuitive to have to design a table as both a source and a target and manage them as different objects. It would be more intuitive to have a single table definition whose source/target properties are determined by where you drag and drop it in the mapping.
There are no checkpoint or data-viewer-type functions without designing an entire mapping and workflow. If you simply want to run a job up to a point and check the throughput, an entire mapping needs to be completed; the workaround is to create a flat-file target.
We give it a 7 rating because of its usefulness in the AWS world: without worrying about infrastructure or service interaction, it works pretty much out of the box and gives us the flexibility to interact with and use those services. We take the source data in S3 from an external system, transform it using other AWS services, and put it back into S3 for other external services to consume.
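A rough sketch of the S3-in, transform, S3-out pattern this reviewer describes, under the assumption that the job runs in a Glue/PySpark environment; the bucket paths, field name, and formats are hypothetical, not details from the review.

```python
# Sketch of the S3-in / transform / S3-out pattern described above.
# Bucket paths, the field name, and formats are hypothetical placeholders.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read raw JSON dropped into S3 by an external system.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/incoming/"]},
    format="json",
)

# Keep only records that carry a customer id (hypothetical field).
valid = raw.filter(f=lambda row: row["customer_id"] is not None)

# Write the result back to S3 as Parquet for external consumers.
glue_context.write_dynamic_frame.from_options(
    frame=valid,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/outgoing/"},
    format="parquet",
)
```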
Positives:
- Multi-user development environment
- Speed of transformation
- Seamless integration with other Informatica products
Negatives:
- There should be fewer windows, to maintain developers' focus; you probably need two big monitors when you start development with Informatica PowerCenter
- Oracle analytical functions should be usable natively
- ELT support as well as ETL support
PowerCenter is robust and fast, and it does a great job meeting all needs, not just the most commercially vocal ones. In the hands of an expert power user, you can accomplish almost anything with your data. It is not for new or intermittent users; for them, the Cloud version is a better fit. Be prepared for costly connectors (priced differently for each source or destination you work with), and plan your projects carefully so you are not paying for connectors you no longer need or want.
Amazon responds in good time once a ticket has been generated, but we need to generate tickets frequently because very few code samples are available, and they do not cover all scenarios.
Informatica PowerCenter is the leader of the pack of ETL tools and has some great abilities that make it stand out from other ETL tools. It has been a great partner to its clients over a long time, so it is definitely dependable. For all the great things about Informatica, it carries a bit of technical burden that should be addressed to make it more nimble, reduce the learning curve for new developers, and provide better connectivity with visualization tools.
AWS Glue is a fully managed ETL service that automates many ETL tasks, making them easier to set up. AWS Glue simplifies ETL through a visual interface and automated code generation.
While Talend offers a much more comfortable interface to work with, Informatica's forte is performance. And on that front, Informatica Enterprise Data Integration certainly leaves Talend in the dust. For a more back-end-centric use case, Informatica is certainly the ETL tool of choice. On the other hand, if business users would be using the tool, then Talend would be the preferred tool.
The data pipeline automation capability of Informatica means that few resources are needed to pre-process the data that ultimately resides in a Data Warehouse. Once a workflow is implemented, manual intervention is not needed.
PowerCenter did require more resources and time for installation and configuration than was expected/planned for.
The lack of, or minimal, support for unstructured data means that newer sources of dynamic or changing data cannot easily be processed or transformed through PowerCenter workflows.