The Dataiku platform unifies data work from analytics to Generative AI. It supports enterprise analytics with visual, cloud-based tooling for data preparation, visualization, and workflow automation.
Informatica PowerCenter (legacy)
Score 9.1 out of 10
Informatica PowerCenter was a data integration technology designed to form the foundation for data integration initiatives, application migration, or analytics. It is a legacy product.
Posit
Score 10.0 out of 10
Posit, formerly RStudio, is a modular data science platform, combining open source and commercial products.
Pricing

Editions & Modules
Dataiku: Discover, Business, and Enterprise editions (contact sales team for pricing on each)
Informatica PowerCenter (legacy): No answers on this topic
Posit: No answers on this topic
Pricing Offerings
Free Trial: Dataiku - Yes; Informatica PowerCenter (legacy) - No; Posit - Yes
Free/Freemium Version: Dataiku - Yes; Informatica PowerCenter (legacy) - No; Posit - Yes
Premium Consulting/Integration Services: Dataiku - No; Informatica PowerCenter (legacy) - No; Posit - No
Entry-level Setup Fee: Dataiku - No setup fee; Informatica PowerCenter (legacy) - No setup fee; Posit - Optional
Additional Details: none provided for any product
Community Pulse

Considered Multiple Products
Dataiku, reviewed by a Verified User (Engineer) who chose Dataiku:
Open source availability is a critical factor given the licensing cost of other platforms and budget reasons. Secondly, the available features in the community version cover most of the use cases, making it comparable to, or even better than, commercial versions of other software. …
Dataiku is an awesome tool for data scientists. It really makes our lives easier. It is also really good for non-technical users to see and follow along with the process. I do think that people can fall into the trap of using it without any knowledge at all because so much is automated, but I don't think that is the fault of Dataiku.
1. Scenarios with poor data sources are not recommended (very bad ROI); the solution is for medium-to-large enterprises with many data sources and users. 2. Banking and finance environments, to integrate different data from trading, regulatory reporting, decision makers, fraud, and financial crime, because in this kind of scenario data quality is the foundation of the business. 3. Application development and testing departments in enterprises, because you can design environments outside of the production systems to develop and test new APIs or updates.
In my humble opinion, if you are working on something related to statistics, RStudio is your go-to tool. But if you are looking for something in machine learning, look out for Python. The beauty is that there are now packages with which you can write Python or SQL inside R. Cross-language functionality like this puts RStudio way ahead of its competition. A couple of chinks in RStudio's armor are very small and can be considered nagging just for the sake of argument. Other than it being completely based on a programming language, I couldn't find significant drawbacks to using RStudio. It is one of the best free software packages available on the market at present.
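On the reviewer's point about writing Python and SQL inside R, here is a minimal, hypothetical sketch using the reticulate and sqldf packages; the data frame and the Python snippet are invented for illustration and are not taken from the review.

library(reticulate)   # run Python code from an R session
library(sqldf)        # query R data frames with SQL syntax

# Run a short Python snippet and read its result back into R
py_run_string("squares = [x ** 2 for x in range(5)]")
print(py$squares)

# Query an ordinary R data frame using SQL
sales <- data.frame(region = c("EU", "US", "US"), amount = c(10, 20, 30))
sqldf("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")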
Informatica PowerCenter is innovative software for ETL-style data integration, with connectivity to almost all database systems.
Great documentation and customer support.
It has various solutions to address data quality issues, such as data masking and data virtualization. It also has various supporting tools, such as MDM, IDQ, Analyst, and Big Data, which can be used to analyze data and correct it.
The support is incredibly professional and helpful, and they often go out of their way to help me when something doesn't work.
The one-click publishing from RStudio Connect is absolutely amazing, and I really like the way that it deploys your exact package versions, because otherwise you can get into a terrible mess (see the sketch after this list).
Python doesn't feel quite as native as R at the moment, but I have definitely deployed things in R and Python that work beautifully, which is really nice indeed.
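For readers unfamiliar with the publishing workflow described above, the sketch below shows the scripted equivalent of one-click publishing with the rsconnect package. The directory name and server nickname are placeholders, and authentication setup is omitted; this is an illustration, not the reviewer's configuration.

library(rsconnect)

# One-time server registration (URL is a placeholder):
# addServer(url = "https://connect.example.com/", name = "my-connect")

# Deploy the report or app in ./my-report; rsconnect records the exact
# package versions in the bundle so the server can rebuild the same environment.
deployApp(appDir = "my-report", appName = "my-report", server = "my-connect")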
The integrated windows of frontend and backend in web applications make it cumbersome for the developer.
When dealing with multiple data flows, it becomes really confusing, though they have introduced a feature (Zones) to cater to this issue.
Bundling, exporting, and importing projects sometimes create issues related to the code environment. If the code environment is not available, we should at least be able to import the schema of the flow.
There are too many ways to perform the same or similar functions, which in turn makes it challenging to trace what a workflow is doing and at which point (e.g., sessions can be designed as static or reusable, and the override can occur at the session, the workflow, or both, which can be counterproductive and confusing when troubleshooting).
The power in structured design is a double-edged sword, and simple tasks for a POC can become cumbersome. For example, if you want to move some data to test a process, you first have to create your sources by importing them, which means an ODBC connection or similar will need to be configured; you then have to develop your targets and all of the essential building blocks before being able to begin actual development. While I am on sources and targets, I think of a table definition as just that, and find it counterintuitive to have to design a table as both a source and a target and manage them as different objects. It would be more intuitive to have a table definition whose source/target properties are defined by where you drag and drop it in the mapping.
There are no checkpoints or data-viewer-type functions without designing an entire mapping and workflow. If you would like to simply run a job up to a point and check the throughput, an entire mapping needs to be completed, and you would work around this by creating a flat-file target.
Python integration is newer and can still be rough, especially when using virtual environments (see the sketch after this list).
RStudio Connect pricing feels very department-focused, not quite priced from an enterprise perspective.
Some of the RStudio packages don't follow conventional development guidelines (API-breaking changes with minor version numbers), which can make supporting larger projects over longer timeframes difficult.
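On the virtual-environment point above, a common way to make reticulate's Python setup explicit is to pin the environment before any Python code runs; the path below is a placeholder used only for illustration.

library(reticulate)

# Point reticulate at a specific virtual environment up front;
# required = TRUE fails fast if the environment is missing.
use_virtualenv("~/.virtualenvs/analysis-env", required = TRUE)
py_config()   # confirm which Python interpreter was picked up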
There is no viable alternative right now. The toolset is good and the functionality is increasing with every release. It is backed by regular releases and ongoing development by the RStudio team. There is good engagement with RStudio directly when support is required. Also there's a strong and growing community of developers who provide additional support and sample code.
The user experience is very good. Everything feels intuitive and "flows" (sorry, excuse the pun) so nicely, and the customization level is also appropriate to the tool. Even as a newer data scientist, I found it easy to use, and the explanations and tutorials were very good. The documentation is also at a good level.
Positives: a multi-user development environment, speed of transformation, and seamless integration with other Informatica products. Negatives: there should be fewer windows so developers can maintain focus while working (you probably need two big monitors when you start developing with Informatica PowerCenter); Oracle analytical functions should be usable natively; and ELT support as well as ETL support would be welcome.
For someone who learns how to use the software and picks up on the "language" of R, it's very easy to use. For beginners, it can be hard and might require a course, as well as the appropriate statistical training to understand which packages to use and when.
RStudio is very available and cheap to use. It needs to be updated every once in a while, but the updates tend to be quick and they do not hinder my ability to make progress. I have not experienced any RStudio outages, and I have used the application quite a bit for a variety of statistical analyses.
PowerCenter is robust and fast, and it does a great job meeting all the needs, not just the most commercially vocal ones. In the hands of an expert power user, you can accomplish almost anything with your data. It is not for new or intermittent users; for that, the Cloud version is a better fit. Be prepared for costly connectors (priced differently for each source or destination you are working with), and plan your projects carefully so you are not paying for connectors you no longer need or want.
The open source user community is friendly, helpful, and responsive, at times even outdoing commercial software vendors. Documentation is also top notch, and usually resolves issues without the need for human interactions. Great product design, with a focus on user experience, also makes platform use intuitive, thus reducing the need for explicit support.
Informatica PowerCenter is the leader of the pack of ETL tools and has some great abilities that make it stand out from other ETL tools. It has been a great partner to its clients over a long time, so it's definitely dependable. With all the great things about Informatica, it carries a bit of technical burden that should be addressed to make it more nimble, reduce the learning curve for new developers, and provide better connectivity with visualization tools.
Since R is popular among statisticians, you can find lots of help from the data science and stats communities. If you need help with anything related to RStudio or R, google it or search on Stack Overflow, and you will likely find the solution you are looking for.
Anaconda is mainly used by professional data scientists who have deep knowledge of Python coding, mostly for building a new algorithm block or some optimization; the module is then integrated into the Dataiku pipeline/workflow. Dataiku, by contrast, can be used by other kinds of users as well.
While Talend offers a much more comfortable interface to work with, Informatica's forte is performance. And on that front, Informatica Enterprise Data Integration certainly leaves Talend in the dust. For a more back-end-centric use case, Informatica is certainly the ETL tool of choice. On the other hand, if business users would be using the tool, then Talend would be the preferred tool.
RStudio proved to be the most customizable option. It was also strictly the most feature-rich in terms of enabling our organization to script, run, and make use of open-source R packages in our data analysis workstreams. It also provided some support for Python, which was useful when we had R-heavy code with some Python threaded in. Overall, we picked RStudio for the features it provided for our data analysis needs and its ability to interface with our existing resources.
RStudio is very scalable as a product. The issue I have is that it doesn't necessarily fit in nicely with the mainly Microsoft environment that everybody else is using. Having RStudio for us means dedicated servers and recruiting staff who know how to manage the environment. This isn't a fault of the product at all; it's just part of the data science landscape that we all have to put up with. Having said that, RStudio is absolutely great for running on low-spec servers, and there are loads of options to handle concurrency, memory use, etc.
The data pipeline automation capability of Informatica means that few resources are needed to pre-process the data that ultimately resides in a Data Warehouse. Once a workflow is implemented, manual intervention is not needed.
PowerCenter did require more resources and time for installation and configuration than was expected/planned for.
The lack of (or minimal) support for unstructured data means that newer sources of dynamic, changing data cannot easily be processed or transformed through PowerCenter workflows.
Using it for data science in a very big and old company, the most positive impact, from my point of view, has been its ability to spread data culture across the group, shortening the path from data to value.
Still, it's hard to quantify the economic benefits; we are struggling with this and it's a major point of attention, since splitting out the contribution of the individual aspects of a project (and isolating RStudio's share of the pie) is complicated.
What is sure is that, in the long run, RStudio is boosting productivity and making the processes in which it is embedded more efficient (cost reduction).