Well suited: for most locally run datasets and non-production systems, where scalability is not a concern at all. Support for pulling in data from multiple types of data sources is an added advantage, and MLlib is a decent built-in library that covers most common ML tasks. Less appropriate: we worked on a recommender system where the music dataset was around 300+ GB, and we ran into memory issues, including occasional out-of-memory errors. MLlib also lacks support for advanced analytics and deep-learning frameworks. Finally, understanding the internals of Apache Spark is very hard for beginners.
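As a rough illustration of the kind of MLlib task reviews like this refer to, here is a minimal PySpark sketch that trains a logistic regression classifier. The data and column names are invented for the example, not taken from the reviewer's dataset.

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Toy training data (features and labels invented for illustration).
train = spark.createDataFrame(
    [
        (Vectors.dense([0.0, 1.1]), 0.0),
        (Vectors.dense([2.0, 1.0]), 1.0),
        (Vectors.dense([2.1, 1.3]), 1.0),
    ],
    ["features", "label"],
)

# MLlib covers common tasks like classification out of the box.
model = LogisticRegression(maxIter=10).fit(train)
model.transform(train).select("features", "prediction").show()
```

Pipelines like this run fine on a laptop, which matches the review's point: the pain starts only when the data no longer fits the memory available to the cluster.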
This tool fits all kinds of organizations and helps integrate data between many applications. Data integration is a key need for virtually every organization, so the tool is broadly applicable, and it is also available in the cloud, which makes integration even more seamless. A firm can opt for just the tools it requires; only where there are no data integration needs at all does the product add little.
Talend Data Integration allows us to quickly build data integrations without a tremendous amount of custom coding (some Java and JavaScript knowledge is still required).
I like the UI; it's very intuitive. Jobs are visual, allowing team members to see the flow of the data without having to read through the Java code that is generated.
If the team looking to use Apache Spark is not used to debugging jobs and tweaking settings to ensure maximum optimization, it can be frustrating. However, the documentation and the support of the community on the internet can help resolve most issues. Moreover, it is highly configurable and integrates with different tools (e.g., it can be used by dbt Core), which increases the range of scenarios where it can be used.
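To ground the point about tweaking settings, here is a minimal sketch of the kind of job-level tuning teams typically do when building a PySpark session. The configuration keys are real Spark options, but the values are purely illustrative and depend entirely on the workload and cluster.

```python
from pyspark.sql import SparkSession

# A sketch of common job-level tuning (values illustrative, not recommendations):
# executor memory, shuffle partition count, and adaptive query execution are
# among the settings most often tweaked when optimizing Spark jobs.
spark = (
    SparkSession.builder
    .appName("tuned-etl-job")
    .config("spark.executor.memory", "4g")          # per-executor heap
    .config("spark.sql.shuffle.partitions", "200")  # partitions for shuffles/joins
    .config("spark.sql.adaptive.enabled", "true")   # let Spark re-plan at runtime
    .getOrCreate()
)
```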
We use Talend Data Integration day in and day out. It is the best and easiest tool to jump on to and use. We can build a basic integration very quickly, often within the hour. It is also easy to build transformations and to use Java to perform custom operations.
1. It integrates very well with Scala and Python.
2. Its SQL interoperability is very easy to understand (see the sketch after this list).
3. Apache Spark is considerably faster than competing technologies, with shorter execution times.
4. The Apache community's support for Spark is extensive, and a large number of forums are available.
5. Example code for Apache Spark is simple and easy to get access to.
6. Many organizations use Apache Spark, so many solutions are available for existing applications.
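As a concrete instance of point 2, here is a minimal PySpark sketch of Spark's SQL interoperability: a DataFrame registered as a temporary view can be queried with plain SQL, and the result comes back as a DataFrame. The table and column names are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-interop-demo").getOrCreate()

# Build a small DataFrame (column names are illustrative).
events = spark.createDataFrame(
    [("alice", 3), ("bob", 5), ("alice", 7)],
    ["user", "clicks"],
)

# Register it as a temporary view so it is addressable from SQL...
events.createOrReplaceTempView("events")

# ...then move freely between SQL and the DataFrame API.
top_users = spark.sql(
    "SELECT user, SUM(clicks) AS total_clicks FROM events GROUP BY user"
)
top_users.show()
```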
Good support, especially when it relates to the production environment. The support team has access to the product development team, and if a bug is encountered it is escalated internally, which helps the customer get a quick fix or patch designed for problem exceptions. I have also seen support show willingness to help develop a custom connector for a newly available cloud-based big data solution.
Spark, in comparison to similar technologies, ends up being a one-stop shop. You can achieve so much with this one framework instead of having to stitch and weave together multiple technologies from the Hadoop stack, all while getting incredible performance, minimal boilerplate, and the ability to write your application in the language of your choosing.
In comparison with the other ETL tools I have used, Talend is more flexible than Data Services (where you cannot create complex commands). It is similar to DataStage in terms of commands and interfaces. It is more user-friendly than ODI, which has a metadata point of view all its own, while Talend is more classic. And it offers both on-prem and cloud approaches, while Matillion is cloud-only.
It has only been a positive ROI with Talend, given we have interfaced large datasets between critical on-prem and cloud-native apps to run our business operations efficiently.