The HCL Actian Data Platform (formerly Actian Avalanche) hybrid cloud data warehouse is a fully managed service that aims to deliver high performance and scale across all dimensions – data volume, concurrent users, and query complexity – at a lower cost than alternative solutions. Avalanche has built-in self-service data integration and can be deployed on-premises as well as on multiple clouds, including AWS, Azure, and Google Cloud, enabling users to migrate or offload applications and data to…
Well suited: For most local datasets and non-prod systems, scalability is not a problem at all. Being able to pull in data from multiple types of data sources is an added advantage, and MLlib is a decent built-in library that covers most common ML tasks. Less appropriate: We built a recommender system (RecSys) on a music dataset of over 300 GB and repeatedly ran into memory errors. MLlib also lacks support for advanced analytics and deep-learning frameworks. Understanding the internals of how Apache Spark works is very difficult for beginners.
This is a tool geared for small to mid-sized businesses that have disparate sources of data from different platforms in varying incarnations. It's a great ETL tool to solve the problems such a scenario causes, but you can also achieve that with good BI tools like Qlik Sense. So be sure you really need a standalone ETL tool, as opposed to an end-user tool with a built-in ETL component. If you are going ELT and have a lot of data and not a lot of corporate resources, this is a better option than Microsoft or Informatica.
As I said before, more training, or greater visibility into the training tools and options that exist, would be a plus. It's easy to publish YouTube videos these days; I think they should make more of them.
Differentiation would help; there's not a lot out there to drive you to buy the product if you are well informed about the market. If you know the market, you steer toward the large or trendy products. It's a good product, but lost in the noise of the field, I think.
Hitching the wagon to a major software brand (like Mule did with Salesforce) would help grow the user base and thus increase activity in the support community. More users also translate into more product champions.
If the team looking to use Apache Spark is not used to debugging jobs and tweaking settings to ensure maximum optimization, it can be frustrating. However, the documentation and the support of the community on the internet can help resolve most issues. Moreover, Spark is highly configurable and integrates with different tools (e.g., it can be used by dbt core), which increases the number of scenarios where it can be used.
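The "tweak settings" point usually comes down to a handful of well-known configuration properties passed at submission time. A sketch of what that looks like, where the memory sizes, core counts, partition count, and the job file `my_job.py` are illustrative placeholders that depend entirely on your cluster and workload:

```shell
# Illustrative spark-submit tuning flags; values are examples, not recommendations.
#   spark.executor.memory        : heap size per executor
#   spark.executor.cores         : concurrent tasks per executor
#   spark.sql.shuffle.partitions : partition count after wide shuffles
#   KryoSerializer               : faster serialization than the Java default
spark-submit \
  --conf spark.executor.memory=8g \
  --conf spark.executor.cores=4 \
  --conf spark.sql.shuffle.partitions=400 \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  my_job.py
```

Getting these numbers right for a given job is exactly the trial-and-error work the review describes; the Spark configuration documentation and community posts cover the common failure modes.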
1. It integrates very well with Scala and Python. 2. Its SQL interoperability is very easy to understand. 3. Apache Spark is much faster than competing technologies. 4. Community support for Apache Spark is very strong. 5. Execution times are faster compared to alternatives. 6. There are a large number of forums available for Apache Spark. 7. Sample code for Apache Spark is simple and easy to get access to. 8. Many organizations use Apache Spark, so many solutions are available for existing applications.
Spark, in comparison to similar technologies, ends up being a one-stop shop. You can achieve so much with this one framework instead of having to stitch and weave together multiple technologies from the Hadoop stack, all while getting incredible performance, minimal boilerplate, and the ability to write your application in the language of your choosing.