Well suited: For most local runs of datasets and non-prod systems, scalability is not a problem at all. Ingesting data from multiple types of data sources is an added advantage. MLlib is a decent built-in library that covers most common ML tasks. Less appropriate: We had to work on a RecSys where the music dataset we used was around 300+ GB in size. We faced memory-based issues, and a few times we also got out-of-memory errors. The MLlib library also lacks support for advanced analytics and deep-learning frameworks. Understanding the internals of how Apache Spark works is very difficult for beginners.
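The review above mentions using MLlib for common ML tasks. As a rough illustration, a minimal MLlib pipeline might look like the sketch below; the data, column names, and model choice are hypothetical, not taken from the review.

```python
# A minimal MLlib sketch (data, columns, and model are illustrative assumptions).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Hypothetical input: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.1, 0), (2.0, 1.0, 1), (1.5, 3.2, 0), (3.3, 2.1, 1)],
    ["f1", "f2", "label"],
)

# Assemble feature columns into the single vector column MLlib expects.
features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)

# Fit a simple logistic regression model and inspect predictions.
model = LogisticRegression(featuresCol="features", labelCol="label").fit(features)
model.transform(features).select("label", "prediction").show()
```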
It offers different strategies for data encoding, which makes the process quite flexible and well suited to joint, collaborative work. Analyzing this information in large quantities is extremely valuable for the company when it is read correctly and in a timely way.
Rich APIs for data transformation, making it very easy to transform and prepare data in a distributed environment without worrying about memory issues
Faster execution times compared to Hadoop and Pig Latin
Easy SQL interface to the same data set for people who are comfortable exploring data in a declarative manner
Interoperability between SQL and Scala/Python styles of munging data
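The interoperability the reviewer describes, querying the same data set with SQL or with the DataFrame API, can be sketched roughly as below; the file path and column names are hypothetical.

```python
# A minimal sketch of SQL/DataFrame interoperability (path and columns are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-interop-sketch").getOrCreate()

# Load a dataset once...
orders = spark.read.option("header", True).csv("/data/orders.csv")

# ...then explore it declaratively with SQL...
orders.createOrReplaceTempView("orders")
spark.sql("SELECT country, COUNT(*) AS n FROM orders GROUP BY country").show()

# ...or with the Python DataFrame API, against the same data.
orders.groupBy("country").agg(F.count("*").alias("n")).show()
```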
Ultra-fast query results
In-memory database
Easy integration with reporting services
Memory management. Very weak on that.
PySpark is not as robust as Scala with Spark.
Spark master HA is needed. It is not as highly available as it should be.
Locality should not be a necessity, though it does help performance. We would prefer not to depend on locality.
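The memory and locality complaints above are typically addressed through configuration. The sketch below shows a few real Spark settings that are commonly tuned for these issues; the specific values are illustrative assumptions, not recommendations from the reviewer.

```python
# A minimal sketch of memory- and locality-related settings (values are illustrative).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("memory-tuning-sketch")
    # Give executors and the driver more headroom for large shuffles and caches.
    .config("spark.executor.memory", "8g")
    .config("spark.driver.memory", "4g")
    # Fraction of the heap reserved for execution and storage (unified memory).
    .config("spark.memory.fraction", "0.6")
    # Reduce how long tasks wait for a data-local slot before running elsewhere.
    .config("spark.locality.wait", "1s")
    .getOrCreate()
)
```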
The problems that can be encountered are particularly pronounced in more complex analyses.
Categorical variables are often not precise enough.
Capacity to compute data across a cluster, and fast speed.
The only thing I dislike about Spark's usability is the learning curve; there are many actions and transformations. However, its wide range of uses for ETL processing, ease of integration, and multi-language support make this library a powerhouse for your data science solutions. It has especially aided us with its lightning-fast processing times.
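The ETL use the reviewer highlights usually follows a read-transform-write pattern. A minimal sketch is below; the paths, columns, and filters are hypothetical.

```python
# A minimal ETL sketch (paths, columns, and filters are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV data.
raw = spark.read.option("header", True).csv("/data/raw/events.csv")

# Transform: fix types, drop bad rows, derive a partition column.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("event_date", F.to_date("timestamp"))
)

# Load: write the result as partitioned Parquet for downstream use.
clean.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/events")
```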
1. It integrates very well with Scala or Python.
2. Its SQL interoperability is very easy to understand.
3. Apache Spark is much faster than competing technologies.
4. The Apache community's support for Spark is very strong.
5. Execution times are faster compared to others.
6. There are a large number of forums available for Apache Spark.
7. Code for Apache Spark is simple and easy to get access to.
8. Many organizations use Apache Spark, so many solutions are available for existing applications.
Spark, in comparison to similar technologies, ends up being a one-stop shop. You can achieve so much with this one framework instead of having to stitch and weave together multiple technologies from the Hadoop stack, all while getting incredible performance, minimal boilerplate, and the ability to write your application in the language of your choosing.
We selected Kognitio because of the legacy systems that are still running; the legacy systems we have in place are a good fit for Kognitio. End-user feedback on our side was good when we started implementing this solution, and our current servers are compatible with Kognitio.
Business leaders are able to make data-driven decisions.
Business users are able to access data in near real time now. Before using Spark, they had to wait at least 24 hours for data to be available.
The business is able to come up with new product ideas.
The implementation of the formats for integrating our users with the program is also good. It has also improved my control over aspects related to the work environment.