Apache Spark

About TrustRadius Scoring
Score 8.7 out of 10
Overview

Recent Reviews

Apache Spark in Telco

10 out of 10
July 22, 2021
Apache Spark is being widely used within the company. In Advanced Analytics department data engineers and data scientists work closely in …

A powerhouse processing engine.

9 out of 10
September 19, 2020
We use Apache Spark for cluster computing in large-scale data processing, ETL functions, machine learning, as well as for analytics. Its …

Apache Spark Review

7 out of 10
March 16, 2019
We used Apache Spark within our department as a Solution Architecture team. It helped make big data processing more efficient since the …


Pricing


Entry-level setup fee?

  • No setup fee

Offerings

  • Free Trial
  • Free/Freemium Version
  • Premium Consulting / Integration Services


Alternatives Pricing

What is Databricks Lakehouse Platform?

Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data…

Features Scorecard

No scorecards have been submitted for this product yet.

Product Details

What is Apache Spark?

Apache Spark Technical Details

Operating Systems: Unspecified
Mobile Application: No


Reviews and Ratings (147)

Ratings

Reviews

(1-22 of 22)
Companies can't remove reviews or game the system.
Steven Li | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • DataFrame as a distributed collection of data: easy for developers to implement algorithms and formulas.
  • Calculation in-memory.
  • Cluster to distribute large data of calculation.
  • It would be great if Apache Spark could provide a native database to manage all file info of saved parquet.
Thomas Young | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • Apache Spark makes processing very large data sets possible. It handles these data sets in a fairly quick manner.
  • Apache Spark does a fairly good job implementing machine learning models for larger data sets.
  • Apache Spark seems to be rapidly advancing software, with new features making it ever more straightforward to use.
  • Apache Spark requires some advanced ability to understand and structure the modeling of big data. The software is not user-friendly.
  • The graphics produced by Apache Spark are by no means world-class. They sometimes appear high-schoolish.
  • Apache Spark takes an enormous amount of time to crunch through multiple nodes across very large data sets. Apache Spark could improve this by offering the software in a more interactive programming environment.
Surendranatha Reddy Chappidi | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • Spark is very fast compared to other frameworks because it runs in cluster mode and uses distributed processing and computation frameworks internally
  • Robust and fault tolerant
  • Open source
  • Can source data from multiple data sources
  • No Dataset API support in the Python version of Spark
  • The Apache Spark job-run UI could show more meaningful information
  • Spark errors could provide more meaningful information when a job fails
Chetan Munegowda | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • Great computing engine for solving complex transformative logic
  • Useful for understanding data and doing data analytical work
  • Gives us a great set of libraries and APIs to solve day-to-day problems
  • High learning curve
  • Complexity
  • More documentation
  • More developer support
  • More educational videos
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • Speed: Apache Spark has great performance for both streaming and batch data
  • Easy to use: the object-oriented operators make it easy and intuitive.
  • Multiple language support
  • Fault tolerance
  • Cluster management
  • Supports DF, DS, and RDDs
  • Hard to learn, documentation could be more in-depth.
  • Due to its in-memory processing, it can consume a large amount of memory.
  • Poor data visualization, too basic.
Score 8 out of 10
Vetted Review
Verified User
Review Source
  • DataFrames, DataSets, and RDDs.
  • Spark has an in-built machine learning library that scales and integrates with existing tools.
  • The data processing done by Spark comes at a price of memory blockages, as in-memory capabilities of processing can lead to large consumption of memory.
  • Caching is not automatic in Spark; we need to set up the caching mechanism manually.
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • Very good tool to process big datasets.
  • Inbuilt fault tolerance.
  • Supports multiple languages.
  • Supports advanced analytics.
  • A large number of libraries available -- GraphX, Spark SQL, Spark Streaming, etc.
  • Very slow with smaller amounts of data.
  • Expensive, as it stores data in memory.
March 16, 2019

Apache Spark Review

Score 7 out of 10
Vetted Review
Verified User
Review Source
  • Customizable, it integrates with Jupyter notebooks which was really helpful for our team.
  • Easy to use and implement.
  • It allows us to quickly build microservices.
  • Release cycles can be faster.
  • Sometimes it kicked some of the users out due to inactivity.
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • In-memory data engine, and hence faster processing
  • Does well to lay on top of the Hadoop file system for big data analytics
  • Very good tool for streaming data
  • Could do a better job with analytics dashboards to provide insights on a data stream, so users would not have to rely on separate data visualization tools alongside Spark
  • There is also room for improvement in the area of data discovery
Carla Borges | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Review Source
  • It performs a conventional disk-based process when the data sets are too large to fit into memory, which is very useful because, regardless of the size of the data, it is always possible to store them.
  • It has great speed and the ability to join multiple types of databases and run different types of analysis applications. This functionality is super useful as it reduces work times.
  • Apache Spark uses the data storage model of Hadoop and can be integrated with other big data frameworks such as HBase, MongoDB, and Cassandra. This is very useful because it is compatible with multiple frameworks that the company has, and thus allows us to unify all the processes.
  • Increase the information and training materials that come with the application, especially for debugging, since the process is difficult to understand.
  • More tutorials would help users and reduce the learning curve.
  • There should be more clustering algorithms.
Nitin Pasumarthy | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Review Source
  • Rich APIs for data transformation, making it very easy to transform and prepare data in a distributed environment without worrying about memory issues
  • Faster execution times compared to Hadoop MapReduce and Pig Latin
  • Easy SQL interface to the same data set for people who are comfortable exploring data in a declarative manner
  • Interoperability between SQL and the Scala / Python style of munging data
  • Documentation could be better, as I usually end up going to other sites / blogs to understand the concepts
  • More algorithms should be ported to MLlib, as only very few are available, at least in the clustering segment
Anson Abraham | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • Machine Learning.
  • Data Analysis
  • Workflow processing (faster than MapReduce).
  • SQL connector to multiple data sources
  • Memory management. Very weak on that.
  • PySpark is not as robust as Scala with Spark.
  • Spark master HA is needed; it is not as highly available as it should be.
  • Data locality should not be a necessity, though it does help performance; we would prefer not to depend on locality.
Score 7 out of 10
Vetted Review
Verified User
Review Source
  • We use it to make our batch processing faster; Spark is faster at batch processing than MapReduce thanks to its in-memory computing
  • Spark runs alongside other tools in the Hadoop ecosystem, including Hive and Pig
  • Spark supports both batch and real-time processing
  • Apache Spark has Machine Learning Algorithms support
  • Consumes more memory
  • Difficult to address issues around memory utilization
  • Expensive - In-memory processing is expensive when we look for a cost-efficient processing of big data
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • Ease of use, the Spark API allows for minimal boilerplate and can be written in a variety of languages including Python, Scala, and Java.
  • Performance, for most applications we have found that jobs are more performant running via Spark than other distributed processing technologies like Map-Reduce, Hive, and Pig.
  • Flexibility, the framework comes with support for streaming, batch processing, SQL queries, machine learning, etc. It can be used in a variety of applications without needing to integrate a lot of other distributed processing technologies.
  • Resource heavy, jobs, in general, can be very memory intensive and you will want the nodes in your cluster to reflect that.
  • Debugging, it has gotten better with every release but sometimes it can be difficult to debug an error due to ambiguous or misleading exceptions and stack traces.
Kamesh Emani | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Review Source
  • Spark uses Scala, a functional, easy-to-use programming language. The syntax is simple and human-readable.
  • It can be used to run transformations on huge data sets across different cluster nodes in parallel. It automatically optimizes the process to produce output efficiently in less time.
  • It also provides a machine learning API for data science applications, as well as Spark SQL for fast querying and data analysis.
  • I also use the Zeppelin online tool, which enables fast querying and is very helpful for BI folks to visualize query outputs.
  • Data visualization.
  • Waiting for web development for small apps to be started with Spark as backbone middleware and HDFS as the data retrieval file system.
  • The available transformations and actions are limited, so the API must be extended to support more features.
June 26, 2017

Sparkling Spark

Sunil Dhage | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Review Source
  • It makes the ETL process very simple when compared to the SQL Server and MySQL ETL tools.
  • It's very fast and has many machine learning algorithms which can be used for data science problems.
  • It is easily implemented on a cloud cluster.
  • The initialization and Spark context procedures.
  • Running applications on a cluster is not well documented anywhere, some applications are hard to debug.
  • Debugging and Testing are sometimes time-consuming.
Jordan Moore | TrustRadius Reviewer
Score 8 out of 10
Vetted Review
Verified User
Review Source
  • Scale from local machine to full cluster. You can run a standalone, single cluster simply by starting up a Spark Shell or submitting an application to test an algorithm, then it quickly can be transferred and configured to run in a distributed environment.
  • Provides multiple APIs. Most people I know use Python and/or Java as their main programming language. Data scientists who are familiar with NumPy and SciPy can quickly become comfortable with Spark, while Java developers would be best served using Java 8 and the new features that it provides. Scala, on the other hand, is a mix between the Java and Python styles of writing Spark code, in my opinion.
  • Plentiful learning resources. The Learning Spark book is a good introduction to the mechanics of Spark, although it was written for Spark 1.3 (the current version is 2.0). The GitHub repository for the book contains all the code examples that are discussed, plus the Spark website is also filled with useful information that is simple to navigate.
  • For data that isn't truly that large, Spark may be overkill when the problem could likely be solved on a computer with reasonable hardware resources. There don't seem to be many examples of how a Spark task would otherwise be implemented with a different library; for instance, scikit-learn and NumPy rather than Spark MLlib.