Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.
Presto
Score 10.0 out of 10
Presto is an open source SQL query engine designed to run queries on data stored in Hadoop or in traditional databases.
Teradata's support for Presto development followed its acquisition of Hadapt and Revelytix.
SAP Analytics Cloud
Score 8.1 out of 10
The SAP Analytics Cloud solution brings together analytics and planning with integration to SAP applications and access to heterogenous data sources. As the analytics and planning solution within SAP Business Technology Platform, SAP Analytics Cloud supports trusted insights and integrated planning processes enterprise-wide to help make decisions without doubt.
$36
per month per user
Pricing

Editions & Modules
Apache Spark: No answers on this topic
Presto: No answers on this topic
SAP Analytics Cloud:
  SAP Analytics Cloud for Business Intelligence: $36.00 per month per user
  SAP Analytics Cloud for Planning: price upon request (per month per user)
Pricing Offerings                          Apache Spark    Presto          SAP Analytics Cloud
Free Trial                                 No              No              Yes
Free/Freemium Version                      No              No              No
Premium Consulting/Integration Services    No              No              No
Entry-level Setup Fee                      No setup fee    No setup fee    No setup fee

Additional Details (SAP Analytics Cloud): A 30-day trial is available, supporting analytics enterprise-wide. The trial can be extended up to 90 days on request.
All of the above systems work quite well on big data transformations, but Spark really shines with its broader API support and its ability to read from and write to multiple data sources. With Spark one can easily switch between declarative, imperative, and functional …
I think Presto is one of the best solutions out there today at the cutting edge for interactive query analysis. One of the challenges is that Presto is a niche tool for the interactive query use case and doesn't have as many knobs and whistles as Spark. In the foreseeable future …
Well suited: for most local runs of datasets and non-prod systems, scalability is not a problem at all. Ingesting data from multiple types of data sources is an added advantage, and MLlib is a decent built-in library that can be used for most ML tasks. Less appropriate: we had to work on a RecSys where the music dataset we used was around 300+ GB in size. We faced memory-based issues, and a few times we also got out-of-memory errors. The MLlib library also lacks support for advanced analytics and deep-learning frameworks. And for beginners, understanding the internals of Apache Spark is very difficult.
Presto is for interactive, simple queries, whereas Hive is for reliable processing. If you have a fact-dim join, Presto is great; however, for fact-fact joins Presto is not the solution. Presto is a great replacement for proprietary technology like Vertica.
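The reviewer's join distinction can be sketched with a hypothetical star-schema query; the table and column names below are illustrative, not from the source:

```sql
-- Fact-to-dimension join: the small dimension table fits in memory
-- and can be broadcast to the workers, a case Presto handles well.
SELECT d.region_name,
       SUM(f.sale_amount) AS total_sales
FROM   sales_fact f            -- large fact table (hypothetical)
JOIN   region_dim d            -- small dimension table (hypothetical)
  ON   f.region_id = d.region_id
GROUP BY d.region_name;

-- Fact-to-fact join: both inputs are large, so the in-memory build
-- side can exhaust worker memory (the case the reviewer warns about).
SELECT f.order_id, s.shipped_at
FROM   sales_fact f            -- large fact table (hypothetical)
JOIN   shipments_fact s        -- second large fact table (hypothetical)
  ON   f.order_id = s.order_id;
```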
>> Using SAC predictive analytics capabilities for inventory management in a production-line setup has helped generate Purchase Requisitions and Purchase Orders for raw or semi-finished goods without much head-banging into demand management rules. It does this beautifully with seamless integration with the HANA core MM and PP modules, along with BI integration. It has resulted in 30% greater warehouse storage capacity, thereby saving revenue from piled-up inventory and associated manpower costs.
>> SAC sometimes shows latency when working through a large data set, giving a poor user experience compared to its competition. It may also occasionally show misinterpretations when embedding data from 3rd-party systems into the HANA core dataset.
Linking, embedding links and adding images is easy enough.
Once you have become familiar with the interface, Presto becomes very quick & easy to use (but you have to practice & repeat to know what you are doing; it is not as intuitive as one would hope).
Organizing & design is fairly simple with click & drag parameters.
It makes it easy to analyse orders and related records.
We can easily maintain and track the performance of employees in the organisation.
It can easily track various aspects of an organisation's growth, allowing real-time analysis and tracking of the organisation's growth and performance.
Presto was not designed for large fact-fact joins. This is by design: Presto does not leverage disk and uses memory for processing, which in turn makes it fast. However, this is a tradeoff; in an ideal world, people would like to use one system for all their use cases, and Presto could become more comprehensive by solving this problem.
Resource allocation is not similar to YARN; Presto has priority-queue-based query resource allocation, so a query that takes long takes even longer. This might be alleviated by giving more control back to the user to define or override priority.
UDF support is not available in Presto; you will have to write your own functions. While this is good for performance, it comes at the huge overhead of building exclusively for Presto and not being interoperable with other systems like Hive, Spark SQL, etc.
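As a hedged sketch of the reviewer's interoperability point: logic that might live in a shared Hive UDF has to be rewritten inline with Presto's built-in functions (the UDF name and table below are hypothetical):

```sql
-- Hive, with a custom UDF shared across jobs (hypothetical):
--   SELECT normalize_phone(phone) FROM customers;

-- Presto: the same cleanup expressed inline with the built-in
-- regexp_replace, since custom functions must be rebuilt as
-- Presto-specific plugins rather than reused from Hive.
SELECT regexp_replace(phone, '[^0-9]', '') AS normalized_phone
FROM customers;
```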
SAC supports various data sources, but improvements in the ease of connecting to and integrating with certain data repositories, especially non-SAP databases, would enhance the platform's versatility and integration capabilities.
An offline mode for SAC could be valuable for users who need to access and analyze data without an internet connection. Additionally, optimizing performance for large datasets and complex visualizations would contribute to a smoother user experience.
We are planning to review the licensing, as we have issues with SAC dealing with huge datasets. The analytics area is good for import models, but when we have live connections in place, SAC struggles with huge datasets, be it BW or HANA models in the backend.
If the team looking to use Apache Spark is not used to debugging and tweaking job settings to ensure maximum optimization, it can be frustrating. However, the documentation and the support of the community on the internet can help resolve most issues. Moreover, it is highly configurable and integrates with different tools (e.g., it can be used by dbt core), which increases the scenarios where it can be used.
I would rate SAP Analytics Cloud's overall usability as a 7 out of 10. SAC has a clean, modern user interface with drag-and-drop features. It is an integrated platform that combines reporting, planning, and predictive analytics in one tool, and it has real-time connectivity with SAP data sources like S/4HANA.
Self-service analytics capabilities allow non-technical users to build simple dashboards.
I would rate SAP Analytics Cloud an 8 out of 10 for scalability. It offers a flexible, cloud-based architecture that supports expansion across departments and geographies. The platform adapts well to growing data volumes and user needs, making it a strong choice for organizations looking to scale analytics capabilities efficiently.
I would rate SAP Analytics Cloud's performance an 8 out of 10. Pages generally load quickly, and reports run within a reasonable time frame, even with complex datasets. Integration with other systems is smooth and doesn't noticeably affect performance. Overall, it's a responsive and efficient tool for business analytics.
1. It integrates very well with Scala and Python.
2. Its SQL interoperability is very easy to understand.
3. Apache Spark is way faster than competing technologies.
4. The Apache community's support for Spark is huge.
5. Execution times are faster compared to others.
6. There are a large number of forums available for Apache Spark.
7. The code for Apache Spark is simple and easy to gain access to.
8. Many organizations use Apache Spark, so many solutions are available for existing applications.
Since the implementation stage, the support team has been very helpful and responsive. Even in the later stages, the tech team had quite a rapid response. In general, SAP has provided us with great customer support, be it for a specific SAP product or for the integration of different modules.
In hindsight, it would have been easier to have someone there in person. Questions were answered, but with 11 participants it got a bit chaotic online.
SAC is a simple solution, and it works fine when connecting it to other SAP tools. On the other hand, connecting it to third-party solutions brings difficulties when there is no prior design and the objectives are not clear. It is really important to involve business users from the start to provide valuable business insights.
Spark, in comparison to similar technologies, ends up being a one-stop shop. You can achieve so much with this one framework instead of having to stitch and weave multiple technologies from the Hadoop stack, all while getting incredible performance, minimal boilerplate, and the ability to write your application in the language of your choosing.
Presto is good for a templated design appeal. You cannot be too creative via this interface, but the layout and options make the finalized visual product appealing to customers. The other design products I use are for different purposes and not really comparable to Presto.
SAP Analytics Cloud and Power BI are both tools that help businesses understand their data, but they have some differences. SAC, made by SAP, works well if your company already uses other SAP products. It's in the cloud, easy to use, and has features for analyzing data, getting insights, and planning for the future. Power BI, made by Microsoft, can be used in the cloud or on your own computers. It fits well with Microsoft tools, is easy to use, and can do advanced data analysis. SAC has built-in planning tools, while Power BI needs extra tools for detailed planning.
It is good for use across multiple locations. It allows users to access data and reports from anywhere, regardless of their location, and can consolidate data from various sources, including different SAP systems and external sources, which facilitates cross-location analysis. SAC enables access to data and models from SAP Datasphere to create new stories, and detailed permissions can be defined for cross-departmental use.
Many manual data manipulations and exports in Excel have been replaced by the tool, providing management with improved insight into the amount of time spent at each stage of an invoice's lifetime, allowing bottlenecks to be discovered.
We now have more insight into the data, and people with little technical experience can easily build stories.