A non-technical, user friendly analytics tool
September 21, 2019


Anonymous | TrustRadius Reviewer
Score 8 out of 10
Vetted Review
Verified User

Overall Satisfaction with Sigma Computing

We use Sigma Computing to develop business reporting from our business analytics data. Sigma Computing effectively solved the problems of data accessibility and adoption among our non-analyst users. Our front-line managers found it very easy to use and understand because its interface is modeled after the simple spreadsheets they are already familiar with. Thus, they did not need to learn the language of analytics to do more complex tasks like regressions and correlations.
  • Simple spreadsheet-like user interface. If you know Excel or Google Docs, you can use Sigma Computing to do complex analysis.
  • Flexible: power users can narrow down result sets directly if they know SQL, but SQL knowledge is not required to analyze data.
  • Easy to share completed reports and graphical data with teams, without requiring team members to have paid accounts.
  • Because Sigma Computing sits directly on top of the analytics data warehouse, it queries live data. This is not the most efficient approach in terms of speed, and it requires attention from a data specialist to make sure the warehouse is optimized to support the queries.
  • Access control is loosely governed, meaning granular access to specific tables or schemas is hard to manage. Sharing a report means granting implicit access to the source data.
  • Change management is tricky. Renaming a table means re-creating reports, rather than simply pointing existing reports at the new table.
  • Driving adoption of data to make decisions and measure goals.
  • Building out complex reports (regressions/cohort analysis) with relatively little training; easy to use.
  • Driving up run time (cost) on data warehouse cluster usage; both good and bad in that there is more usage of the data, but with no caching, a lot of extra, unnecessary queries get run.
Very simple and fast to learn; if you can use a spreadsheet, you can use Sigma Computing - even if you know nothing about SQL. It is also flexible for those with much more experience who use SQL directly as a starting point for crafting their data sets: they get both the benefit of the simple UI and the ability to start from a SQL query. Visualization is very easy to use and well guided, making it easy for the end user to express an analysis visually and quickly. Sharing is really easy to use.

From a data security standpoint, it is hard to understand their access control system and the implications of how sharing grants access. It is also hard to understand how to "tune" the data warehouse to deal with the added load generated by the easy-to-use UI.
Support is direct, responsive and up-front about things that are bugs. Their support team generates a lot of trust and confidence with the end user. This is partially responsible for our high adoption (and a large reason we failed at adopting some of the competing services to Sigma Computing). The support team responds quickly, and the development team fixes many issues within hours of us reporting them. There are some cases where bugs take a long time to fix, but not very many. And support always has a workaround ready if an issue cannot be fixed quickly.
We have a mix of technical ability in our user base. The largest percentage of our users do not know SQL, which makes it really difficult for them to leverage information in our data warehouses. For these users, the "tables" in our data warehouse simply appear as spreadsheets that they can point-and-click to customize. The ability for users to combine data between tables without having to know SQL is a godsend! At the same time, nothing stops our more technical users from crafting their own SQL query if they want to. In some cases, our more technical users have become "lazy" because it's just too easy to point-and-click.
We have several analysts from different departments that work on shared reports. This is useful for them because they can all see the same report and modify sections of the data and report in the same location.
This one is a double-edged sword. Queries in Sigma Computing retrieve a fixed amount of data from a data set to limit the amount of transfer, but there is still a hit to the data warehouse. It is also not perfect: we still see timeouts and occasional network interruptions for very large data sets (data sets with very large row sizes, or those including binary or blob data). From the end user's perspective, this makes it seem like millions of rows are easy to work with, which directly encourages free experimentation among the user base. But from the data warehouse management perspective, the cluster definitely takes a hit and requires tuning to ensure each browser page reload doesn't slam the warehouse for query results the user has already seen.
The pricing model was a key factor in choosing Sigma Computing. However, the definition of what counts as "performing interactive analyses" still needs work. Sharing a report with a user who then needs to filter the results by date, product segment, or some other vector in the report is not "interactive analyses". So there needs to be a crisper definition of what an "Analyst user" means.
Most of the tools we looked at have a per-user pricing model. So sharing a report means making a static, non-updating snapshot of it in an "exported" form, which is painful to make available to users who do not actually need access to the visualization tools. Because of this, those pricing models inflate the cost of the tool. This is not a consideration with Sigma Computing because "viewers" are not billable users.

The user experience on most of these is geared toward analyst users - we actually licensed and tested many of them with our user base for several months. The onus is on the end user to "bootstrap" themselves through training and documentation to understand how to use the tool. This was not necessary at all with Sigma Computing.

All of these employ some kind of "caching" mechanism to keep reporting "fast", which is, in fact, desirable. But many of them have a configurable refresh that is optimized toward report generation speed rather than data freshness. In Sigma's case, the data is always re-fetched (which introduces different problems), so it is always up to date.
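The freshness-versus-speed trade-off described above can be sketched generically. This is not Sigma's or any competitor's actual implementation - all names here are hypothetical - but it shows why a refresh interval of zero (always live) maximizes freshness at the cost of hitting the warehouse on every request, while a longer interval does the reverse:

```python
import time

class TTLQueryCache:
    """Illustrative TTL cache for query results.

    ttl_seconds=0 mimics an always-live tool (every request hits the
    warehouse); a larger ttl mimics the configurable-refresh tools,
    which serve possibly-stale results to save warehouse load.
    """

    def __init__(self, run_query, ttl_seconds):
        self.run_query = run_query      # callable that hits the warehouse
        self.ttl = ttl_seconds
        self._store = {}                # sql -> (fetched_at, rows)

    def fetch(self, sql):
        now = time.monotonic()
        hit = self._store.get(sql)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]               # fresh enough: skip the warehouse
        rows = self.run_query(sql)      # missing or stale: re-query
        self._store[sql] = (now, rows)
        return rows

# Count warehouse hits with a stand-in for a real warehouse query.
calls = {"n": 0}
def fake_warehouse(sql):
    calls["n"] += 1
    return [("row", calls["n"])]

cache = TTLQueryCache(fake_warehouse, ttl_seconds=60)
cache.fetch("SELECT 1")
cache.fetch("SELECT 1")   # second call served from cache: one warehouse hit
```

With `ttl_seconds=0`, every `fetch` would re-run the query - which is why, as noted above, each page reload in an always-live tool lands on the warehouse again.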
Sigma is great if...
  • Your user base has a relatively low level of technical ability with analytics tools
  • You need to generate sharable reports
  • You must have access to up-to-the-minute information
  • You can support the back-end data warehouse by scaling the cluster to handle heavyweight (auto-generated) SQL queries
Sigma is not so great if...
  • You have a highly technical analytics team as your user base
  • You need tightly controlled access down to the table/row/field/schema level
  • Your users primarily spend their time working in SQL
