Overall Satisfaction with GoodData
GoodData provides an analytics engine that supports B2B implementation and white labeling. Their comprehensive API design allows for greater control over the user experience compared to most market offerings, enabling a near-seamless transition between their software and other applications. Most recently, they've released a new set of products that allows even greater control over how a customer can use their engine in combination with 3rd party data warehousing platforms. GoodData provided us with the analytics piece that could be tied into our software without it being a disruptive experience for the end user.
Gives us the ability to take data from an operational source and optimize the processing of it for analytical and reporting purposes.
- Ability to embed and white-label.
- Simplified experience for non-technical users.
- Fantastic Implementation team.
- This is not a platform for data scientists.
- Poor ability to maintain code versioning of their workspaces and dashboards.
- Limited ways to securely transfer information between your system and theirs.
- Improved user experience in their default product.
- We no longer need to spend resources to develop in-house to compete with all the other data platforms.
- Excellent controls and options around embedding.
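Embedding controls like these usually hinge on passing the current user's identity to the embedded analytics without exposing credentials. Below is a rough sketch of that general pattern, not GoodData's actual API: the URL shape, parameter names, and shared secret are all hypothetical, and real platforms each define their own (often JWT-based) token scheme.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical shared secret issued by the analytics vendor (assumption).
SECRET = b"shared-secret-from-the-analytics-vendor"

def signed_embed_url(base_url, user_id, ttl_seconds=300):
    """Build a short-lived, HMAC-signed URL for an embedded dashboard.

    The host application signs the query string so the embedded analytics
    can verify who the user is and reject stale or tampered links.
    """
    params = {"user": user_id, "expires": int(time.time()) + ttl_seconds}
    query = urlencode(sorted(params.items()))  # stable ordering before signing
    sig = hmac.new(SECRET, query.encode(), hashlib.sha256).hexdigest()
    return f"{base_url}?{query}&sig={sig}"
```

The embedding side would recompute the HMAC over the same query string and compare signatures before rendering anything for that user.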
My answer isn't going to help you figure anything out. The biggest things that impact the ability to implement a data platform are:
- How clean and consistent is your data? If you're not controlling the quality of the data, no platform is going to be able to process it.
- How much data do you have and how often are you refreshing it? Is it a full refresh every time, or are you just passing in deltas?
- How many sources are you pulling data from? Each source will require a lot of time to model, test, and harvest.
- How many resources do you have working on the project, and how experienced are they?
- Do you intend to do white labeling and embedding? How skilled are your developers? Are they good at reading and following directions?
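On the refresh question above: the difference between a full refresh and passing deltas usually comes down to tracking a "watermark" of what was already loaded, so each run only extracts rows changed since the last one. A minimal sketch of the idea, with a made-up `orders` table standing in for the operational source (nothing here is GoodData-specific):

```python
import sqlite3

def extract_delta(conn, last_watermark):
    """Return rows changed since the last load, plus the new watermark."""
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_watermark,),
    ).fetchall()
    # The next run starts from the newest timestamp we've seen.
    new_watermark = max((r[2] for r in rows), default=last_watermark)
    return rows, new_watermark

# Demo: an in-memory table standing in for the operational source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 10.0, "2024-01-01"), (2, 25.0, "2024-01-02"), (3, 40.0, "2024-01-03")],
)

delta, watermark = extract_delta(conn, "2024-01-01")  # rows 2 and 3 only
```

A full refresh would re-extract all three rows every run; the delta approach re-extracts only what changed, which is the main lever for refresh time once volumes grow.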
With a small and clean data set, using the base portal, and not having super high security standards, you could be up and running as quickly as Qlik, Power BI, or Tableau. Realistically, plan for a few months and invest some money in having their implementation team help your team get everything working and all the kinks worked out. If you have a lot of data and low expertise, plan for at least a year.
Ease of use, and the ability to organize fields under different names and search them. Rather than just breaking, it actively prevents you from combining data elements that make no sense. There's quite a bit of guidance built into the product for a user who might feel lost.
Do you think GoodData delivers good value for the price?
Yes
Are you happy with GoodData's feature set?
Yes
Did GoodData live up to sales and marketing promises?
Yes
Did implementation of GoodData go as expected?
Yes
Would you buy GoodData again?
Yes
- Microsoft Power BI, Tableau Online, Qlik Analytics Platform, Domo, Yellowfin, Sisense, Looker, Google Analytics, ThoughtSpot, MicroStrategy Analytics, Pyramid Analytics, TIBCO Spotfire, Logi Analytics, Birst and IBM Cognos Analytics
Each one of the above players had an interesting platform. Ultimately, GoodData beat them based on our need for a customizable user experience, the ability to embed, an affordable license that is NOT on a per-user basis (because if you want adoption, you shouldn't directly tie cost to the number of users), and guaranteed performance at certain price points without having to get an incredibly powerful virtual server.
We wouldn't need to perform crazy complicated statistical formulas within the space and much of the self service would be targeting users who are non-technical (or at least building visualizations isn't their full-time job).