Incredibly fast in-memory model.
November 13, 2012
Score 8 out of 10
Vetted Review
Software Version
Commercial
Overall Satisfaction
- Self-service
- In-memory performance
- Associative model
- The UI is good from a development perspective when building dashboards.
- Many of the components you can place on a dashboard, e.g. filters, sliders, etc., are handy.
- You can export to Excel and PDF.
- Out of the box, governance and metadata management are not great. You can buy another product for that; otherwise, out of the box, you can get yourself in trouble. We have addressed that through business process and workflow.
- They are still a bit tied to Microsoft tools like Internet Explorer. Working in Firefox, Chrome, or Safari is not the same experience, and we would really like them to adapt. For example, when viewing a line graph with multiple points, hovering over a point lights up the bubble in IE, but we cannot get it to work the same way in other browsers.
- Explain plans for performance tuning don't exist.
- Our ability to do custom Ajax development is limited. We would like to put in a widget where we can do an uptime call and have nothing else change, but there is no documentation for that kind of customization.
- Documentation is ok.
- Speed to market is the really big thing. You can attach to multiple data sources quickly and build a consumable model for a dashboard, and it doesn't require IT talent to build. We have built more dashboards and added more users in the last year than in our entire history. I was at a company of 30,000+ employees before, and we didn't have anywhere near this level of BI adoption.
- As a result, we are seeing benefits across business functions. For example, within sales, our pipeline has much more visibility, which allows for much faster decisions on things like quotas. One of our biggest power users is in sales ops. She feels her dashboards load 10x faster than in our previous tool, and she can make changes on the fly.
Product Usage
200 - We have 200 people per day using the software as end users consuming dashboards/reports. It encompasses almost every department in our enterprise – cloud, sales, support, finance, accounting, security, procurement/supply chain, IT. It is truly distributed.
Since starting use, we have had more than half the company – 2,800+ unique people – use the system at some point. It is the most adopted BI tool I have ever implemented.
In total we have 40-50 power users building models in the system, again distributed across business functions. Those power users are supported by 1-2 people in my department, IT. We have a tight-knit relationship with our power users. Our users can build their own QVDs (models). To get them into production, we (IT) review them to make sure they are not duplicating existing models and not doing something a QVD is not meant to do. We also tune them to the best degree we can, as a bad dashboard can slow down the system. I will say that QlikView's monitoring consoles are very cool: we can see the top running queries, unique users, and trends in dashboard consumption.
We (IT) do two training classes a month - one for basic usage, and one for power users building models in the system.
1.5 - We have 1-2 people in the IT department in governance roles. They were previously BI developers.
We have 40-50 power users in the business.
- Data visualization/reporting for multiple aspects of our operations, including sales, marketing, service, procurement, finance, and IT.
Evaluation and Selection
SSRS – Microsoft Report Services
Our shortlist included Tableau, a newer version of Microsoft's SSRS, and QlikView.
We also knew what MicroStrategy and Business Objects had to offer from our experience with them at other companies, and knew what we could afford. We eliminated them on price and the complexity of setup.
We liked QlikView's in memory, associative model, and self-service capability.
With "in-memory", everything that gets consumed from a dashboard is loaded in memory, so it is incredibly fast. Although Qlik touts its mobile distribution capabilities, that was not a huge differentiator for us, but it is something we are now exploring. Microsoft and BO do have in-memory capabilities now.
The associative model is patented by QlikView. Basically, it starts to understand the associations in your data, e.g. if A=B and B=C, then A=C. It means you can build a QlikView model very quickly. In traditional data warehousing, you work with users to understand their requirements, refine the data model, physicalize it, tune it, and build ETLs. It's a three-month delivery cycle at best. That doesn't work here at this company; our business users will not wait for you. Our business is dynamic, and we are launching new products all the time. Instead of going through an arduous process, you just load data into QlikView and build an associative model, and it links things up. If it doesn't work, you can change things very quickly, and you don't have to write data definition language. Where you get into trouble is if you load very large data sets into QlikView, since memory is not as abundant. They are releasing a tool called Data Explorer which allows you to do a hybrid approach – load some data in memory and leave some in the database. If data feeds frequently used dashboards, it goes into memory; if access is infrequent and the data set is very large, it makes sense to leave it in the database.
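As a rough illustration of how quickly a model comes together, here is a minimal QlikView load script sketch. The table and field names are hypothetical, not from our actual deployment; the point is that tables sharing a field name associate automatically, with no joins or data definition language written by hand:

```
// Hypothetical example: these two tables associate automatically
// because they share the field name OpportunityID.
Opportunities:
LOAD OpportunityID, AccountName, Stage, Amount
FROM Opportunities.qvd (qvd);

Quotas:
LOAD SalesRep, OpportunityID, QuotaTarget
FROM Quotas.qvd (qvd);
```

Selecting a value in one table then filters the associated rows in the other across every chart on the dashboard, which is what makes iterating on a model so fast.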
We also have a company called DataRoket that builds connectors for us, e.g. to load relational data into QlikView. They have built adapters for QlikView-Hadoop integration.
Implementation
- Vendor implemented
Training
- In-person training
- Self-taught
My team holds training sessions for our internal users every month; 8-10 of our staff attend each session. We have an intro class and a power user class.
Support
Usability
Reliability
Integration
- Various databases/data containers, including Oracle, SQL Server, and Cassandra (which we use for time series event data and monitoring). We do not integrate directly with operational systems, e.g. for finance or CRM, but push data from those enterprise apps into a data layer so that we're not taxing the operational systems with queries. We have also built a star schema data mart for the cloud.
It was pretty simple to achieve via an ANSI SQL adapter over ODBC.
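In QlikView script terms, that kind of pull is a few lines. This is a sketch only; the DSN and table names below are assumptions for illustration, not our actual configuration:

```
// Hypothetical DSN and table: connect through ODBC,
// then issue ANSI SQL against the data layer.
ODBC CONNECT TO SalesWarehouse;

Orders:
SQL SELECT order_id, customer_id, order_date, total_amount
FROM dw.orders;
```

Because the query runs against our data layer rather than the operational system, the load doesn't tax the source applications.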
QlikView also has a 3rd party building data adapters for different systems like Hadoop.
We use Informatica to pull data from operational systems into the data layer.
Vendor Relationship
Make sure you do a bake-off to compare Tableau and other best-in-class systems like Microsoft PowerPivot.
Really understand their licensing model, as it has changed; they didn't have an enterprise licensing model when they started. You really need to think about the taxonomy of your users. For example, power user licenses are different from end-user licenses.
You also need to understand concurrency.