Finding entities that are NOT used / NOT associated (although they are supposed to be).
N-LOOKUP/hierarchical data visualization (example: hierarchically organized operational codes, prices, business conditions and business programs).
Compresses data well for up to 100 million transactional rows (I have compared it head-to-head with non-indexed SSAS cubes; QLV works faster). Give QLV at least 32 GB of RAM on a 64-bit architecture and you will see instant response across many millions of transactions, with instant grouping and counters.
Data extraction from sources is separated from the graphical part. This means you can run the ETL part (full or periodic extracts from the sources) separately and populate QVD files (an intermediate compressed format shared by all users). All users then connect their QLV reports to these QVD files, thus NEVER loading/affecting the (OLTP/DW) source at all.
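A minimal load-script sketch of this two-layer pattern (the table names, fields, and paths are hypothetical):

```qlikview
// ETL layer: extract once from the source DB, publish a compressed QVD.
Transactions:                               // assumed table name
LOAD TransactionID,
     CustomerID,
     Amount,
     TransactionDate;
SQL SELECT TransactionID, CustomerID, Amount, TransactionDate
FROM dbo.Transactions;                      // assumed source table

STORE Transactions INTO [D:\QVD\Transactions.qvd] (qvd);
DROP TABLE Transactions;

// Reporting layer (a separate document): load from the QVD only,
// so the OLTP/DW source is never touched by report users.
Transactions:
LOAD * FROM [D:\QVD\Transactions.qvd] (qvd);
```

Because QVD reads are optimized, the reporting layer reloads in a fraction of the time a direct database extract would take.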
The GUI is simple enough, and copying controls is simple. Filter selections made in the controls by the user are saved by default and reproduced when the QLV report is closed and reopened. In other words, the user keeps his/her specific business context. The "Current Selections" control (a simple GUI control) visualizes that business context.
Actions/Triggers (at the document/tab or user level) allow you to pre-populate user filter selections.
Outrageous marketing: all QLV manuals start with the words "Let's take a flat file as Data Source". Information is NOT supposed to be kept in flat files; that is written in Chapter 1 of any Introduction to Data Processing textbook in the world.
QLV first mentions the POSSIBILITY of connecting to a relational DB on page 200 of the QLV manual(!). In fact, a relational database source is a "second-class citizen" in QLV: you have to go into the data-extraction script instead of just specifying OLE DB or ODBC as a data source!
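For reference, this is roughly what the script-based connection looks like (the connection string, table, and fields are assumptions, not from any manual):

```qlikview
// Connecting to a relational DB happens inside the load script,
// not via a simple data-source picker in the GUI.
OLEDB CONNECT TO 'Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDW;Integrated Security=SSPI';

Orders:
LOAD OrderID,
     CustomerID,
     OrderDate;                // QlikView-side LOAD (optional transform step)
SQL SELECT OrderID, CustomerID, OrderDate
FROM Sales.Orders;             // the SQL is pushed down to the source DB
```

The preceding LOAD clause is where QlikView-side renaming and transformations go; the SQL SELECT itself executes on the database.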
QlikView salespeople could not answer the simple question: does QLV work against a relational DB, yes or no? Their answer was: "The question is too technical". The company clearly does not understand how to position its (absolutely wonderful) product: as a BACKEND data discovery and analysis product.
They will NEVER EVER use the words "relational DB source" or "powerful ETL capabilities" in their marketing. It is 100% concentrated on the GUI/interface and their QVD (compressed proprietary format) storage, NOT on real-life data extraction from a real DB source. That is a shame; the marketing department does not understand its target audience.
QLV help files are rudimentary and do not give good examples of actual DB data extraction and analysis. They could have done this using the Microsoft sample databases, but it was never done.
You can specify the percentage of RAM consumed by QLV. On reaching this percentage, QLV unfortunately becomes unresponsive. Aborting a running script does not work well. The modality of the script window and of the help files is not chosen correctly. QLV may hang and crash on large data.
ETL delivery (validation/direct comparison, driven by hash values on the source and target).
Operational processes with multiple (disjointed/independent) sources.
Flat and XML file comparison.
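The hash-driven validation above can be sketched in the load script like this (table names, fields, and QVD paths are hypothetical):

```qlikview
// Source/target comparison driven by row-level hash values.
Source:
LOAD OrderID,
     Hash128(OrderID, CustomerID, Amount) AS SrcHash  // fingerprint of the source row
FROM [D:\QVD\Source.qvd] (qvd);

Target:
LOAD OrderID,
     Hash128(OrderID, CustomerID, Amount) AS TgtHash  // fingerprint of the target row
FROM [D:\QVD\Target.qvd] (qvd);

// The two tables associate automatically on OrderID; a simple table box
// with OrderID, SrcHash and TgtHash immediately exposes rows whose
// hashes disagree, or rows present on only one side.
```

Comparing one hash per row is far cheaper than comparing every column pair by pair, which is what makes this practical at millions of rows.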
In a situation with 20-30 different entities, QlikView will "join everything to everything" and show what is associated with what - by DEFAULT - going through a large number of intermediate joins. It will do so in sub-second time with up to millions of entities. In 70% of cases you will see correctly joined/pertinent data. To go beyond that, however, you will need to introduce meaningful composite keys, add hash values for the columns being compared, review case sensitivity of the values, and analyze for cross joins/absence of relationships between some of the entities.
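A sketch of replacing the default field-name association with a meaningful composite key, with case normalized along the way (all table and field names here are assumptions):

```qlikview
// Build the same case-insensitive composite key on both sides.
Orders:
LOAD *,
     Upper(Region) & '|' & Upper(ProductCode) AS %OrderKey
FROM [D:\QVD\Orders.qvd] (qvd);

Prices:
LOAD *,
     Upper(Region) & '|' & Upper(ProductCode) AS %OrderKey
FROM [D:\QVD\Prices.qvd] (qvd);

// Drop the raw fields from one side so only %OrderKey associates the
// tables, avoiding an accidental synthetic key or a cross join.
DROP FIELDS Region, ProductCode FROM Prices;
```

Without this step, two shared field names would produce a synthetic key, which is exactly the kind of hidden intermediate join the paragraph above warns about.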