Tableau is an extremely powerful tool. It provides the flexibility of connecting to a wide variety of data sources (Excel documents, text files, Hive, Oracle, Postgres, etc.) and moving between them with relative ease. It also allows the user to 'blend' data from two independent data sources, provided a common key exists between them.
The visualization aspect is where Tableau really shines. It not only lets the user create visually appealing representations of their data, but does so with impressive speed. Tableau supports extremely large datasets (millions of records), and gives the user the choice of generating a static extract to take offline or connecting live to a data source for real-time updates.
Additionally, the number of formatting options available to the user is overwhelming at first. However, once you are comfortable with the tool (it took me about a month to get the basics down), having all of these options at your fingertips is very welcome, as it allows you to capture the exact 'look and feel' you are aiming for.
We did have some challenges setting up an instance of each workbook on different environments. We were running two instances of Tableau Server: one for QA and one for PROD. We needed an automated deployment process that modified the data connections to point to one of four different QA environments. We eventually landed on a solution, with individual projects for each environment on a single server, that met our needs, but the code behind it was very much custom-built.
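To give a flavor of what that custom deployment code had to do: a Tableau workbook (.twb) file is plain XML, so the connection details can be rewritten before publishing. The sketch below is a minimal illustration of that idea, not our actual script; the environment names, server hostnames, and database names are all hypothetical, and the exact XML attributes can vary by connector and Tableau version.

```python
# Minimal sketch of rewriting a .twb workbook's connections per environment.
# Assumes <connection> elements carry 'server' and 'dbname' attributes,
# which may differ by connector type and Tableau version.
import xml.etree.ElementTree as ET

# Hypothetical environment map: name -> (database server, database name)
ENVIRONMENTS = {
    "qa1": ("qa1-db.example.com", "analytics_qa1"),
    "qa2": ("qa2-db.example.com", "analytics_qa2"),
    "qa3": ("qa3-db.example.com", "analytics_qa3"),
    "qa4": ("qa4-db.example.com", "analytics_qa4"),
}

def retarget_workbook(twb_path: str, env: str, out_path: str) -> int:
    """Point every <connection> in the workbook at the given environment.

    Returns the number of connections rewritten.
    """
    server, dbname = ENVIRONMENTS[env]
    tree = ET.parse(twb_path)
    count = 0
    # Walk the whole document for <connection> elements and overwrite
    # the server/dbname attributes wherever they are present.
    for conn in tree.getroot().iter("connection"):
        if "server" in conn.attrib:
            conn.set("server", server)
            conn.set("dbname", dbname)
            count += 1
    tree.write(out_path, xml_declaration=True, encoding="utf-8")
    return count
```

In practice a script like this would run once per target environment, with the rewritten workbook then published to the matching project on the server (for example via Tableau's tabcmd utility), which is roughly how our one-server, project-per-environment setup worked.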
I also noticed that generating an extract, which you can then publish to the server and schedule to refresh, forces the user to build the full extract locally on their machine before publication. Depending on the size of the dataset and the origin of the data, these extracts can take many hours to finish; unfortunately, simply defining the extract on the server during setup and letting the more powerful machine shoulder the processing load is not an option.