Reviews (1-8 of 8)
The ability to integrate data from different sources into a single version is a big plus for Data Virtualization
- Reduce duplication of data
- Easy to use and run queries on different data sources
- It is versatile, but that versatility also makes it quite complex if you haven't fully thought things through before you start writing the views.
- Performance is highly dependent on queries used
Cloetta’s data was mainly stored in an enterprise data warehouse (EDW). The time needed to make new data available for reporting through the EDW was relatively long. Local and external data comes in high volumes, is scattered in many parts or objects, and is unstructured. The business expects a fast time to solution, thus eventually data needs to be retrieved in real time.
The business was manually creating reports, which was time consuming, inefficient, exposed to errors, and sometimes impossible due to large amounts of data. Data and reports were not readily available for everyone who needed them.
Cloetta wanted to find a way to combine global, local, internal, and external data into one comprehensive report. The limited number of users on some of these reports required that any solution be cost-efficient.
- Faster Time to Solution
- We can combine new data that we made available through data virtualization with the data that we had before in the enterprise data warehouse. It’s a strong combination.
- No integrated version management and no support for team-based development (with check-out and check-in)
- Performance: more out-of-the-box query optimization is needed
- 'Type ahead' and auto-correction/detection of table and field names in scripts is missing, so a lot of manual, textual work is required
Well suited to reading data from Excel files, although setting up an efficient, error-proof process requires building a foundation first, which is quite time-consuming; once it's in place, it works well.
Solution is structured nicely in a folder hierarchy.
Less appropriate when you need to add a lot of business logic to the data, or to enrich it. Such logic is hard to implement and slows performance down considerably.
Less appropriate for creating real-time insights; caching or storing the data in a database is almost always required to get good performance.
- Rich data connectivity and intelligent analysis to discover explicit and implicit relations between data in various sources.
- Intelligent and rich query optimization algorithms that minimize pressure on source systems and make use of the performance-enhancement capabilities of the source systems.
- There is one version of the TDV tool that holds all functionality. No additional licenses needed; everything is included.
- MPP functionality (TDV 8.0.0) makes automatic query distribution over large (big data) clusters possible and guarantees very high performance when querying hundreds of millions of rows.
- The TDV Studio application combines the use of ANSI-SQL (you can script for more complex tasks) with a graphical user interface for modeling the data layers, both supporting fast development of data and web services. TDV Server is a complete package with all the enterprise grade functionality an organization may need.
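The reviewer's point about modeling virtual data layers with ANSI-SQL can be illustrated generically. This is a minimal sketch of the underlying idea, not TDV itself: it uses Python's built-in SQLite as a stand-in, with two local tables playing the role of separate source systems and an ANSI-SQL view acting as the published, combined data set. All table and column names here are invented for illustration.

```python
import sqlite3

# Stand-in for two separate source systems; in a data virtualization layer
# these would be remote sources, here both live in one in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE erp_orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE crm_customers (customer_id INTEGER, name TEXT);
    INSERT INTO erp_orders VALUES (1, 10, 250.0), (2, 11, 99.5);
    INSERT INTO crm_customers VALUES (10, 'Acme'), (11, 'Globex');

    -- An ANSI-SQL view joining the two "sources" into one virtual data set,
    -- analogous to a view published by a virtualization layer.
    CREATE VIEW customer_orders AS
    SELECT c.name, o.order_id, o.amount
    FROM erp_orders o
    JOIN crm_customers c ON c.customer_id = o.customer_id;
""")

rows = conn.execute(
    "SELECT name, amount FROM customer_orders ORDER BY order_id"
).fetchall()
print(rows)  # [('Acme', 250.0), ('Globex', 99.5)]
```

Consumers query only the view; where the rows physically live is hidden behind it, which is the core of the approach the review describes.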
- We would love to see an upgrade of the general look-and-feel of the user interface.
Data cleansing features, applied before virtualization, would need to be added.
- Scales out horizontally and vertically to support federated queries against growing data volumes
- Supports a wide range of data sources, including relational, NoSQL, message queues/streaming, and big data
- Metadata and data lineage support is not very robust
- Poor integration with data modeling tools, for forward- and reverse-engineering
- Response times are not always consistent and adequate in real-time queries
It is well suited in cases when there are many upstream heterogeneous sources, and the data sets are persisted in periodically refreshed cache, preferably database-based cache (although file-system or in-memory cache are options as well).
It is less appropriate for time-series analysis because the history is limited to the amount of history in the underlying source data repositories.
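The periodically refreshed, database-backed cache the reviewer recommends can be sketched in a few lines. This is a generic illustration, not a TDV feature: the refresh interval, table names, and the `fetch_from_source` helper are all assumptions standing in for an expensive federated query.

```python
import sqlite3
import time

CACHE_TTL_SECONDS = 300  # assumed refresh policy: rebuild every 5 minutes

def fetch_from_source():
    """Stand-in for an expensive federated query against upstream sources."""
    return [("eu", 120), ("us", 340)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (region TEXT, total INTEGER)")
conn.execute("CREATE TABLE cache_meta (refreshed_at REAL)")

def query_with_cache(conn):
    row = conn.execute("SELECT refreshed_at FROM cache_meta").fetchone()
    stale = row is None or (time.time() - row[0]) > CACHE_TTL_SECONDS
    if stale:
        # Rebuild the database-backed cache from the upstream sources.
        conn.execute("DELETE FROM cache")
        conn.executemany("INSERT INTO cache VALUES (?, ?)", fetch_from_source())
        conn.execute("DELETE FROM cache_meta")
        conn.execute("INSERT INTO cache_meta VALUES (?)", (time.time(),))
    return conn.execute("SELECT region, total FROM cache ORDER BY region").fetchall()

print(query_with_cache(conn))  # [('eu', 120), ('us', 340)]
```

Between refreshes, every consumer query hits the local cache table instead of the upstream systems, which is why response times become consistent at the cost of some data freshness.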
- For insurance companies, it is easy to link the client's bank account with the insurance company database
- Service Level Agreements such as product licenses can be linked with a company's employee databases
- Helpdesk Systems are easy to manage since one interface can provide information from multiple company databases that are related to a single account
- Sales and Marketing departments can make an assessment of their penetration level and environmental analysis
- Hospitals need such databases when a client has been through multiple specialists, and it would be better to add specialized platforms centered on the health industry
- Businesses that are in partnership need to share resources and accounting information, e.g., software development
- Manufacturing industries can use it to factor environmental and climate changes into business analysis