Data Warehousing with a hint of OLTP
July 22, 2016

Seth Goldberg | TrustRadius Reviewer
Score 7 out of 10
Vetted Review
Verified User

Overall Satisfaction with Oracle Data Warehouse

Oracle Data Warehouse served as the central data warehouse at a previous company. Analysts connected to it to run queries across many types of data, including customer, transaction, and behavioral data. It also powered the reporting done by the business.
Strengths:
  • Strong developer toolset, e.g. PL/SQL, partitioning, compression
  • Rich syntax
  • Rock-solid reliability
  • Compatibility with other tools
  • Industry support
Room for improvement:
  • More automated functionality (e.g. automated table analysis, better automated partitioning)
  • Support for shared-nothing architecture
  • Steep learning curve
Business impact:
  • Allowed the organization to conduct more thorough, data-supported analysis
  • Provided insights that were previously not available to the business
  • Allowed the organization to become more data-driven
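To illustrate the partitioning and compression features mentioned among the strengths, here is a minimal sketch of an Oracle range-partitioned, compressed table (the table and column names are hypothetical, not from the review):

```sql
-- Hypothetical example: a fact table range-partitioned by date,
-- with basic table compression enabled.
CREATE TABLE sales_fact (
  sale_id    NUMBER,
  sale_date  DATE,
  amount     NUMBER(12,2)
)
COMPRESS
PARTITION BY RANGE (sale_date) (
  PARTITION p2016h1 VALUES LESS THAN (DATE '2016-07-01'),
  PARTITION p2016h2 VALUES LESS THAN (DATE '2017-01-01'),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);
```

Partitioning by date like this lets the optimizer prune whole partitions from warehouse queries that filter on `sale_date`, which is one reason it matters for large analytic tables.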
Oracle is a lot cheaper than traditional data warehouse appliance solutions, even after paying for an experienced DBA who knows what he or she is doing. It definitely takes more work to ensure it scales as your data grows, and while it may not scale well past terabyte-sized data sets, the organization was nowhere near that point in terms of data volume. It was also extremely flexible compared with most other data warehousing products, since it is more OLTP in nature: you can use languages like PL/SQL to perform complex transformations that might not be possible in plain SQL, which let us implement much more sophisticated solutions. Overall, it is a good product that provides a good blend of OLAP and OLTP features and performance.
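As an illustration of the kind of procedural transformation PL/SQL makes easy, here is a minimal sketch that sessionizes clickstream rows with row-by-row logic (all table and column names are hypothetical, not from the review):

```sql
-- Hypothetical example: assign session IDs to click events,
-- starting a new session on a customer change or a 30-minute gap.
DECLARE
  CURSOR c_events IS
    SELECT customer_id, event_time
      FROM click_events           -- hypothetical source table
     ORDER BY customer_id, event_time;
  v_prev_cust  click_events.customer_id%TYPE;
  v_prev_time  DATE;
  v_session_id NUMBER := 0;
BEGIN
  FOR r IN c_events LOOP
    -- DATE subtraction yields days, so 30/1440 is a 30-minute gap
    IF v_prev_cust IS NULL
       OR r.customer_id <> v_prev_cust
       OR r.event_time - v_prev_time > 30 / 1440 THEN
      v_session_id := v_session_id + 1;
    END IF;
    INSERT INTO click_sessions (customer_id, session_id, event_time)
    VALUES (r.customer_id, v_session_id, r.event_time);
    v_prev_cust := r.customer_id;
    v_prev_time := r.event_time;
  END LOOP;
  COMMIT;
END;
/
```

Logic like this can also be expressed with analytic SQL functions, but the procedural version is often easier to write, debug, and extend, which is the flexibility the review is pointing at.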
Scenarios where appropriate:
  • Heavy investment in other Oracle databases
  • Availability of knowledgeable Oracle staff
  • Plenty of money for the database and all the add-ons
  • Need for well supported platform
  • Data sets that are not ridiculously big. Once table sizes hit the hundreds of gigabytes, it starts getting very hard to scale
Scenarios where not appropriate:
  • Limited budget
  • Desire to use open source software
  • HUGE datasets. Until the architecture can operate in a shared-nothing fashion, it will only scale to the size of the biggest box you can get. Even that may not be enough...
  • Lots of semi/unstructured data
  • Staff with limited knowledge of tuning it