TIBCO EMS, the ideal enterprise service bus
September 20, 2020
Anonymous | TrustRadius Reviewer
Score 8 out of 10
Vetted Review
Verified User

Overall Satisfaction with TIBCO Enterprise Message Service

TIBCO EMS is one of the fundamental backbone components in our company's IT landscape. Because of the hub-and-spoke architecture, millions of core business records from corporate systems across all geographic regions are distributed via TIBCO EMS, and all JMS-based core applications are driven by those change events, covering manifest, scanning, revenue, address, clearance, and other domains.
  • It is very stable, and its performance handles huge data volumes well.
  • Its global topic routing concept is very good and supports the hub-and-spoke architecture.
  • It "glues" together the other TIBCO products (BusinessWorks, BusinessEvents, ActiveSpaces, etc.) seamlessly.
  • When using direct point-to-point communication via a queue instead of the Publisher -> Topic -> Bridge -> Queue -> Consumer design, it has performance issues at high volume.
  • A native monitoring solution is definitely something that should be added.
  • Because data is persisted on the file system, pre-warming is slow if EMS is rebooted while a large backlog of data has piled up.
  • Cross-data-center data synchronization is also missing.
  • It promotes an event-driven, asynchronous architecture, accelerating the shift to service-oriented design by decoupling data producers from data consumers.
  • Applications can easily be "attached" to the data flow to quickly roll out added business value without risking changes to existing applications or components.
  • Online training
  • In-person training
Get a better understanding of the overall architecture and benefit from the best practices.
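The producer/consumer decoupling highlighted in the pros above can be sketched in plain Java. This is only an illustrative, in-memory simulation of the Publisher -> Topic -> Bridge -> Queue -> Consumer pattern under assumed semantics (one copy of each event per bridged queue); the class and event names are invented, and real code would use the JMS API against an EMS server instead.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical simulation of topic-to-queue bridging; not the EMS or JMS API.
public class TopicBridgeDemo {

    // A "topic": every bridged queue receives a copy of each published event.
    static class Topic {
        private final List<BlockingQueue<String>> bridgedQueues = new CopyOnWriteArrayList<>();

        // Bridge a new queue onto the topic; a consumer can attach here
        // without the publisher knowing about it or changing.
        BlockingQueue<String> bridgeNewQueue() {
            BlockingQueue<String> q = new LinkedBlockingQueue<>();
            bridgedQueues.add(q);
            return q;
        }

        void publish(String event) {
            for (BlockingQueue<String> q : bridgedQueues) {
                q.add(event); // fan out one copy per bridged queue
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Topic masterData = new Topic();

        // Two independent consumer applications attach via their own queues.
        BlockingQueue<String> manifestQueue = masterData.bridgeNewQueue();
        BlockingQueue<String> revenueQueue = masterData.bridgeNewQueue();

        // The publisher only knows the topic, never the consumers.
        masterData.publish("shipment-scanned:12345");

        // Each application receives its own copy of the same event.
        System.out.println("manifest app got: " + manifestQueue.take());
        System.out.println("revenue app got: " + revenueQueue.take());
    }
}
```

The point of the sketch is that adding a third application is just another `bridgeNewQueue()` call, which mirrors how new consumers can be "attached" to an EMS topic via a bridged queue without touching the publisher.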
IBM MQ was the old product we used, and we migrated from it to TIBCO EMS. We also use RabbitMQ for our lightweight IPC scenarios and Kafka for our data streaming use cases. Together, the three compose our complete messaging and streaming solution.
It is very suitable for transferring "master data events" among event-driven applications, where the data is only "passing by" and there is no need to reprocess it. But if more advanced data streaming or data re-fetching is required, you should go with Kafka instead. It is also too heavy for IPC (communication between components within the same application).
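The "passing by" versus "re-fetching" distinction can be sketched as a hypothetical pure-Java illustration of the two consumption models; this is not the EMS or Kafka API, it only models queue semantics versus log semantics with invented data.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Hypothetical model: why re-fetching favors a log over a queue.
public class QueueVsLogDemo {
    public static void main(String[] args) {
        // Queue semantics (EMS-style point-to-point): a delivered message is gone.
        Queue<String> queue = new ArrayDeque<>(List.of("e1", "e2"));
        String delivered = queue.poll(); // removes "e1" from the queue
        System.out.println("queue delivered " + delivered
                + ", remaining " + queue.size());

        // Log semantics (Kafka-style): consumers track an offset into an
        // immutable log, so "re-fetching" is just rewinding the offset.
        List<String> log = List.of("e1", "e2");
        int offset = 0;
        String first = log.get(offset); // read "e1"
        offset = 0;                     // rewind to re-read
        String again = log.get(offset); // "e1" is still there
        System.out.println("log re-read " + first + " as " + again);
    }
}
```

In the queue model the second consumer of `"e1"` would never see it, which is why, as noted above, Kafka is the better fit when downstream applications need to re-fetch or replay data.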