OpenText Optimost vs. Optimizely Feature Experimentation

Overview
Product | Rating | Most Used By | Product Summary | Starting Price

OpenText Optimost
Score 7.0 out of 10
Most Used By: N/A
OpenText Optimost is designed to help companies deliver engaging, profitable websites and campaigns, and includes self-service capabilities. Optimost also provides white-glove consulting to help companies test confidently when the stakes and complexity are highest, act immediately when speed is of the essence, and match the perfect content to every customer.
Starting Price: N/A

Optimizely Feature Experimentation
Score 8.3 out of 10
Most Used By: N/A
Optimizely Feature Experimentation unites feature flagging, A/B testing, and built-in collaboration, so marketers can release, experiment, and optimize with confidence in one platform.
Starting Price: N/A
Pricing
OpenText Optimost | Optimizely Feature Experimentation
Editions & Modules: No answers on this topic | No answers on this topic
Pricing Offerings
OpenText Optimost | Optimizely Feature Experimentation
Free Trial: No | No
Free/Freemium Version: No | Yes
Premium Consulting/Integration Services: No | Yes
Entry-level Setup Fee: No setup fee | Required
Best Alternatives
OpenText Optimost | Optimizely Feature Experimentation
Small Businesses: Convert Experiences (Score 9.9 out of 10) | GitLab (Score 8.7 out of 10)
Medium-sized Companies: Dynamic Yield (Score 9.0 out of 10) | GitLab (Score 8.7 out of 10)
Enterprises: Dynamic Yield (Score 9.0 out of 10) | GitLab (Score 8.7 out of 10)
User Ratings
OpenText Optimost | Optimizely Feature Experimentation
Likelihood to Recommend: 10.0 (1 rating) | 8.3 (48 ratings)
Likelihood to Renew: 10.0 (1 rating) | 4.5 (2 ratings)
Usability: - (0 ratings) | 7.7 (27 ratings)
Support Rating: - (0 ratings) | 3.6 (1 rating)
Implementation Rating: - (0 ratings) | 10.0 (1 rating)
Product Scalability: - (0 ratings) | 5.0 (1 rating)
User Testimonials
OpenText Optimost | Optimizely Feature Experimentation
Likelihood to Recommend
OpenText
The ease of implementation combined with the managed services results in a tool that virtually anyone can use. Implementation is less than 10 lines of code added to the relevant pages of the website (we simply added it to our master page template to have it available on any page), and from there the customer can be as involved or uninvolved as they wish. At BSI we are very hands-on with the testing programme, usually developing and designing the tests ourselves and having HP build them, but if we wanted HP to develop, design, and build, and to limit our role to QA and review, that is an option.
Read full review
Optimizely
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it may be less suitable.
Well-suited scenarios:
  • Multi-channel product launches
  • Complex A/B testing and feature flag management
  • Gradual rollout and risk mitigation
Less-suited scenarios:
  • Simple A/B tests (their Web Experimentation product is probably better for that)
  • Non-technical team usage
Read full review
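The "gradual rollout and risk mitigation" the reviewer highlights is typically built on deterministic bucketing: a stable user ID is hashed into a bucket and compared against the rollout percentage, so a given user always gets the same decision as the percentage ramps up. A minimal sketch of that idea (this is an illustrative algorithm, not Optimizely's actual implementation; all names here are hypothetical):

```python
import hashlib

def in_rollout(user_id: str, flag_key: str, percentage: float) -> bool:
    """Deterministically decide whether a user is in a percentage rollout."""
    # Hash the user ID together with the flag key so each flag buckets
    # users independently; identical inputs always hash identically,
    # which keeps decisions sticky across requests.
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x1_0000_0000  # map hash to [0, 1)
    return bucket < percentage / 100.0

# Ramping from 10% to 50% only adds users; anyone already in stays in,
# because their bucket value never changes.
decision = in_rollout("user-42", "guided_setup", 25)
```

Raising `percentage` over time widens the admitted bucket range without reshuffling existing users, which is what makes a rollout "gradual" and a rollback low-risk.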
Pros
OpenText
  • Because it is a managed service, the need for intervention by our internal IT group was removed. This allowed us to control the pace of the testing programme without being influenced by IT resource allocation.
  • The client and technical account managers are very good at suggesting tests or potential improvements.
  • HP regularly holds customer forums, which are always informative and provide an opportunity to learn from and network with peers and industry leaders.
Read full review
Optimizely
  • It is easy to use: any of our product owners, marketers, or developers can set up experiments and roll them out with some developer support. So the key thing there is that the front-end UI is easy to use, and, though this may come later, new features such as Opal and the analytics/data-centric engine are something we're interested in as well.
Read full review
Cons
OpenText
  • The dashboard interface is difficult to navigate, but I understand that they are currently developing/testing a new, much more user-friendly interface.
  • The cost can be a barrier for some organisations, but for us it is worth it. They are also in the process of releasing a less expensive self-authoring testing tool.
Read full review
Optimizely
  • It would be nice to be able to switch variants quickly and effectively, say from an MVT to a 50:50 split, if one of the variants is not performing well, so the standardised report can still be used.
  • The interface can feel very bare-bones, without many graphs or visuals, which other providers include to make it a bit more engaging.
  • It doesn't show easily what each live variant looks like, so it can be hard to remember what is actually being shown in each test.
Read full review
Likelihood to Renew
OpenText
We have not only renewed our subscription three years running, but we have also added the self-authoring tool and are looking to expand the subscription so that we can take advantage of the managed services on a global level.
Read full review
Optimizely
Competitive landscape
Read full review
Usability
OpenText
No answers on this topic
Optimizely
It is easy to navigate the UI. Once you know how to use it, it is very easy to run experiments. And when the experiment is set up, the SDK code variables are generated and available for developers to use immediately, so they can quickly build the experiment code.
Read full review
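The reviewer's point about generated SDK variables reflects the usual feature-flag pattern: the client returns a decision object whose typed variables drive the experiment code. A minimal, self-contained sketch of that pattern (the class and method names here are illustrative stand-ins, not the actual Optimizely SDK API):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """What a flag evaluation hands back to application code."""
    enabled: bool
    variation_key: str
    variables: dict = field(default_factory=dict)

class FlagClient:
    """Toy stand-in for a feature-flag client (illustrative only)."""

    def __init__(self, flags: dict):
        # flag_key -> Decision; a real client would load its config from
        # a datafile and bucket each user deterministically.
        self._flags = flags

    def decide(self, user_id: str, flag_key: str) -> Decision:
        # This sketch ignores user_id and returns the configured decision;
        # unknown flags fall back to a safe "off" default.
        return self._flags.get(flag_key, Decision(False, "off"))

client = FlagClient({"guided_setup": Decision(True, "variation_b", {"steps": 3})})
decision = client.decide("user-42", "guided_setup")
if decision.enabled:
    steps = decision.variables["steps"]  # typed variable wired into app code
```

The appeal the reviewer describes is that the experiment setup in the UI defines `variables`, and developers only consume the decision object, so experiment code can be written before the flag is ever turned on.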
Support Rating
OpenText
No answers on this topic
Optimizely
Support was there, but it was pretty slow most of the time. Only after escalation was support really given to our teams.
Read full review
Implementation Rating
OpenText
No answers on this topic
Optimizely
It's straightforward. The docs are well written, and I believe support is available, but we haven't used it.
Read full review
Alternatives Considered
OpenText
We evaluated Optimost against Adobe's similar offering (Target). The big difference between the two, and the reason why BSI chose Autonomy, was the managed service aspect. The idea that once the code was deployed on the site IT no longer had to be involved gave my team full ownership of the testing programme. With the Adobe product, the involvement of the internal IT group would have been required to launch each test, and this would have decreased the number of tests we could run each month. Back in the day I also used Offermatica/Omniture, and this too required IT involvement.
Read full review
Optimizely
When Google Optimize was discontinued, we searched for a tool that would ensure a good GA4 implementation and be easy to use for both the IT team and the product team. Optimizely Feature Experimentation seems to have a good balance between pricing and capabilities. If you are searching for an experimentation tool and personalization all in one, then maybe this comparison changes and Optimizely turns out to be expensive; the same applies if you want a server-side solution. For us, it will be a challenge in the following years.
Read full review
Scalability
OpenText
No answers on this topic
Optimizely
We had trouble with performance for SSR and the React SDK.
Read full review
Return on Investment
OpenText
  • Using HP Optimost was the primary driver behind a 40% increase in UK classroom training courses booked online; more details here: http://www.autonomy.com/work/news/details/hsx6767d
  • HP Optimost testing led to a 9% increase in sales by improving the BSI Shop's checkout funnel in 2012.
  • HP Optimost is integral to the success of BSI's continuous improvement testing programme.
Read full review
Optimizely
  • We have a huge, noteworthy ROI case study of how we did a SaaS onboarding revamp early this year. Our A/B test on a guided setup flow improved activation rates by 20 percent, which translated to over $1.2m in retained ARR.
Read full review
Screenshots

Optimizely Feature Experimentation Screenshots

Screenshot of Feature Flag Setup. Here users can run flexible A/B and multi-armed bandit tests, as well as:

- Set up a single feature flag to test multiple variations and experiment types
- Enable targeted deliveries and rollouts for more precise experimentation
- Roll back changes quickly when needed to ensure experiment accuracy and reduce risks
- Increase testing flexibility with control over experiment types and delivery methods

Screenshot of Audience Setup. This is used to target specific user segments for personalized experiments, and:

- Create and customize audiences based on user attributes
- Refine audience segments to ensure the right users are included in tests
- Enhance experiment relevance by setting specific conditions for user groups

Screenshot of Experiment Results, supporting the analysis and optimization of experimentation outcomes. Viewers can also:

- Examine detailed experiment results, including key metrics like conversion rates and statistical significance
- Compare variations side-by-side to identify winning treatments
- Use advanced filters to segment and drill down into specific audience or test data

Screenshot of a Program Overview. These offer insights into any experimentation program’s performance. It also offers:

- A comprehensive view of the entire experimentation program’s status and progress
- Monitoring for key performance metrics like test velocity, success rates, and overall impact
- Evaluation of the impact of experiments with easy-to-read visualizations and reporting tools
- Performance tracking of experiments over time to guide decision-making and optimize strategies

Screenshot of AI Variable Suggestions. These enhance experimentation with AI-driven insights, and can also help with:

- Generating multiple content variations with AI to speed up experiment design
- Improving test quality with content suggestions
- Increasing experimentation velocity and achieving better outcomes with AI-powered optimization

Screenshot of Schedule Changes, to streamline experimentation. Users can also:

- Set specific times to toggle flags or rules on/off, ensuring precise control
- Schedule traffic allocation percentages for smooth experiment rollouts
- Increase test velocity and confidence by automating progressive changes