Overall Satisfaction with Optimizely Feature Experimentation
We are using Optimizely Feature Experimentation (FE) across all of our live eCommerce websites, and we use it actively in our core experiences to test, analyze, and experiment, as well as to activate and control features on the spot through its various entities. The scope spans multiple markets (UK/US), and we aim to always experiment with major changes, or release them to small alpha and beta groups, before rolling new features out to the whole user base.
- A/B experiments
- Multivariate experimentation
- Traffic splitting
- Applying filters and exclusions
- Connecting end-to-end and analyzing metrics
- Implementing new features
- Splitting feature flags from the actual experiments is slightly clunky. It would help if both could be managed on the same page, or better still, if you could create a flag on the spot while starting an experiment, rather than always having to start with a flag.
- AI-based recommendations for metrics to track, derived from the experiment description
- We have improved various metrics over the course of our experimentation program with Optimizely, so sharing specific numbers is tricky. Essentially, we only ship the versions of the product that perform best on CVR, revenue per visitor, ATV, average order value, average basket size, and so forth, depending on the north-star metric we are trying to move with each release.
- Amplitude Analytics and Split
Overall, Optimizely Feature Experimentation is an industry leader for experimentation across web and mobile. For apps specifically, I would say Amplitude does a slightly better job, as it is tailored to that niche.
Do you think Optimizely Feature Experimentation delivers good value for the price?
Yes
Are you happy with Optimizely Feature Experimentation's feature set?
Yes
Did Optimizely Feature Experimentation live up to sales and marketing promises?
I wasn't involved with the selection/purchase process
Did implementation of Optimizely Feature Experimentation go as expected?
Yes
Would you buy Optimizely Feature Experimentation again?
Yes