Best suited for conducting remote interviews that are moderated and facilitated by the interviewer/researcher.
It is not the best if you want to run unmoderated studies; there are much more sophisticated tools out there. Unfortunately, for a design research team that does both kinds of research, it can be hard to get budget for two tools, so the unmoderated feature can seem very undercooked and doesn't really do the job.
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it may be less suitable.

Well-suited scenarios:
- Multi-channel product launches
- Complex A/B testing and feature flag management
- Gradual rollout and risk mitigation

Less suited scenarios:
- Simple A/B tests (their Web Experimentation product is probably better for that)
- Non-technical team usage
Its ability to run A/B tests and multivariate experiments simultaneously allows us to identify the best-performing options quickly.
Optimizely integrates with our analytics tools, giving us immediate feedback on how our experiments are performing. This tool helps us avoid interruptions, and with this pairing we can arrive at informed decisions quickly.
Additionally, feature toggles enable us to introduce new features or modifications to specific user groups, guaranteeing a smooth and controlled user experience.
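The controlled-rollout idea behind such feature toggles can be sketched as a deterministic percentage bucket. This is a minimal illustration of the technique, not Optimizely's actual SDK; the function name and hashing scheme are my own assumptions:

```python
import hashlib

def in_rollout(user_id: str, flag_key: str, percentage: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing user_id together with the flag key means the same user
    always gets the same answer for a given flag (no flip-flopping),
    while different flags bucket users independently.
    """
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable value in 0..99
    return bucket < percentage

# Example: roll a hypothetical "new-checkout" flow out to 20% of users.
if in_rollout("user-42", "new-checkout", 20):
    pass  # serve the new experience
```

Raising `percentage` over time widens the audience without re-bucketing users who already have the feature, which is what makes gradual rollouts low-risk.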
Splitting feature flags from actual experiments is slightly clunky. It could be handled on the same page, or better still, you could create a flag on the spot while starting an experiment, rather than always having to start with a flag.
Recommending metrics to track, based on the experiment description, using AI.
I think setting up experiments is very straightforward. It's also very easy to get started on the code side. If someone were new to Optimizely Feature Experimentation, there could be some confusion between a flag and an experiment. I still get confused sometimes about whether I turned the right thing on or off.
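The flag-versus-experiment confusion comes down to two separate switches: the flag must be enabled *and* the experiment behind it must be running. A toy sketch of that relationship (the names and structure here are my own, not Optimizely's API):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Flag:
    key: str
    enabled: bool  # the on/off delivery switch

@dataclass
class Experiment:
    flag_key: str          # an experiment runs behind a flag
    variations: List[str]  # e.g. ["control", "treatment"]
    running: bool          # a second, independent switch

def serve(flag: Flag, experiment: Optional[Experiment],
          default: str = "off") -> str:
    """The flag gates delivery; the experiment (if running) picks the variation."""
    if not flag.enabled:
        return default  # flag off: nobody sees anything, experiment or not
    if experiment and experiment.running and experiment.flag_key == flag.key:
        return experiment.variations[0]  # bucketing logic elided for brevity
    return "on"  # flag on but no running experiment: plain rollout
```

The easy mistake is toggling one switch and assuming the other followed: an enabled flag with a stopped experiment silently serves the plain rollout, not a variation.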
Zoom was way more expensive, and it is designed to do other things apart from just running qualitative interviews. It also requires a different kind of approval, and a different approval process to go through, when trying to get it simply for qualitative research purposes.
Lookback records, transcribes, helps with observation, and provides a sentiment check as well, all at the price that it does.
We haven't evaluated other products. We have an in-house product that is missing a lot of features and is far behind in making the testing process easier. Instead of evolving our in-house product with limited resources, we decided to go with Optimizely Feature Experimentation when we saw that other big organisations are partnering with you.
Experimentation is key to figuring out the impact of changes made on-site.
Experimentation is very helpful with pricing tests and other backend tests.
Before running an experiment, many factors need to be evaluated, such as conflicting experiments, audience, user profile service, etc. This requires a considerable amount of time.