Lookback vs. Optimizely Feature Experimentation

Overview
Lookback
  Rating: Score 7.5 out of 10
  Most Used By: N/A
  Product Summary: Lookback is a UX research platform for moderated and unmoderated research on mobile and desktop, from the company of the same name in Palo Alto.
  Starting Price: N/A

Optimizely Feature Experimentation
  Rating: Score 7.7 out of 10
  Most Used By: N/A
  Product Summary: Optimizely Feature Experimentation combines experimentation, feature flagging, and purpose-built collaboration features in one platform.
  Starting Price: N/A
Pricing
Editions & Modules
  Lookback: No answers on this topic
  Optimizely Feature Experimentation: No answers on this topic

Pricing Offerings
  Free Trial
    Lookback: No
    Optimizely Feature Experimentation: No
  Free/Freemium Version
    Lookback: No
    Optimizely Feature Experimentation: Yes
  Premium Consulting/Integration Services
    Lookback: No
    Optimizely Feature Experimentation: Yes
  Entry-level Setup Fee
    Lookback: No setup fee
    Optimizely Feature Experimentation: Required
Community Pulse

Top Pros
  No answers on this topic

Top Cons
  No answers on this topic
Best Alternatives

Small Businesses
  Lookback alternative: Smartlook (Score 8.3 out of 10)
  Optimizely Feature Experimentation alternative: GitLab (Score 8.6 out of 10)

Medium-sized Companies
  Lookback alternative: Optimal Workshop (Score 9.2 out of 10)
  Optimizely Feature Experimentation alternative: GitLab (Score 8.6 out of 10)

Enterprises
  Lookback alternative: Optimal Workshop (Score 9.2 out of 10)
  Optimizely Feature Experimentation alternative: GitLab (Score 8.6 out of 10)
User Ratings

Likelihood to Recommend
  Lookback: 8.6 (2 ratings)
  Optimizely Feature Experimentation: 7.8 (29 ratings)

Likelihood to Renew
  Lookback: - (0 ratings)
  Optimizely Feature Experimentation: 4.6 (2 ratings)

Usability
  Lookback: 8.0 (1 rating)
  Optimizely Feature Experimentation: 7.4 (8 ratings)

Implementation Rating
  Lookback: - (0 ratings)
  Optimizely Feature Experimentation: 10.0 (1 rating)

Product Scalability
  Lookback: - (0 ratings)
  Optimizely Feature Experimentation: 5.0 (1 rating)
User Testimonials
Likelihood to Recommend
Lookback
Best suited for conducting remote interviews that are moderated and facilitated by the interviewer/researcher.
It's not the best if you want to run unmoderated research; there are much more sophisticated tools out there. Unfortunately, for a design research team that does both kinds of research, it can be hard to get budget for two tools, so the unmoderated feature can seem undercooked and doesn't really do the job.
Otherwise, it's a great tool.
Optimizely
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it is less suitable.
Well-suited scenarios:
  • Multi-channel product launches
  • Complex A/B testing and feature flag management
  • Gradual rollout and risk mitigation (see the sketch below)
Less suited scenarios:
  • Simple A/B tests (their Web Experimentation product is probably better for that)
  • Non-technical team usage
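To make the gradual-rollout pattern the reviewer describes concrete, here is a minimal sketch using Optimizely's JavaScript SDK. The flag key, user ID, attributes, and render functions are hypothetical; the rollout percentage itself lives in the Optimizely UI, not in code.

```typescript
import { createInstance } from '@optimizely/optimizely-sdk';

// Hypothetical stand-ins for the two code paths under rollout.
const renderNewCheckout = () => console.log('new checkout');
const renderLegacyCheckout = () => console.log('legacy checkout');

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });
if (!optimizely) throw new Error('SDK failed to initialize');

optimizely.onReady().then(() => {
  // Attributes feed whatever audience-targeting rules exist in the Optimizely UI.
  const user = optimizely.createUserContext('user-123', { country: 'us' });
  const decision = user?.decide('new_checkout');

  if (decision?.enabled) {
    // As the rollout widens from 1% to 100% of traffic, this code path
    // never changes; only the traffic allocation in the UI does.
    renderNewCheckout();
  } else {
    renderLegacyCheckout();
  }
});
```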
Pros
Lookback
  • Organization of user interviews
  • Sharing of interviews across the team
  • Creating highlights of insights
Optimizely
  • Its ability to run A/B tests and multivariate experiments simultaneously allows us to identify the best-performing options quickly.
  • Optimizely blends into our analytics tools, giving us immediate feedback on how our experiments are performing. With this pairing, we can arrive at informed decisions quickly.
  • Additionally, feature toggles enable us to introduce new features or modifications to specific user groups, guaranteeing a smooth and controlled user experience and helping us avoid interruptions (see the sketch after this list).
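A sketch of what that pairing can look like in code, under assumptions (the flag key, event key, and 'plan' attribute are hypothetical): user attributes decide which group sees a toggle, and a tracked conversion event feeds the experiment's results.

```typescript
import { createInstance } from '@optimizely/optimizely-sdk';

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });
if (!optimizely) throw new Error('SDK failed to initialize');

optimizely.onReady().then(() => {
  // The 'plan' attribute lets a delivery rule target only beta users.
  const user = optimizely.createUserContext('user-123', { plan: 'beta' });
  const decision = user?.decide('new_checkout');

  if (decision?.enabled) {
    // ...show the toggled-on feature, then record a conversion so the
    // experiment's metrics pick it up in the results dashboard.
    user?.trackEvent('checkout_completed');
  }
});
```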
Cons
Lookback
  • Unmoderated interviews are still undercooked as a feature.
  • Requiring participants to download an app to start an interview is a large friction point for us.
Optimizely
  • Splitting feature flags from actual experiments is slightly clunky; flags and experiments could live on the same page, or better still, a flag could be created on the spot when starting an experiment instead of always having to start with a flag.
  • AI-recommended metrics to track, based on the experiment's description, would be welcome.
Likelihood to Renew
Lookback
No answers on this topic
Optimizely
Competitive landscape
Usability
Lookback
Once you understand the interface, it works great, but there is a learning curve.
Optimizely
I think setting up experiments is very straightforward, and it's very easy to get started on the code side. If someone were new to Optimizely Feature Experimentation, there could be some confusion between a flag and an experiment; I still sometimes get confused about whether I turned the right thing on or off.
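One runtime aid for that flag-versus-experiment confusion (a sketch, with a hypothetical flag key): the decide API reports which rule produced a decision, and the INCLUDE_REASONS option attaches human-readable notes, so you can verify whether a flag evaluation came from an experiment rule or a plain delivery (rollout) rule.

```typescript
import {
  createInstance,
  OptimizelyDecideOption,
} from '@optimizely/optimizely-sdk';

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });
if (!optimizely) throw new Error('SDK failed to initialize');

optimizely.onReady().then(() => {
  const user = optimizely.createUserContext('user-123');

  // INCLUDE_REASONS attaches notes explaining how the decision was made.
  const decision = user?.decide('new_checkout', [
    OptimizelyDecideOption.INCLUDE_REASONS,
  ]);

  // enabled: is the flag on for this user?
  // ruleKey: which rule fired, an experiment rule or a delivery rule.
  console.log(decision?.enabled, decision?.ruleKey);
  console.log(decision?.reasons); // human-readable decision notes
});
```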
Implementation Rating
Lookback
No answers on this topic
Optimizely
It's straightforward. The docs are well written, and I believe support is available, but we haven't used it.
Alternatives Considered
Lookback
Zoom was way more expensive, and it is also designed for things beyond just running qualitative interviews. It also requires a different kind of approval, with different approval processes to go through, when trying to get it simply for qualitative research purposes.
Lookback records, transcribes, helps with observation, and provides a sentiment check as well, all within its price.
Optimizely
We haven't evaluated other products. We have an in-house product that is missing a lot of features and is far behind in making the test process easier. Instead of evolving our in-house product with limited resources, we decided to go with Optimizely Feature Experimentation when we saw that other big organisations were partnering with Optimizely.
Scalability
Lookback
No answers on this topic
Optimizely
We had trouble with performance for SSR and the React SDK.
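For context on the React SDK mentioned here, a minimal sketch of its hook-based usage (the flag key and user ID are hypothetical; the SDK also documents a server-side mode for SSR, though performance will vary by version and setup):

```tsx
import React from 'react';
import {
  createInstance,
  OptimizelyProvider,
  useDecision,
} from '@optimizely/react-sdk';

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

function CheckoutBanner() {
  // Re-renders once the client is ready; 'new_checkout' is a hypothetical flag.
  const [decision, clientReady] = useDecision('new_checkout');
  if (!clientReady || !decision.enabled) return null;
  return <div>New checkout enabled (variation: {decision.variationKey})</div>;
}

export function App() {
  return (
    <OptimizelyProvider optimizely={optimizely} user={{ id: 'user-123' }}>
      <CheckoutBanner />
    </OptimizelyProvider>
  );
}
```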
Return on Investment
Lookback
  • It allows us to understand our customers' problems in a very team-compatible way.
Optimizely
  • Experimentation is key to figuring out the impact of changes made on-site.
  • Experimentation is very helpful with pricing tests and other backend tests.
  • Before running an experiment, many factors need to be evaluated, such as conflicting experiments, audience, user profile service, etc. This requires a considerable amount of time.
Screenshots

Optimizely Feature Experimentation Screenshots

  • AI Variable suggestions: AI helps to develop higher-quality experiments. Optimizely's Opal suggests content variations in experiments, helping to increase test velocity and improve experiment quality.
  • Integrations: display of the available integrations in-app.
  • Reporting: used to share insights, quantify experimentation program performance using KPIs like velocity and conclusive rate across experimentation projects, and drill down into the charts and figures to see an aggregate list of experiments. Results can be exported to a CSV or Excel file, and KPIs can be segmented using project filters, experiment type filters, and date ranges.
  • Collaboration: centralizes tracking of tasks in the design, build, and launch of an experiment to ensure experiments are launched on time. Includes calendar, timeline, and board views, in customizable views that can be saved and shared with other stakeholders.
  • Scheduling: users can schedule a Flag or Rule to toggle on/off and set traffic allocation percentages, for faster experimentation velocity and smoother progressive rollouts.
  • Metrics filtering: dynamic event properties to filter through events. Dynamic events provide better insights for experimenters, who can explore metrics in depth for more impactful decisions.