Lookback is a UX research platform for moderated and unmoderated research on mobile and desktop, from the company of the same name in Palo Alto.
Optimizely Feature Experimentation
Score 8.3 out of 10
Optimizely Feature Experimentation unites feature flagging, A/B testing, and built-in collaboration—so marketers can release, experiment, and optimize with confidence in one platform.
UserTesting
Score 8.4 out of 10
UserTesting helps UX researchers, designers, product teams, and marketers gather actionable insights through research, testing, and feedback. With a network of real people ready to share their perspectives, UserTesting enables organizations to make customer-first decisions at scale.
Optimizely FX is the only tool I've used that specifically allows for testing in the back end. Most front-end tools are great for simple tests, but there comes a time when you need to go a level deeper, and that isn't possible with front-end tools.
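To make the back-end point concrete, here is a minimal sketch of a server-side decision using Optimizely's Node/TypeScript SDK (@optimizely/optimizely-sdk). The flag key, variation name, and ranking functions are hypothetical placeholders, not a verified setup:

```typescript
// Minimal sketch: bucketing a back-end code path with Optimizely Feature
// Experimentation. The flag key 'search_ranking_algo', the variation
// 'ml_ranker', and both ranking functions are hypothetical.
import { createInstance } from '@optimizely/optimizely-sdk';

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

function rankWithModel(query: string): string[] {
  return [query]; // stand-in for the deeper, model-based variant
}

function rankByRecency(query: string): string[] {
  return [query]; // stand-in for the control path
}

async function rankResults(userId: string, query: string): Promise<string[]> {
  await optimizely!.onReady(); // wait for the datafile before deciding

  // The user is bucketed in server code; no front-end snippet is involved.
  const user = optimizely!.createUserContext(userId)!;
  const decision = user.decide('search_ranking_algo');

  return decision.variationKey === 'ml_ranker'
    ? rankWithModel(query)
    : rankByRecency(query);
}
```

Because the decide() call happens in server code, the experiment can branch a level deeper than any front-end tool reaches, which is the reviewer's point.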
I wasn't the person who selected UserTesting, but I did use it at a previous company, so I was aware of its capabilities. I really enjoy how UserTesting applies its research methods, and it offers stronger support. UserZoom was easy to handle, but I don't remember how it was …
We evaluated a range of research tools within the UX team, including UserZoom, Lookback, Maze, Optimal Workshop, and UserTesting, and, in the end, concluded that UserTesting had the most comprehensive offering on the market. The only issue we found was that UserTesting appeared …
Best suited to conducting remote interviews that are moderated and facilitated by the interviewer/researcher.
Not the best if you want to do it unmoderated; there are much more sophisticated tools out there. Unfortunately, for a design research team that does both these kinds of research, it can be hard to get budget for two tools, so the unmoderated feature can seem super undercooked and doesn't really do the job.
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it may be less suitable.
Well-suited scenarios:
- Multi-channel product launches
- Complex A/B testing and feature flag management
- Gradual rollouts and risk mitigation
Less suited scenarios:
- Simple A/B tests (their Web Experimentation product is probably better for that)
- Non-technical team usage
UserTesting has been great for moderated customer interviews/usability testing as well as for unmoderated testing of messaging, imagery, prototypes, and live experiences. I would say that the scope of what you want needs to be limited, as the participants are only paid so much and tests are not supposed to exceed a certain amount of time. For customer interviews, I think it can be difficult to onboard customers to UserTesting if they have never used it before. If I set up interviews, I don't even have them use the UserTesting scheduling tool; I actually set up all the interviews with the customers myself through the tool (being mindful of time zones!). When we run the meeting, they really don't even know UserTesting is involved. It might be nice for UserTesting to allow uploading or connecting a Zoom interview and let it do the transcription/analysis from there.
It is easy to use: any of our product owners, marketers, or developers can set up experiments and roll them out with some developer support. So the key thing is that the front-end UI is easy to use. Maybe this will come later, but we're also interested in the new features, such as Opal and the analytics/database-centric engine.
It would be nice to be able to switch variants quickly and effectively, say from an MVT to a 50:50 split if one of the variants is not performing well, so you can still use the standardised report.
The interface can feel very bare-bones, without many graphs or visuals, which other providers include to make things a bit more engaging.
It doesn't easily show what each live variant looks like, so it can be hard to remember what is actually being shown in each test.
Sometimes there are restrictions around types of research that can be used for moderated user-testing with our own users.
For tests on relatively small areas of a website or app, the AI analysis seems rather overblown, like it's trying too hard to come up with something insightful when the test is actually about something quite small (e.g. structure of a mobile app menu).
It's difficult to invite our own users to unmoderated user-testing because they wouldn't know how the UserTesting interface works - this is particularly an issue for mobile research.
I'm very happy with my experience of the product and the level of service and learning resources they provide. If the service becomes more expensive than it currently is then we might not be able to justify additional cost - but this is theoretical. I would recommend UserTesting and would ideally renew our contract.
The UI is easy to navigate. Once you know how to use it, it is very easy to run experiments. And when the experiment is set up, the SDK code variables are generated and available for developers to use immediately, so they can quickly build the experiment code.
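As an illustration of that workflow (a sketch only, assuming the same Node/TypeScript SDK as above; the flag key and variable names are hypothetical), the keys generated in the Optimizely UI map directly onto decide() calls in the experiment code:

```typescript
import { createInstance } from '@optimizely/optimizely-sdk';

const client = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

// 'checkout_banner', 'headline', and 'show_discount' are hypothetical keys,
// standing in for the variables generated when an experiment is set up.
async function getBannerConfig(userId: string) {
  await client!.onReady();
  const user = client!.createUserContext(userId)!;

  const decision = user.decide('checkout_banner');
  if (!decision.enabled) return null; // flag off: fall back to the default UI

  return {
    headline: decision.variables['headline'] as string,
    showDiscount: decision.variables['show_discount'] as boolean,
  };
}
```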
It's very good. I have used other tools in the past, and this is by far the most intuitive and user-friendly. A testament to this is how easily other non-researchers, onboarded to the tool with our additional seat, have picked it up.
I have contacted UserTesting's customer service online, by email, or by phone a few times, and each time, I have encountered the same professionalism and expertise. Even in person during a work event, they were there, and it was the same experience.
From a technical perspective, the implementation was extremely smooth. Most of the change management / implementation hurdles were clearing use of the tool through our various security, legal, and information privacy teams. Once these concerns were addressed (UserTesting.com was very helpful in providing all the needed documentation), the implementation process was very simple and we were able to get going right away.
Zoom was way more expensive, and it is designed for other things apart from just running qualitative interviews. It also requires a different kind of approval and different approval processes to go through when trying to get it simply for qualitative research purposes.
Lookback records, transcribes, helps with observation, and provides a sentiment check as well, all at the price it charges.
When Google Optimize was discontinued, we searched for a tool with a reliable GA4 implementation that was easy to use for both the IT team and the product team. Optimizely Feature Experimentation seems to have a good balance between pricing and capabilities. If you are searching for an experimentation and personalization tool all in one, then this comparison may change and Optimizely may turn out to be expensive. The same applies if you want a server-side solution; for us, that will be a challenge in the following years.
The quality of the participants: they usually give good feedback and act like "professional" users, which is good when we want a few insights in a short amount of time. Also, the interface is good. I miss having more features, like the good transcription tool we have in Condens.
We have a huge, noteworthy ROI case study of how we did a SaaS onboarding revamp early this year. Our A/B test on a guided setup flow improved activation rates by 20 percent, which translated to over $1.2m in retained ARR.