Optimizely Feature Experimentation unites feature flagging, A/B testing, and built-in collaboration—so marketers can release, experiment, and optimize with confidence in one platform.
I strongly believe this tool helps most when a firm has a substantial user count (depending on the business model), as most of these tools are data-driven: more data means more valuable insights. It is the best fit for someone looking for deeper insights into individual pages. It is not recommended for websites with very few visits; I'd suggest improving your visit count first.
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it may be less suitable. Well-suited scenarios: multi-channel product launches, complex A/B testing and feature flag management, and gradual rollout and risk mitigation. Less suited scenarios: simple A/B tests (their Web Experimentation product is probably better for that) and non-technical team usage.
Provides heatmaps that show you which elements on your site are and aren't performing well.
Provides scrollmaps so you can see how far down a page users are scrolling and which content never gets seen.
Screenshots show you how your website looks across a variety of different devices.
Provides a type of clickmap called confetti that enables you to visualise clicks by segment: device, new/returning visitors, campaigns, and other metrics.
It is easy to use: any of our product owners, marketers, or developers can set up experiments and roll them out with some developer support. So the key thing there is that the front-end UI is easy to use. Maybe this will come later, but new features such as Opal and the analytics/database-centric engine are something we're interested in as well.
The largest thing we've struggled with is the Optimizely integration. I've contacted customer service a few times to get it properly set up. Customer service is always friendly and helpful; they provide clear steps to get it configured. Unfortunately, despite the clear instructions, the steps are tedious, and if they are not completed in the correct order, the integration with Optimizely does not work. My success rate with the integration is less than 55%.
It would be nice to be able to switch variants between, say, an MVT and a 50:50 split quickly and effectively if one of the variants is not performing well, so you can still use the standardised report.
The interface can feel very bare-bones, with few graphs or visuals; other providers include more of these to make the experience a bit more engaging.
It doesn't easily show what each live variant looks like, so it can be hard to remember what is actually being shown in each test.
It's a great tool considering how inexpensive it is. If used correctly, and if you have a plan for tracking your websites, this tool can make a world of difference. If you are not going to sit down and take the time to make a plan for how to use it, I would say it is not worth your time. Yes, you can look at items on your website that need to be changed, but without a consistent plan, other important items that need changing can get lost in the mix. Make sure you have enough time and energy to invest in this and it will be well worth it.
Crazy Egg is extremely easy to set up and use, and very well done from a user experience standpoint. It is really helpful that I can give stakeholders access to the interface and get them interacting with it with minimal training. The A/B testing is the easiest I have ever used, with minimal performance impact to the website.
The UI is easy to navigate. Once you know how to use it, it is very easy to run experiments. And when an experiment is set up, the SDK code variables are generated and available for developers to use immediately, so they can quickly build the experiment code.
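The workflow this reviewer describes (the platform generates variables per experiment, and developers wire them into code) ultimately rests on deterministic user bucketing: hashing a user ID so the same user always sees the same variant. Here is a minimal sketch of that idea in plain Python; this is not Optimizely's actual SDK, and the flag key, weights, and function name are hypothetical.

```python
import hashlib

def bucket(user_id: str, flag_key: str, traffic: dict) -> str:
    """Deterministically assign a user to a variation.

    traffic maps variation keys to percentage weights summing to 100,
    e.g. {"control": 50, "treatment": 50}.
    """
    # Hash user + flag so the same user always lands in the same bucket,
    # but different flags bucket independently.
    digest = hashlib.md5(f"{user_id}:{flag_key}".encode()).hexdigest()
    point = int(digest, 16) % 100  # a point in [0, 100)
    cumulative = 0
    for variation, weight in traffic.items():
        cumulative += weight
        if point < cumulative:
            return variation
    return "control"  # fallback if weights don't cover the full range

# Hypothetical usage: a 50:50 split on a "guided_setup" flag.
variation = bucket("user-42", "guided_setup", {"control": 50, "treatment": 50})
```

Because the assignment is a pure function of user ID and flag key, no server-side state is needed to keep experiences consistent across sessions, which is also what makes gradual rollouts (e.g. shifting weights from 10:90 to 50:50) safe to change mid-flight.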
It's slow to post data, and slow for a snapshot to finally become active (i.e., not pending). Not intolerable, but it would be nice to see data within a couple of hours; we often have to wait until the next day.
I think support is an area where Crazy Egg is lacking. I would love to have a quarterly check-in with a Crazy Egg rep to understand what kinds of changes have been made to the platform and what is on the horizon. I also think quick consulting sessions with a rep could be extremely beneficial, as I'm sure there are ways to use the tool that we haven't even thought of yet that would be extremely insightful for our team.
I will say that I didn't evaluate or select Crazy Egg; it's a legacy tool that was at the company before me. Honestly, we're not even sure of all the features and functionality we could use. As a UXR, I think there are other tools that would help me more in gaining visibility into what our users are doing on our website, and I've evaluated alternatives that are more aligned with UXR. However, if we properly paired Crazy Egg with experimentation, it might be a more valuable tool for us.
When Google Optimize was shut down, we searched for a tool with a solid GA4 integration that would be easy to use for both the IT team and the product team. Optimizely Feature Experimentation seems to strike a good balance between pricing and capabilities. If you are searching for experimentation and personalization all in one, then the comparison may change and Optimizely turns out to be expensive; the same applies if you want a server-side solution. For us, that will be a challenge in the coming years.
Its reliability (not scalability, as the question asks for, sorry) is pretty good, but through our testing we know that some clicks do not get recorded. It doesn't bother us much because we look at the aggregate of thousands of visits, but we do know it misses things. As for scalability, it's about right. You really don't want zillions of clicks per snapshot: the screen just turns to 100% dots and you lose the ability to differentiate areas of the screen. We find that 25,000 clicks for a page gives us a really good view.
We have a huge, noteworthy ROI case study of how we did a SaaS onboarding revamp early this year. Our A/B test on a guided setup flow improved activation rates by 20 percent, which translated to over $1.2m in retained ARR.