LaunchDarkly provides a feature management platform that enables DevOps and Product teams to use feature flags at scale. This allows for greater collaboration among team members, and increased usability testing before full-scale feature deployment.
$12 per month
Optimizely Feature Experimentation
Score 8.2 out of 10
Optimizely Feature Experimentation unites feature flagging, A/B testing, and built-in collaboration—so marketers can release, experiment, and optimize with confidence in one platform.
Pricing

Editions & Modules

LaunchDarkly
- Foundation: $12 per Service Connection per month, or $10 per 1k client-side MAU per month
- Enterprise: Custom
- Guardian: Custom

Optimizely Feature Experimentation
- No answers on this topic
Offerings

Pricing Offerings
- Free Trial: LaunchDarkly Yes; Optimizely Feature Experimentation No
- Free/Freemium Version: LaunchDarkly No; Optimizely Feature Experimentation Yes
- Premium Consulting/Integration Services: LaunchDarkly Yes; Optimizely Feature Experimentation Yes
- Entry-level Setup Fee: LaunchDarkly Optional; Optimizely Feature Experimentation Required
Additional Details
Discount available on the Foundation plan for annual pricing.
Optimizely Feature Experimentation is less of a point solution than LaunchDarkly, so LD has a few extra features, but Optimizely offers a much broader solution for experimentation, personalization, and so on.
We selected Optimizely as it was easy to use and understand, had clearly defined SLAs for keeping the platform up, and was regarded as resilient within the industry. We needed something, at that point in our experimentation journey, that could be used for product testing at scale and …
If a new feature should be added but you are unsure how it will actually work or how users will accept the new enhancement or change, this tool allows you to test and measure initial results. This saves so much time and energy, since you know the results before deploying something that might otherwise have low user adoption or acceptance.
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it may be less suitable.
Well-suited scenarios:
- Multi-channel product launches
- Complex A/B testing and feature flag management
- Gradual rollout and risk mitigation
Less suited scenarios:
- Simple A/B tests (their Web Experimentation product is probably better for that)
- Non-technical team usage
A/B or multivariate testing as a methodology to gather insight from customer usage: Experimentation as a feature within LaunchDarkly provides information on the success of one variant over another and whether the experiment has reached statistical significance.
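LaunchDarkly's Experimentation feature reports that significance for you. Purely as an illustration of what "statistical significance" means for a two-variant conversion test, and not a description of LaunchDarkly's actual analysis engine (which may use different statistics), here is a minimal two-proportion z-test with made-up traffic and conversion counts:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference in conversion rate between
    variant A (control) and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided, normal CDF
    return p_a, p_b, z, p_value

# Hypothetical results: 480/10,000 conversions for control, 560/10,000 for the variant.
p_a, p_b, z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"control={p_a:.2%} variant={p_b:.2%} z={z:.2f} p={p:.3f}")
# A p-value below 0.05 is what most tools would report as statistically significant.
```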
Being able to decouple deployment of code from the release of a feature is hugely valuable.
Development teams are empowered to manage features within their production applications for reliability or testing purposes.
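That decoupling is easy to see in code: the new path ships dark behind a flag, and releasing (or rolling back) is just a change to the flag's state or rollout percentage, with no redeploy. The sketch below is a vendor-neutral illustration rather than any particular SDK; the hash-based bucketing mirrors how percentage rollouts are commonly implemented, and the flag name and percentages are invented:

```python
import hashlib

# Hypothetical flag state; in practice this comes from a feature-management service.
FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 10},
}

def flag_is_on(flag_key: str, user_key: str) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing flag + user key gives the same user the same decision on every
    request, without storing any assignment server-side.
    """
    flag = FLAGS.get(flag_key)
    if not flag or not flag["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag_key}:{user_key}".encode()).hexdigest()
    bucket = int(digest, 16) % 100              # bucket in 0..99
    return bucket < flag["rollout_percent"]

def checkout(user_key: str) -> str:
    if flag_is_on("new_checkout_flow", user_key):
        return "new checkout flow"   # code is already deployed, released gradually
    return "old checkout flow"       # instant rollback: set enabled to False
```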
It is easy to use: any of our product owners, marketers, or developers can set up experiments and roll them out with some developer support. So the key thing there is that the front-end UI is easy to use. Maybe this will come later, but the new features such as Opal and the analytics or database-centric engine are something we're interested in as well.
Would be nice to be able to switch variants quickly and effectively, say from an MVT to a 50:50 split if one of the variants is not performing very well, so you can still use the standardised report.
The interface can feel very bare-bones, with not very many graphs or visuals, which other providers have to make things a bit more engaging.
Doesn't easily show what each live variant looks like, so it can be hard to remember what is actually being shown in each test.
It's very easy to create new feature flags and set them properly. It is more difficult to get LaunchDarkly integrated within a distributed system so that flags can be used, especially on stateless servers where gating features by user is not easy. Overall though, it is very easy to get started and I like how simple it is to use.
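For a sense of what that integration looks like, here is a minimal server-side evaluation sketch using LaunchDarkly's Python SDK; the SDK key, flag key, and user attributes are placeholders, and the exact API surface varies a little between SDK versions. Because the SDK evaluates flags against the context you pass on each call, per-user gating works on stateless servers as long as a stable user key is supplied with every request:

```python
import ldclient
from ldclient import Context
from ldclient.config import Config

# Placeholder SDK key; real keys come from the LaunchDarkly dashboard.
ldclient.set_config(Config("sdk-xxxxxxxx"))
client = ldclient.get()

def handle_request(user_id: str, email: str) -> str:
    # Build the evaluation context from request data; no server-side session
    # state is required, which is what makes this workable on stateless servers.
    context = (
        Context.builder(user_id)      # a stable key gives consistent targeting
        .kind("user")
        .set("email", email)
        .build()
    )
    # Evaluate a hypothetical flag, with a safe default if LaunchDarkly is unreachable.
    if client.variation("new-billing-page", context, False):
        return "render new billing page"
    return "render old billing page"
```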
Easy to navigate the UI. Once you know how to use it, it is very easy to run experiments. And when the experiment is set up, the SDK code variables are generated and available for developers to use immediately, so they can quickly build the experiment code.
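As a rough illustration of those generated variables, a decision might be read with Optimizely's Python SDK along these lines; the SDK key, flag key, variable name, and attributes are placeholders, and initialization details differ between SDK versions, so treat this as a sketch rather than copy-paste code:

```python
from optimizely import optimizely

# Placeholder SDK key; the client fetches and polls the project datafile.
optimizely_client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")

def render_search(user_id: str) -> str:
    # Create a user context, then ask for a decision on a hypothetical flag.
    user = optimizely_client.create_user_context(user_id, {"plan": "pro"})
    decision = user.decide("new_search_ranking")

    if decision.enabled:
        # Variables configured in the Optimizely UI are exposed per variation.
        algo = decision.variables.get("ranking_algorithm", "default")
        return f"search using {algo} (variation {decision.variation_key})"
    return "search using legacy ranking"
```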
From what I have seen, LaunchDarkly integrates well with your code and also with services you might have in your tech ecosystem. We use Jenkins for automation, and we were able to build pipelines that automate the control of LaunchDarkly toggles in our code.
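For the pipeline piece, flag changes can be driven through LaunchDarkly's REST API, so a Jenkins stage only needs an API token and an HTTP call. The sketch below uses the v2 flags endpoint with a semantic-patch instruction to turn a flag on; the project key, flag key, and environment are placeholders, and the payload should be checked against LaunchDarkly's current API documentation:

```python
import os

import requests

API_TOKEN = os.environ["LD_API_TOKEN"]    # stored as a Jenkins credential
PROJECT_KEY = "my-project"                # placeholder project key
FLAG_KEY = "new-billing-page"             # placeholder flag key

def turn_flag_on(environment_key: str) -> None:
    """Flip the flag on in one environment via a semantic patch."""
    resp = requests.patch(
        f"https://app.launchdarkly.com/api/v2/flags/{PROJECT_KEY}/{FLAG_KEY}",
        headers={
            "Authorization": API_TOKEN,
            # The semantic-patch content type tells the API to apply
            # instructions instead of a JSON Patch document.
            "Content-Type": "application/json; domain-model=launchdarkly.semanticpatch",
        },
        json={
            "environmentKey": environment_key,
            "instructions": [{"kind": "turnFlagOn"}],
        },
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    turn_flag_on("staging")   # e.g. invoked from a Jenkins pipeline stage
```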
LaunchDarkly stood out to us because it put control of the application within the hands of our engineers. We didn't want to allow business users to manipulate the production site via a third-party tool. Instead, our focus was on delivering faster as an engineering team.
When Google Optimize was shut down, we searched for a tool that guaranteed a good GA4 implementation and was easy to use for both the IT team and the product team. Optimizely Feature Experimentation seems to have a good balance between pricing and capabilities. If you are searching for an experimentation tool and personalization all in one, then maybe this comparison changes and Optimizely becomes too expensive. The same goes if you want a server-side solution. For us, that will be a challenge in the following years.
Improved developer experience with some teams moving to Trunk-based Development.
Increased deployment frequency due to smaller code releases.
Validation of the technical and business value of work is achieved more quickly through smaller pieces of work and through experimenting with a small group of users before a feature gets to 100% of customers.
We have a huge, noteworthy ROI case study of how we did a SaaS onboarding revamp early this year. Our A/B test on a guided setup flow improved activation rates by 20 percent, which translated to over $1.2m in retained ARR.