LaunchDarkly vs. Optimizely Feature Experimentation

Overview
Product | Rating | Most Used By | Product Summary | Starting Price
LaunchDarkly
Score 7.9 out of 10
N/A
LaunchDarkly provides a feature management platform that enables DevOps and product teams to use feature flags at scale. This allows for greater collaboration among team members and increased usability testing before full-scale feature deployment.
$12 per month
Optimizely Feature Experimentation
Score 8.2 out of 10
N/A
Optimizely Feature Experimentation unites feature flagging, A/B testing, and built-in collaboration, so marketers can release, experiment, and optimize with confidence in one platform.
N/A
Pricing
LaunchDarkly | Optimizely Feature Experimentation
Editions & Modules
LaunchDarkly:
  Foundation: $12 per Service Connection per month, or $10 per 1k client-side MAU per month
  Enterprise: Custom
  Guardian: Custom
Optimizely Feature Experimentation:
  No answers on this topic
Offerings
Pricing Offerings
LaunchDarkly | Optimizely Feature Experimentation
Free Trial: Yes | No
Free/Freemium Version: No | Yes
Premium Consulting/Integration Services: Yes | Yes
Entry-level Setup Fee: Optional | Required
Additional Details: Discount available on the Foundation plan for annual pricing.
More Pricing Information
Community Pulse
LaunchDarkly | Optimizely Feature Experimentation
Considered Both Products
LaunchDarkly

No answer on this topic

Optimizely Feature Experimentation
Chose Optimizely Feature Experimentation
Optimizely Feature Experimentation is less of a point solution than LaunchDarkly, so LD has a few extra features, but Optimizely offers a much broader solution for experimentation, personalization, etc.
Chose Optimizely Feature Experimentation
We selected Optimizely as it was easy to use/understand, had clearly defined SLAs for keeping the platform up and was regarded as resilient within the industry. We needed something at our point in our experimentation journey that could be used for Product testing at scale and …
Best Alternatives
LaunchDarkly | Optimizely Feature Experimentation
Small Businesses: GitLab (Score 8.6 out of 10) | GitLab (Score 8.6 out of 10)
Medium-sized Companies: GitLab (Score 8.6 out of 10) | GitLab (Score 8.6 out of 10)
Enterprises: GitLab (Score 8.6 out of 10) | GitLab (Score 8.6 out of 10)
All Alternatives
User Ratings
LaunchDarkly | Optimizely Feature Experimentation
Likelihood to Recommend: 10.0 (28 ratings) | 8.3 (48 ratings)
Likelihood to Renew: 7.0 (1 rating) | 4.5 (2 ratings)
Usability: 7.4 (26 ratings) | 7.7 (27 ratings)
Availability: 10.0 (1 rating) | - (0 ratings)
Performance: 8.1 (26 ratings) | - (0 ratings)
Support Rating: 10.0 (1 rating) | 3.6 (1 rating)
Implementation Rating: 9.0 (1 rating) | 10.0 (1 rating)
Configurability: 8.0 (1 rating) | - (0 ratings)
Ease of integration: 8.0 (1 rating) | - (0 ratings)
Product Scalability: 10.0 (1 rating) | 5.0 (1 rating)
Vendor post-sale: 8.0 (1 rating) | - (0 ratings)
Vendor pre-sale: 10.0 (1 rating) | - (0 ratings)
User Testimonials
LaunchDarkly | Optimizely Feature Experimentation
Likelihood to Recommend
LaunchDarkly
If a new feature should be added but you are unsure how it will actually work or how users will accept the enhancement or change, this tool allows you to test and measure initial results. This saves a lot of time and energy by revealing results before a full deployment that might see low user adoption or acceptance.
Read full review
Optimizely
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it may be less suitable.
Well-suited scenarios:
  • Multi-channel product launches
  • Complex A/B testing and feature flag management
  • Gradual rollout and risk mitigation
Less suited scenarios:
  • Simple A/B tests (their Web Experimentation product is probably better for that)
  • Non-technical team usage
Read full review
Pros
LaunchDarkly
  • A/B or Multi Variant Testing as a methodology to gather insight from customer usage. Experimentation as a feature within LaunchDarkly offers information around the success of one variant over another and whether the experiment has reached statistical significance.
  • Being able to decouple deployment of code from the release of a feature is hugely valuable.
  • Development teams are empowered to manage features within their production applications for reliability or testing purposes.
Read full review
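The decoupling the reviewer describes can be sketched with a minimal in-process flag store. This is a hypothetical illustration, not LaunchDarkly's SDK (real clients evaluate flags per user context via a `variation`-style lookup), but it shows the core idea: the new code path ships dark, and release is a runtime toggle rather than a redeploy.

```python
# Minimal sketch of decoupling deployment from release with a feature flag.
# Hypothetical in-process flag store; a real system would evaluate flags
# per user context through a vendor SDK instead.

FLAGS = {"new-checkout": False}  # code is deployed, feature stays dark

def checkout(cart_total: float) -> str:
    # Both code paths are deployed; the flag decides at runtime.
    if FLAGS["new-checkout"]:
        return f"new flow: total={cart_total:.2f}"
    return f"old flow: total={cart_total:.2f}"

# Release later, without redeploying, by flipping the flag:
assert checkout(10.0).startswith("old")
FLAGS["new-checkout"] = True
assert checkout(10.0).startswith("new")
```

In a real rollout the flag would typically gate a percentage of users first, then ramp to 100% once metrics look healthy.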
Optimizely
  • It is easy to use: any of our product owners, marketers, or developers can set up experiments and roll them out with some developer support. The key thing is that the front-end UI is easy to use, and the new features such as Opal and the analytics/data-centric engine are something we're interested in as well.
Read full review
Cons
LaunchDarkly
  • The limited number of users on cheaper plans restricts our ability to audit who is making changes.
  • Some of our engineers are confused between flags and segments and have set up items incorrectly.
  • Support for React with TypeScript could be better documented.
Read full review
Optimizely
  • It would be nice to be able to switch variants quickly and effectively, say from an MVT to a 50:50 split if one variant is underperforming, while still using the standardised report.
  • The interface can feel very bare-bones, with few graphs or visuals; other providers do more to make it engaging.
  • It doesn't easily show what each live variant looks like, so it can be hard to remember what is actually being shown in each test.
Read full review
Likelihood to Renew
LaunchDarkly
It fits our business case.
Read full review
Optimizely
Competitive landscape
Read full review
Usability
LaunchDarkly
It's very easy to create new feature flags and set them properly. It is more difficult to get LaunchDarkly integrated within a distributed system so that flags can be used. Especially on stateless servers where gating features by user is not easy. Overall though, it is very easy to get started and I like how simple it is to use.
Read full review
Optimizely
Easy to navigate the UI. Once you know how to use it, it is very easy to run experiments. And when an experiment is set up, the SDK code variables are generated and immediately available to developers, so they can quickly build the experiment code.
Read full review
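Variant assignment of the kind this workflow relies on is commonly implemented as deterministic hash bucketing, so a user sees the same variant on every visit. The sketch below illustrates the general technique; the function name and weight scheme are hypothetical, not Optimizely's API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: list[str], weights: list[int]) -> str:
    """Deterministically bucket a user into a weighted variant.

    Hashing experiment + user_id yields a stable bucket in [0, 10000),
    so the same user always gets the same variant for an experiment.
    Weights are traffic allocations out of 10000.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000
    cutoff = 0
    for variant, weight in zip(variants, weights):
        cutoff += weight
        if bucket < cutoff:
            return variant
    return variants[-1]

# 50:50 split between control and treatment; repeat calls are stable.
v = assign_variant("user-42", "guided-setup", ["control", "treatment"], [5000, 5000])
assert v == assign_variant("user-42", "guided-setup", ["control", "treatment"], [5000, 5000])
```

Because assignment is a pure function of the IDs, no per-user state needs to be stored to keep experiences consistent.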
Reliability and Availability
LaunchDarkly
No issue with availability at all
Read full review
Optimizely
No answers on this topic
Performance
LaunchDarkly
From what I have seen, LaunchDarkly integrates well with your code and also services you might have in your tech ecosystem. We use Jenkins for automation and we were able to use it to build pipelines to automate the control of LaunchDarkly toggles in our code.
Read full review
Optimizely
No answers on this topic
Support Rating
LaunchDarkly
The overall support is very responsive
Read full review
Optimizely
Support was there, but it was pretty slow most of the time. Only after escalation was support really given to our teams.
Read full review
Implementation Rating
LaunchDarkly
Yes I do.
Read full review
Optimizely
It’s straightforward. The docs are well written, and I believe support is available, but we haven’t used it.
Read full review
Alternatives Considered
LaunchDarkly
LaunchDarkly stood out to us because it put control of the application within the hands of our engineers. We didn't want to allow business users to manipulate the production site via a third-party tool. Instead, our focus was on delivering faster as an engineering team.
Read full review
Optimizely
When Google Optimize was discontinued, we searched for a tool with a good GA4 implementation that was easy to use for both the IT team and the product team. Optimizely Feature Experimentation seems to strike a good balance between pricing and capabilities. If you are searching for an experimentation tool and personalization all in one, then the comparison may change and Optimizely turns expensive; the same goes if you want a server-side solution. For us, it will be a challenge in the following years.
Read full review
Scalability
LaunchDarkly
The platform hasn't gone down since we implemented it.
Read full review
Optimizely
We had trouble with performance for SSR and the React SDK.
Read full review
Return on Investment
LaunchDarkly
  • Improved developer experience with some teams moving to Trunk-based Development.
  • Increased deployment frequency due to smaller code releases.
  • Validation of the technical and business value of work is achieved more quickly through smaller pieces of work and through experimenting with a small group of users before a feature gets to 100% of customers.
Read full review
Optimizely
  • We have a huge, noteworthy ROI case study of how we did a SaaS onboarding revamp early this year. Our A/B test on a guided setup flow improved activation rates by 20 percent, which translated to over $1.2m in retained ARR.
Read full review
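An activation-rate uplift like the one above is typically validated with a two-proportion z-test before being called a win. The sketch below shows that check; the conversion counts are hypothetical illustrations of a 20% relative uplift, not the reviewer's actual data.

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 25% -> 30% activation (a 20% relative uplift)
z, p = two_proportion_z(conv_a=500, n_a=2000, conv_b=600, n_b=2000)
print(f"z={z:.2f}, p={p:.4f}")  # well below the usual 0.05 threshold
```

At these sample sizes the uplift is clearly significant; with much smaller samples the same relative uplift could easily be noise, which is why sample size matters as much as the headline percentage.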
ScreenShots

LaunchDarkly Screenshots

Screenshot of regression detection and automated incident response at the feature level. This connects critical metrics to the release process so that every change is monitored, even the smallest releases, where issues would previously have been obscured by noise in the wider system metrics.
Screenshot of tracking the progression of a feature flag across a series of phases, where each phase consists of one or more environments.
Screenshot of targeting groups of contexts individually or by attribute. Contexts are people, services, machines, or other resources that encounter feature flags in a product.
Screenshot of designing experiments that measure business-critical user flows, provide results specific to those product funnels, and measure multi-step user journeys. This is used to determine whether conversions are succeeding, with all metrics visible in one place.

Optimizely Feature Experimentation Screenshots

Screenshot of Feature Flag Setup. Here users can run flexible A/B and multi-armed bandit tests, as well as:

- Set up a single feature flag to test multiple variations and experiment types
- Enable targeted deliveries and rollouts for more precise experimentation
- Roll back changes quickly when needed to ensure experiment accuracy and reduce risks
- Increase testing flexibility with control over experiment types and delivery methods

Screenshot of Audience Setup. This is used to target specific user segments for personalized experiments, and:

- Create and customize audiences based on user attributes
- Refine audience segments to ensure the right users are included in tests
- Enhance experiment relevance by setting specific conditions for user groups

Screenshot of Experiment Results, supporting the analysis and optimization of experimentation outcomes. Viewers can also:

- Examine detailed experiment results, including key metrics like conversion rates and statistical significance
- Compare variations side-by-side to identify winning treatments
- Use advanced filters to segment and drill down into specific audience or test data

Screenshot of a Program Overview. This offers insights into an experimentation program’s performance, including:

- A comprehensive view of the entire experimentation program’s status and progress
- Monitoring for key performance metrics like test velocity, success rates, and overall impact
- Evaluation of the impact of experiments with easy-to-read visualizations and reporting tools
- Performance tracking of experiments over time to guide decision-making and optimize strategies

Screenshot of AI Variable Suggestions. These enhance experimentation with AI-driven insights, and can also help with:

- Generating multiple content variations with AI to speed up experiment design
- Improving test quality with content suggestions
- Increasing experimentation velocity and achieving better outcomes with AI-powered optimization

Screenshot of Schedule Changes, to streamline experimentation. Users can also:

- Set specific times to toggle flags or rules on/off, ensuring precise control
- Schedule traffic allocation percentages for smooth experiment rollouts
- Increase test velocity and confidence by automating progressive changes