ConfigCat lets users launch new features and change software configuration without (re)deploying code. ConfigCat SDKs enable easy integration with any web, mobile, or backend application, while the ConfigCat website lets non-developers switch application features on or off and change software configuration. This way, feature launches and configuration are decoupled from code deployment.
Starting price: $0 per month

Optimizely Feature Experimentation
Score 7.8 out of 10
Optimizely Feature Experimentation unites feature flagging, A/B testing, and built-in collaboration, so marketers can release, experiment, and optimize with confidence in one platform.
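As the ConfigCat description above notes, the SDKs let application code read flag values at runtime. Below is a minimal sketch using the ConfigCat Python SDK (configcat-client), assuming its configcatclient.get / get_value API; the SDK key, user details, and flag name are hypothetical placeholders, not values from this comparison.

```python
import configcatclient
from configcatclient.user import User

# Initialize the client with your ConfigCat SDK key (placeholder below).
client = configcatclient.get('#YOUR-CONFIGCAT-SDK-KEY#')

# Optional user object so targeting rules and percentage rollouts can apply.
user = User('user-123', email='jane@example.com')

# Evaluate the flag; the second argument is the default returned if evaluation fails.
if client.get_value('isNewCheckoutEnabled', False, user):
    print('New checkout flow is ON for this user')
else:
    print('New checkout flow is OFF for this user')
```

Because the flag is evaluated at runtime, flipping it on the ConfigCat dashboard changes behavior without a redeploy.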
Pricing

Editions & Modules

ConfigCat
  Free: $0.00 per month
  Professional: $49.00 per month
  Unlimited: $199.00 per month
  Dedicated on-premise infra: $1499.00 per month
  Dedicated hosted infra: $1499.00 per month

Optimizely Feature Experimentation
  No answers on this topic
Offerings

Pricing Offerings                           ConfigCat        Optimizely Feature Experimentation
Free Trial                                  Yes              No
Free/Freemium Version                       Yes              Yes
Premium Consulting/Integration Services     No               Yes
Entry-level Setup Fee                       No setup fee     Required
Additional Details
Fair pricing policy: all features are available in all plans, even the Free plan. Prices are simple and predictable, with no hidden fees. We don't charge for team size or for MAUs (monthly active users); our plans differ only in their limits.
If you are looking for an experimentation/feature-flag tool that is quick to adopt and provides enough functionality for light-to-medium use cases, then this is the tool for you. Additionally, they are growing and expanding their functionality and feature set, so they can grow alongside you and your needs. The publicly accessible roadmap is also a great benefit for seeing which feature their time is being spent on next.
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it may be less suitable.
Well-suited scenarios:
- Multi-channel product launches
- Complex A/B testing and feature flag management
- Gradual rollouts and risk mitigation
Less suited scenarios:
- Simple A/B tests (their Web Experimentation product is probably better for that)
- Non-technical team usage
The UI is easy to navigate, and once you know how to use it, running experiments is very easy. When an experiment is set up, the SDK code variables are generated and available to developers immediately, so they can quickly build the experiment code.
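To illustrate how those generated SDK variables are consumed, here is a minimal sketch against the Optimizely Python SDK's Decide API; initialization by SDK key is assumed, and the SDK key, user attributes, flag key (checkout_redesign), and variable name (button_color) are hypothetical placeholders rather than anything from this review.

```python
from optimizely import optimizely

# Initialize the client with your Optimizely SDK key (placeholder below);
# the SDK fetches and keeps the project datafile up to date.
optimizely_client = optimizely.Optimizely(sdk_key='YOUR-OPTIMIZELY-SDK-KEY')

# Create a user context so the decision respects targeting and traffic allocation.
user = optimizely_client.create_user_context('user-123', {'plan': 'pro'})

# Decide on the flag; the returned decision carries the variation's variables.
decision = user.decide('checkout_redesign')

if decision.enabled:
    # Variables defined in the experiment UI are available to code right away.
    button_color = decision.variables.get('button_color', 'blue')
    print(f'Rendering redesigned checkout with a {button_color} button')
else:
    print('Rendering the default checkout')
```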
They have a community Slack channel that is open to anyone. There always seem to be people in there, even over the weekends, and they are always happy to answer any questions you have.
At iBinder we searched for and vetted several suppliers of a feature toggle service to handle feature toggling in our production environment. In addition to our functional requirements, it was crucial for us to find a partner that could deliver an EU-compliant service. We finally decided to sign a service agreement with ConfigCat. This has been a real success story for us: in addition to being compliant, ConfigCat delivers an amazing, flexible, and reliable service. They continue to impress by being very transparent, having fantastic support, and being very solution-oriented and accommodating when it comes to our feature requests. We have now used ConfigCat for approximately 2 years, and we give our warmest recommendation to anyone who needs a stable, reliable, and EU-compliant feature toggle service.
When Google Optimize was discontinued, we searched for a tool that would give us a solid GA4 integration and be easy to use for both the IT team and the product team. Optimizely Feature Experimentation seems to strike a good balance between pricing and capabilities. If you are searching for an experimentation and personalization tool all in one, then the comparison may change and Optimizely becomes too expensive. The same goes if you want a server-side solution; for us, that will be a challenge in the coming years.
Allowed us to migrate seamlessly from one major customer communication system to another, reducing end-user friction and production bugs because we could turn features off when they didn't work as intended.
We went from zero experimentation to running 10-20 experiments concurrently across systems. Engineering teams are thinking in an experimentation mindset.
We have improved various metrics over the course of our experimentation program with Optimizely, so sharing specific numbers is tricky. Essentially, we only ship the versions of the product that perform best in terms of CVR, revenue per visitor, ATV, average order value, average basket size, and so forth, depending on the north-star metric we are trying to move with each release.