Maze vs. Optimizely Feature Experimentation

Overview
Maze
  Rating: Score 8.3 out of 10
  Most Used By: N/A
  Product Summary: Maze is a rapid user testing platform from Maze.design in Paris, designed to give users actionable insights in a matter of hours. The vendor states that with it, users can test remotely, autonomously, and collaboratively.
  Starting Price: $75 per month

Optimizely Feature Experimentation
  Rating: Score 7.8 out of 10
  Most Used By: N/A
  Product Summary: Optimizely Feature Experimentation unites feature flagging, A/B testing, and built-in collaboration, so marketers can release, experiment, and optimize with confidence in one platform.
  Starting Price: N/A
Pricing

Editions & Modules

Maze
  Professional: $75 per month (3+ seats)
  Organization: custom pricing

Optimizely Feature Experimentation
  No answers on this topic

Pricing Offerings

Free Trial
  Maze: No
  Optimizely Feature Experimentation: No

Free/Freemium Version
  Maze: Yes
  Optimizely Feature Experimentation: Yes

Premium Consulting/Integration Services
  Maze: No
  Optimizely Feature Experimentation: Yes

Entry-level Setup Fee
  Maze: No setup fee
  Optimizely Feature Experimentation: Required
Best Alternatives

Small Businesses
  Maze: Smartlook (Score 8.4 out of 10)
  Optimizely Feature Experimentation: GitLab (Score 8.6 out of 10)

Medium-sized Companies
  Maze: Optimal Workshop (Score 9.2 out of 10)
  Optimizely Feature Experimentation: GitLab (Score 8.6 out of 10)

Enterprises
  Maze: Optimal Workshop (Score 9.2 out of 10)
  Optimizely Feature Experimentation: GitLab (Score 8.6 out of 10)
User Ratings

Likelihood to Recommend
  Maze: 6.1 (8 ratings)
  Optimizely Feature Experimentation: 8.0 (44 ratings)

Likelihood to Renew
  Maze: - (0 ratings)
  Optimizely Feature Experimentation: 4.6 (2 ratings)

Usability
  Maze: - (0 ratings)
  Optimizely Feature Experimentation: 7.7 (23 ratings)

Support Rating
  Maze: 10.0 (1 rating)
  Optimizely Feature Experimentation: 3.6 (1 rating)

Implementation Rating
  Maze: - (0 ratings)
  Optimizely Feature Experimentation: 10.0 (1 rating)

Product Scalability
  Maze: - (0 ratings)
  Optimizely Feature Experimentation: 5.0 (1 rating)
User Testimonials
Likelihood to Recommend
Maze
Maze User Testing is great if you're interested in doing user research from the comfort of your own desk. You can easily set up usability tests, surveys, card sorting, and tree tests, among other things, to get a better understanding of how customers use your product. The only limitation with Maze that I can identify at the moment is that it only supports unmoderated tests, so if you'd like to be able to ask follow-up questions in the moment, Maze is not the tool for you.
Optimizely
Based on my experience with Optimizely Feature Experimentation, I can highlight several scenarios where it excels and a few where it may be less suitable.
Well-suited scenarios:
  • Multi-channel product launches
  • Complex A/B testing and feature flag management
  • Gradual rollout and risk mitigation
Less suited scenarios:
  • Simple A/B tests (their Web Experimentation product is probably better for that)
  • Non-technical team usage
Pros
Maze
  • Reporting is top-tier with filtration, heatmaps, user data, and public URLs for stakeholders
  • Figma integration with user testing software is about as fast as it gets
  • The experience for testers is practically seamless when going from our site to a Maze, which results in loads of completed Mazes
Optimizely
  • Splitting traffic between variants and enabling you to scale up or down the amount of traffic in each one
  • Giving a standardised report that you can share with a huge number of users
  • Showing a large variety of results/metrics you can then dive into
Cons
Maze
  • A change/audit log to understand who is doing what and when
  • Some simpler templates for simpler situations
  • Additional means to export data into third-party products for advanced analytics
Optimizely
  • Difficult integration if your data is not front-end
  • Costly MAU pricing model, which should be based on experiments rather than site visits
  • It's not easy to understand how to build an experiment
  • The onboarding team is more focused on getting through their slides than on your needs or understanding
Likelihood to Renew
Maze
No answers on this topic
Optimizely
Competitive landscape
Usability
Maze
No answers on this topic
Optimizely
The UI is easy to navigate. Once you know how to use it, it is very easy to run experiments. And when the experiment is set up, the SDK code variables are generated and available for developers to use immediately, so they can quickly build the experiment code.
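To make the reviewer's point concrete, here is a minimal sketch of reading generated flag variables with the Optimizely JavaScript SDK's decide API. The flag key checkout_redesign, the variable button_color, and the user attributes are invented examples, not details from the review:

```typescript
// Minimal sketch, assuming the Optimizely JavaScript SDK's decide API.
// The flag key "checkout_redesign" and the variable "button_color" are
// hypothetical examples, not taken from the review.
import { createInstance } from "@optimizely/optimizely-sdk";

const optimizelyClient = createInstance({ sdkKey: "<YOUR_SDK_KEY>" });

optimizelyClient?.onReady().then(() => {
  // Bucketing is deterministic per user, following the traffic
  // allocation configured for the experiment in the Optimizely UI.
  const user = optimizelyClient.createUserContext("user-123", { plan: "pro" });
  const decision = user?.decide("checkout_redesign");

  if (decision?.enabled) {
    // Variables defined on the flag become available to developers as
    // soon as the experiment is set up, which is what the reviewer notes.
    const buttonColor = decision.variables["button_color"];
    console.log(`Variation ${decision.variationKey}: ${buttonColor}`);
  }
});
```

The decide call both buckets the user and returns the flag's variable values, so the experiment code can branch on them without extra plumbing.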
Support Rating
Maze
Any issues that presented themselves were dealt with in a quick and efficient manner and fully rectified by the knowledgeable team over at Maze.
Optimizely
Support was there, but it was pretty slow most of the time. Only after escalation was support really given to our teams.
Implementation Rating
Maze
No answers on this topic
Optimizely
It’s straightforward. The docs are well written, and I believe support is available, but we haven’t used it.
Alternatives Considered
Maze
Lookback is an alternative option if you think Maze User Testing is too expensive, but Lookback has a different approach: it focuses on qualitative usability testing instead of quantitative testing. Also, Maze User Testing has a free option that Lookback doesn't, though Lookback's cheapest plan, at $19/month, is less expensive than Maze's.
Optimizely
When Google Optimize was discontinued, we searched for a tool that ensures a good GA4 implementation and is easy to use for both the IT team and the product team. Optimizely Feature Experimentation seems to strike a good balance between pricing and capabilities. If you are searching for experimentation and personalization all in one tool, then the comparison may change and Optimizely turns out to be expensive; the same applies if you want a server-side solution. For us, it will be a challenge in the following years.
Scalability
Maze
No answers on this topic
Optimizely
We had trouble with performance for SSR and the React SDK.
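For context, below is a minimal sketch of the React SDK integration the reviewer refers to; the flag key and user ID are invented examples. In a server-side rendering setup, the client and provider would typically be created per request, which is where performance characteristics start to matter:

```tsx
// Minimal sketch, assuming @optimizely/react-sdk. The flag key
// "checkout_redesign" and the user ID are hypothetical examples.
import React from "react";
import {
  createInstance,
  OptimizelyProvider,
  useDecision,
} from "@optimizely/react-sdk";

const optimizelyClient = createInstance({ sdkKey: "<YOUR_SDK_KEY>" });

function CheckoutButton() {
  // useDecision resolves the flag for the current user and re-renders
  // when the decision becomes available.
  const [decision] = useDecision("checkout_redesign");
  return <button>{decision.enabled ? "New checkout" : "Old checkout"}</button>;
}

export function App() {
  return (
    <OptimizelyProvider optimizely={optimizelyClient} user={{ id: "user-123" }}>
      <CheckoutButton />
    </OptimizelyProvider>
  );
}
```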
Return on Investment
Maze
  • Easy to run quantitative tests
  • Easy to test with a large number of people in production
  • Easy to run unmoderated competitor studies
Optimizely
  • We have improved various metrics throughout the course of our experimentation program with Optimizely, so sharing numbers is tricky. Essentially, we only implement versions of the product that perform the best in terms of CVR, revenue per visitor, ATV, average order value, average basket size, and so forth, depending on the north star we are trying to move with each release.
Screenshots

Maze Screenshots

Screenshot of Maze

Optimizely Feature Experimentation Screenshots

Screenshot of Feature Flag Setup. Here users can run flexible A/B and multi-armed bandit tests, as well as:

- Set up a single feature flag to test multiple variations and experiment types
- Enable targeted deliveries and rollouts for more precise experimentation
- Roll back changes quickly when needed to ensure experiment accuracy and reduce risks
- Increase testing flexibility with control over experiment types and delivery methods

Screenshot of Audience Setup. This is used to target specific user segments for personalized experiments, and to:

- Create and customize audiences based on user attributes
- Refine audience segments to ensure the right users are included in tests
- Enhance experiment relevance by setting specific conditions for user groups

Screenshot of Experiment Results, supporting the analysis and optimization of experimentation outcomes. Viewers can also:

- Examine detailed experiment results, including key metrics like conversion rates and statistical significance
- Compare variations side-by-side to identify winning treatments
- Use advanced filters to segment and drill down into specific audience or test data

Screenshot of a Program Overview. These offer insights into any experimentation program’s performance, including:

- A comprehensive view of the entire experimentation program’s status and progress
- Monitoring for key performance metrics like test velocity, success rates, and overall impact
- Evaluation of the impact of experiments with easy-to-read visualizations and reporting tools
- Performance tracking of experiments over time to guide decision-making and optimize strategies

Screenshot of AI Variable Suggestions. These enhance experimentation with AI-driven insights, and can also help with:

- Generating multiple content variations with AI to speed up experiment design
- Improving test quality with content suggestions
- Increasing experimentation velocity and achieving better outcomes with AI-powered optimization

Screenshot of Schedule Changes, to streamline experimentation. Users can also:

- Set specific times to toggle flags or rules on/off, ensuring precise control
- Schedule traffic allocation percentages for smooth experiment rollouts
- Increase test velocity and confidence by automating progressive changes