Maze vs. Optimizely Feature Experimentation

Overview
Maze
Rating: Score 8.0 out of 10
Most Used By: N/A
Product Summary: Maze is a rapid user testing platform from Maze.design in Paris, designed to give users actionable user insights in a matter of hours. The vendor states that with it, users can test remotely, autonomously, and collaboratively.
Starting Price: $75 per month

Optimizely Feature Experimentation
Rating: Score 7.5 out of 10
Most Used By: N/A
Product Summary: Optimizely Feature Experimentation combines experimentation, feature flagging, and built-for-purpose collaboration features into one platform.
Starting Price: N/A
Pricing
Editions & Modules

Maze:
Professional: $75 per month (3+ seats)
Organization: custom pricing

Optimizely Feature Experimentation:
No answers on this topic
Offerings
Pricing Offerings
Maze / Optimizely Feature Experimentation
Free Trial: No / No
Free/Freemium Version: Yes / Yes
Premium Consulting/Integration Services: No / Yes
Entry-level Setup Fee: No setup fee / Required
Best Alternatives
Maze / Optimizely Feature Experimentation

Small Businesses: Smartlook (Score 8.5 out of 10) / Kameleoon (Score 9.5 out of 10)
Medium-sized Companies: Optimizely Web Experimentation (Score 8.7 out of 10) / Kameleoon (Score 9.5 out of 10)
Enterprises: Optimizely Web Experimentation (Score 8.7 out of 10) / Kameleoon (Score 9.5 out of 10)
User Ratings
Maze / Optimizely Feature Experimentation

Likelihood to Recommend: 8.0 (7 ratings) / 7.4 (21 ratings)
Likelihood to Renew: - (0 ratings) / 8.0 (1 rating)
Usability: - (0 ratings) / 9.0 (1 rating)
Support Rating: 10.0 (1 rating) / - (0 ratings)
Implementation Rating: - (0 ratings) / 10.0 (1 rating)
Product Scalability: - (0 ratings) / 5.0 (1 rating)
User Testimonials
Maze / Optimizely Feature Experimentation
Likelihood to Recommend
Maze
Maze User Testing is great if you're interested in doing user research from the comfort of your own desk. You can easily set up usability tests, surveys, card sorting, and tree tests, among other things, to get a better understanding of how customers use your product. The only limitation with Maze that I can identify at the moment is that it supports only unmoderated tests, so if you'd like to be able to ask follow-up questions in the moment, Maze is not the tool for you.
Optimizely
Optimizely Feature Experimentation works really well for setting up feature flags, with an easy UI for turning them on and off or ramping up a gradual rollout. It also works really well for setting up split tests, where you can split your traffic by percentage as well as by almost any custom data attribute you wish to define. It is geared toward robust features rather than visual changes; Optimizely Edge or Web are better suited for those.
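The percentage-based traffic splitting the reviewer describes can be illustrated with a small, self-contained sketch. This is not Optimizely's actual bucketing code; the function names, experiment key, and 10,000-bucket space are assumptions for illustration:

```python
import hashlib

def bucket(user_id: str, experiment_key: str, buckets: int = 10_000) -> int:
    """Hash a user into a stable bucket in [0, buckets)."""
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def assign_variation(user_id: str, experiment_key: str,
                     treatment_pct: float = 0.5) -> str:
    """Assign 'treatment' to roughly treatment_pct of traffic."""
    if bucket(user_id, experiment_key) < treatment_pct * 10_000:
        return "treatment"
    return "control"
```

Keying the hash on both the experiment and the user keeps each user's assignment stable across sessions while letting different experiments split traffic independently.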
Pros
Maze
  • Reporting is top-tier with filtration, heatmaps, user data, and public URLs for stakeholders
  • Figma integration with user testing software is about as fast as it gets
  • The experience for testers is practically seamless going from our site to a Maze, which yields loads of completed Mazes.
Optimizely
  • Its ability to run A/B tests and multivariate experiments simultaneously allows us to identify the best-performing options quickly.
  • Optimizely integrates with our analytics tools, giving us immediate feedback on how our experiments are performing. With this pairing, we can arrive at informed decisions quickly.
  • Additionally, feature toggles enable us to introduce new features or modifications to specific user groups, guaranteeing a smooth, controlled user experience that avoids interruptions.
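The gradual rollouts and group-targeted feature toggles praised above can be sketched in a similarly self-contained way. Again, this is hypothetical: the flag key, attribute names, and percentage bucketing are illustrative, not Optimizely's SDK API:

```python
import hashlib
from typing import Optional

def flag_enabled(user_id: str, flag_key: str, rollout_pct: int,
                 attributes: Optional[dict] = None,
                 audience: Optional[dict] = None) -> bool:
    """Feature toggle: enabled for rollout_pct percent of users,
    optionally restricted to users matching the audience attributes."""
    if audience:
        attrs = attributes or {}
        if any(attrs.get(k) != v for k, v in audience.items()):
            return False  # user is outside the targeted group
    # Deterministic hash keeps a user's result stable as the rollout grows.
    h = int(hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest(), 16)
    return h % 100 < rollout_pct
```

Because the hash is deterministic, ramping `rollout_pct` from 10 to 50 only adds users; no one who already had the feature loses it mid-rollout.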
Cons
Maze
  • Change/Audit log to understand who is doing what and when
  • Some simpler templates for simpler situations
  • Additional means to export data to third-party products for advanced analytics
Optimizely
  • Splitting feature flags from actual experiments is slightly clunky; ideally you could create a flag on the spot while starting an experiment instead of always needing to start with a flag.
  • AI-based recommendations of metrics to track, based on the experiment description, would be a welcome addition.
Usability
Maze
No answers on this topic
Optimizely
All the features we used were quite clear, and the documentation is good.
Support Rating
Maze
Any issues that presented themselves were dealt with in a quick and efficient manner and fully rectified by the knowledgeable team over at Maze.
Optimizely
No answers on this topic
Implementation Rating
Maze
No answers on this topic
Optimizely
It’s straightforward. The docs are well written, and I believe support is available, but we haven’t used it.
Alternatives Considered
Maze
Lookback is an alternative option if you find Maze User Testing too expensive, but Lookback takes a different approach: it focuses on qualitative usability testing rather than quantitative testing. Also, Maze User Testing has a free option while Lookback doesn't, although Lookback's cheapest plan, at $19/month, is less expensive than Maze's.
Optimizely
Optimizely Feature Experimentation is better for building more complex experiments than Optimizely Web. However, Optimizely Web is much easier to kickstart your experimentation program with as the learning curve is much lower, and dedicated developer resources are not always necessary (marketers can build experiments quickly with Optimizely Web without developers' help).
Scalability
Maze
No answers on this topic
Optimizely
We had trouble with performance for SSR and the React SDK.
Return on Investment
Maze
  • Easy to run quantitative tests
  • Easy to test with a large number of people in production
  • Easy to run unmoderated competitor studies
Optimizely
  • Experimentation is key to figuring out the impact of changes made on-site.
  • Experimentation is very helpful with pricing tests and other backend tests.
  • Before running an experiment, many factors need to be evaluated, such as conflicting experiments, audience, user profile service, etc. This requires a considerable amount of time.
Screenshots

Maze Screenshots

Screenshot of Maze

Optimizely Feature Experimentation Screenshots

Screenshot of AI Variable suggestions: AI helps to develop higher quality experiments. Optimizely’s Opal suggests content variations in experiments and helps to increase test velocity and improve experiment quality.

Screenshot of Integrations: display of the available integrations in-app.

Screenshot of Reporting: used to share insights, quantify experimentation program performance using KPIs like velocity and conclusive rate across experimentation projects, and drill down into the charts and figures to see an aggregate list of experiments. Results can be exported to a CSV or Excel file, and KPIs can be segmented using project filters, experiment type filters, and date ranges.

Screenshot of Collaboration: centralizes tracking of tasks in the design, build, and launch of an experiment to ensure experiments are launched on time. Includes calendar, timeline, and board views that can be customized, saved, and shared with other stakeholders.

Screenshot of Scheduling: users can schedule a Flag or Rule to toggle on/off, schedule traffic allocation percentages, and achieve faster experimentation velocity and smoother progressive rollouts.

Screenshot of Metrics filtering: dynamic event properties to filter through events. Dynamic events provide better insights for experimenters, who can explore metrics in depth for more impactful decisions.