Tricky for Testers Who Don't Read the Manual
October 12, 2019

Anonymous | TrustRadius Reviewer
Score 6 out of 10
Vetted Review
Verified User

Overall Satisfaction with Optimizely

We use Optimizely mainly within our development group, though the framework is available to all departments. Its main function is to let us show or hide features for subsets of users in the field, and to make those changes via server-side controls. We're also starting to use it for cohort testing, though we're still in the early stages.
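For context, a server-side feature check with Optimizely's Python SDK looks roughly like the sketch below; the feature key, attributes, and file path are placeholders rather than our actual setup.

    from optimizely import optimizely

    # The client is built from the project datafile (normally fetched from
    # Optimizely's CDN); the local path here is just a placeholder.
    with open('optimizely_datafile.json') as f:
        client = optimizely.Optimizely(datafile=f.read())

    def render_checkout(user_id):
        # The user ID and attributes passed here are what the audience
        # conditions defined in the dashboard are matched against.
        enabled = client.is_feature_enabled(
            'new_checkout_flow',                  # hypothetical feature key
            user_id,
            attributes={'app_version': '4.2.0'},  # illustrative attribute
        )
        return 'new checkout' if enabled else 'old checkout'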
  • Developers seem to be able to set up experiments fairly easily.
  • The dashboard allows users to control settings for individual devices for testing purposes.
  • Functional on both iOS and Android.
  • At least in our implementation, there didn't seem to be a great way to scale the number of testers who can individually control experiment settings.
  • Differences between the experiment stages weren't terribly intuitive to figure out.
  • When we first started using it, there was no support for boolean expressions when defining an audience. If we wanted to enable an experiment for anyone on version N or higher, we couldn't just say "if version >= N"; we had to add each subsequent version individually (a sketch of one possible workaround follows this list). I believe this has been addressed in the new version.
  • Development has gained additional flexibility from being able to enable or disable features on the fly.
  • We've been able to change our risk calculations and release features more aggressively than before, because we can ramp up exposure gradually to confirm nothing breaks, or revert if a problem is discovered after release.
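To illustrate the version-targeting workaround mentioned above: rather than enumerating versions in the dashboard, the ">= N" comparison can be done in application code and handed to Optimizely as a plain attribute that the audience matches exactly. This is only a sketch; the build cutoff, attribute, and experiment key are made up.

    from optimizely import optimizely

    MIN_BUILD = 420  # hypothetical "version N" cutoff

    with open('optimizely_datafile.json') as f:
        client = optimizely.Optimizely(datafile=f.read())

    def bucket_user(user_id, app_build):
        # Do the ">= N" comparison ourselves and pass the result as a string
        # attribute; the audience in the dashboard then only needs an exact
        # match on meets_min_build == 'true' instead of listing every version.
        attributes = {'meets_min_build': 'true' if app_build >= MIN_BUILD else 'false'}
        return client.activate('version_gated_experiment', user_id, attributes)

The trade-off is that the version logic moves out of the dashboard and into code, so changing the cutoff means a deploy rather than an audience edit.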
I wasn't involved in evaluating any other third-party tools of this type.
It seems like it has most of the functionality you'd want in an A/B testing tool. However, it also seems like there are some implementation expectations built into the functionality. I suspect that some of the difficulties we've had getting up to speed have been because our usage doesn't adhere to the intended usage pattern. I'm not sure whether we diverged by choice, by necessity, or because of insufficient guidance.

Optimizely Web Experimentation Feature Ratings

A/B experiment testing: 5
Split URL testing: Not Rated
Multivariate testing: 7
Multi-page/funnel testing: Not Rated
Cross-browser testing: Not Rated
Mobile app testing: 5
Test significance: 5
Visual / WYSIWYG editor: Not Rated
Advanced code editor: Not Rated
Page surveys: Not Rated
Visitor recordings: Not Rated
Preview mode: Not Rated
Test duration calculator: Not Rated
Experiment scheduler: 6
Experiment workflow and approval: 6
Dynamic experiment activation: Not Rated
Client-side tests: 6
Server-side tests: Not Rated
Mutually exclusive tests: Not Rated
Standard visitor segmentation: 6
Behavioral visitor segmentation: Not Rated
Traffic allocation control: Not Rated
Website personalization: Not Rated
Heatmap tool: Not Rated
Click analytics: Not Rated
Scroll maps: Not Rated
Form fill analysis: Not Rated
Conversion tracking: Not Rated
Goal tracking: Not Rated
Test reporting: Not Rated
Results segmentation: Not Rated
CSV export: Not Rated
Experiments results dashboard: Not Rated

Using Optimizely

Based on just attempting to parse the dashboard without additional training or documentation, a number of things weren't obvious. We weren't sure, for example, whether an experiment needed to be "on" or "paused" in order to do in-house testing before release. I think the new version has cleaned up a lot of this confusion, though.

Optimizely Reliability

As mentioned, our implementation didn't seem to support more than ten testers being able to enable or disable experiments from within the dashboard. From a testing perspective, this is a mandatory feature. It's possible that the tool supports this usage, but our implementation or training got in the way when we tried to figure it out.
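It's possible the intended path is to force variations for tester IDs in code rather than through the dashboard whitelist. A rough sketch of what that might look like with the Python SDK follows; the experiment key, variation key, and tester IDs are all placeholders.

    from optimizely import optimizely

    with open('optimizely_datafile.json') as f:
        client = optimizely.Optimizely(datafile=f.read())

    # IDs of in-house testers; unlike the dashboard whitelist, a code-side
    # list like this isn't capped at ten entries.
    QA_TESTERS = {'tester-001', 'tester-002'}

    def variation_for(user_id):
        if user_id in QA_TESTERS:
            # Pin testers to the treatment so they always see the experiment,
            # regardless of traffic allocation.
            client.set_forced_variation('new_checkout_experiment', user_id, 'treatment')
        return client.get_variation('new_checkout_experiment', user_id)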