Using the Optimizely API to avoid flashes of unstyled content in A/B tests

We use Optimizely on the Mozilla Developer Network to build and analyze split experiments. We find that the tool helps us move forward with confidence, understanding how changes affect user behavior. We care about accuracy and user experience, so we were concerned when we discovered a problem in a recent experiment that could have affected both.

Optimizely uses JavaScript to re-style page elements when testing different variations. This approach has some advantages, but also introduces one important risk. A page can render before Optimizely loads. If this happens, the page elements being tested can briefly use their default styles before being modified. For example, a button could be blue for an instant before turning green. This flash of unstyled content (FOUC) can be distracting and affect test results. When these flashes occur, users will be more likely to notice and interact with the elements that are being modified.

Thankfully, there’s a simple way to avoid these flashes: modify styles with server-side code instead.

Create an experiment as usual and add some named variations. Don’t use the Optimizely editor to change any styles. Instead, use server-side code to show different styles to different users. We use percent-based feature flags to accomplish this, but you could alternatively show different styles randomly. You don’t even need to group users evenly; Optimizely handles unequal groups just fine.
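As a rough illustration of the server-side assignment step, here is a minimal Python sketch. The variation names and percentages are hypothetical, not from the original post; MDN uses a percent-based feature flag library, but plain weighted random assignment like this works too, and the groups do not need to be equal in size.

```python
import random

# Hypothetical variations and percentages; adjust to your experiment.
# Groups don't need to be equal -- Optimizely handles unequal groups fine.
VARIATIONS = [
    ("control", 50),       # default button style
    ("green-button", 50),  # modified style under test
]

def choose_variation(variations=VARIATIONS):
    """Pick a variation name according to its percentage weight."""
    roll = random.uniform(0, 100)
    cumulative = 0
    for name, percent in variations:
        cumulative += percent
        if roll < cumulative:
            return name
    return variations[-1][0]  # guard against floating-point edge cases
```

The chosen name can then be passed into the template context so the page renders with the right styles on the very first paint.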

How can you compare the behavior of the different groups? The Optimizely JavaScript API provides a method for exactly this: use bucketVisitor to manually assign visitors to the appropriate variations for later analysis.

The end result will look something like this. This example uses Jinja, but any server-side template language will do. Remember that this just changes styles and groups visitors. You’ll still need to build and run a corresponding Optimizely experiment with named variations. Be sure to update the second and third arguments of push to use your real experiment and variation IDs. These are available in the experiment’s Diagnostic Report.
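A minimal sketch of such a template, assuming a variation variable set by server-side code; EXPERIMENT_ID, VARIATION_ID, and CONTROL_VARIATION_ID are placeholders for the real numeric IDs from the Diagnostic Report:

```html
{# "variation" is set by server-side code before rendering #}
{% if variation == 'green-button' %}
  <style>
    /* Styles applied on first paint -- no flash of the default style */
    .cta-button { background-color: green; }
  </style>
{% endif %}

<script>
  window.optimizely = window.optimizely || [];
  {% if variation == 'green-button' %}
    /* Replace the placeholder IDs with the real experiment and
       variation IDs from the Diagnostic Report. */
    window.optimizely.push(['bucketVisitor', EXPERIMENT_ID, VARIATION_ID]);
  {% else %}
    window.optimizely.push(['bucketVisitor', EXPERIMENT_ID, CONTROL_VARIATION_ID]);
  {% endif %}
</script>
```

Because the styles are rendered into the page by the server, the visitor never sees the default style, while bucketVisitor still records which group they belong to.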

Need a real-world example? Take a look at this recent change to MDN that uses the same approach.

If you’re having trouble with flashes of unstyled content in Optimizely experiments, try this out. With just a little code, you can run completely invisible experiments without sacrificing any of the great analysis features Optimizely provides.


2 thoughts on “Using the Optimizely API to avoid flashes of unstyled content in A/B tests”

  1. Raphael

Thank you for this post. I have some questions.

Does Optimizely provide a more intelligent way to choose which variation to show (apart from simple random assignment)? For example, Google Content Experiments gives you a “weight” value that gives a variation a better chance of being shown depending on its current performance.

I assume I have to set a cookie manually for “stickiness”, right?

1. John (post author)

      Hi Raphael,

      I don’t think Optimizely variations can be weighted exactly like they can in Google Content Experiments. It may be possible to use the Optimizely API to pull down information about performance and do a weight calculation manually.

      Some sort of storage will be needed to make a variation “sticky” for a visitor. We used a feature flag library on MDN, which in turn uses cookies.

