Many companies build split testing into their digital marketing plans as a way to obtain real feedback from users before making a change to their website. Optimization teams are tasked with identifying main KPIs (key performance indicators) and building strategies around those KPIs to improve them over time. Most of the time, though, split testing ends in inconclusive results: the experiment produces “flat” or neutral data that does not reveal a clear “winner” or “loser.” When an experiment comes back inconclusive, there can be many reasons for the result, as well as many valuable pieces of information that will inform your next iterations.
This article dives into possible causes of inconclusive results, what you can do with inconclusive test results, and how you can use the valuable data to inform your next set of experiment iterations.
An experiment has been live for a substantial amount of time and the results show no improvement. All of the key metrics being measured are flat – now what? Don’t fret! Although at first glance it may seem like the experiment was a waste of time and resources, it wasn’t. Quite the opposite, in fact: there are loads of great tidbits within the data that has been collected.
The first step in dissecting the data that has been collected is to segment it. Reviewing your data with a fine-tooth comb across different segments will reveal information that helps generate iterations. If a test rearranged how a product detail page displayed, segment the data by device: look at desktop, mobile, and tablet separately. Are there differences in how mobile users navigate the variation versus how desktop users navigate the page?
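As a minimal sketch of this kind of segmentation, assume each session from the experiment has been exported with a hypothetical `device` field and a `converted` flag (the field names and sample data here are illustrative, not from any particular testing platform):

```python
from collections import defaultdict

# Hypothetical per-session export from the experiment's variation,
# tagged with the device each visitor used.
sessions = [
    {"device": "desktop", "variant": "B", "converted": True},
    {"device": "mobile",  "variant": "B", "converted": False},
    {"device": "mobile",  "variant": "B", "converted": False},
    {"device": "desktop", "variant": "B", "converted": True},
    {"device": "tablet",  "variant": "B", "converted": True},
]

def conversion_by_segment(sessions, key="device"):
    """Group sessions by a segment key and compute the conversion rate per segment."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, sessions]
    for s in sessions:
        totals[s[key]][1] += 1
        if s["converted"]:
            totals[s[key]][0] += 1
    return {seg: conv / n for seg, (conv, n) in totals.items()}

rates = conversion_by_segment(sessions)
```

A flat overall number can hide a split like this – one device converting well while another drags the aggregate back to neutral – which is exactly the signal that feeds the next iteration.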
Another important step in segmenting data is making sure that outlier data has been filtered out of the results – a step that is often overlooked. Removing outliers is crucial to analyzing your data properly and allowing your experiments to reach statistical significance. The chart below details the importance of statistical significance and outliers within an experiment. Leaving outliers in an experiment can produce false results. As seen in the chart below, removing them often causes the data to regress back to the mean, which may be classified as an “indeterminate” result. Although there may be no “winner,” this is still vital to understanding the impact of this type of test on the website and its users.
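One common way to filter outliers is the Tukey IQR fence, sketched below on hypothetical order values (the numbers are made up; a single bulk purchase plays the outlier):

```python
import statistics

# Hypothetical order values from one variation; the 5000 is an outlier
# (e.g. a single bulk purchase) that inflates the average.
order_values = [52, 48, 55, 60, 47, 51, 5000]

def remove_outliers_iqr(values, k=1.5):
    """Drop values outside [Q1 - k*IQR, Q3 + k*IQR], the common Tukey fence."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

cleaned = remove_outliers_iqr(order_values)

# The apparent "lift" in average order value vanishes once the outlier is gone.
raw_mean = statistics.mean(order_values)        # heavily skewed by the 5000
cleaned_mean = statistics.mean(cleaned)         # regresses back toward the mean
```

This is the mechanism behind the regression to the mean described above: the raw average suggests a dramatic winner, while the cleaned average shows the variation was essentially flat.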
Significance graph for A/B testing. Americaneagle.com
Removing Outside Influences & Biases
When reviewing data from an inconclusive test, analysts need to make sure that no outside influences affected the results. Outside influences can play a major role in the outcome of an experiment. Promotions, sales, and special offerings are all examples of elements that, when introduced during the lifespan of an experiment, can contaminate the results. If a sale was running on the same product being used in the experiment, there would be no way to know its impact on the results unless you ran the same test again without the sale at play.
Exclusion groups are another way to avoid cross-contaminating data. They are how experimenters keep data clean when multiple experiments are in motion at once, ensuring that users are not exposed to multiple live experiments and upholding data integrity. By creating exclusion groups, you can dictate how much traffic should be allocated to each experiment and, most importantly, keep the data separate.
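Under the hood, many platforms implement exclusion groups by deterministically hashing each user into a bucket and giving each experiment a disjoint slice of the bucket range. A minimal sketch of that idea, with hypothetical experiment names and a 50/50 traffic split:

```python
import hashlib

# Hypothetical setup: two live experiments that must never share a user.
# Each experiment owns a disjoint slice of the 0-99 bucket range (50% each).
EXCLUSION_GROUPS = {
    "checkout_button_test": range(0, 50),    # buckets 0-49
    "pdp_layout_test":      range(50, 100),  # buckets 50-99
}

def bucket_for(user_id: str) -> int:
    """Deterministically map a user to a bucket 0-99 via a stable hash."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def eligible_experiment(user_id: str):
    """Return the single experiment this user may see, keeping groups exclusive."""
    b = bucket_for(user_id)
    for name, buckets in EXCLUSION_GROUPS.items():
        if b in buckets:
            return name
    return None
```

Because the hash is stable, a returning user always lands in the same bucket, and because the bucket ranges do not overlap, no user can ever appear in both experiments' data.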
Use Qualitative Analytics Tools
Many optimization specialists use a quantitative analytics tool alongside their testing platform to measure the data for each experiment. This is a great start for learning how your experiments are performing, and it enables you to make informed decisions based upon the data collected. However, it can be taken a step further by integrating qualitative analytics tools. One example is a user behavior tool that records users within a variation of an experiment – functionality included with some testing platforms, and an extremely valuable addition. Analytics and data can only tell so much; the ability to watch how users interact with a test is invaluable. These user recordings can reveal why a test failed or why it’s inconclusive. Sure, the analytical data will also show these results, but the recordings provide another layer of data that fills in the gaps.
For example, a test was run to add a sticky “Submit Order” button on the last step in checkout. The hypothesis was that adding a sticky checkout button to the last step would increase “Place Order” clicks, purchases, and revenue. The analytical data revealed that the variation, which included the sticky “Submit Order” button, was flat across all of the key metrics previously identified. The team dissected the data, looking at segments and other variables that help tell the story, but there were no obvious red flags as to why the test was inconclusive. Upon reviewing user recordings for the variation, it appeared that most users clicked the fixed “Submit Order” button because it was positioned next to the order details. Users like to confirm their order – reviewing the items being purchased and, most importantly, the order total. With this vital piece of information, the team was able to iterate on the test: the next version added the “Order Total” to the sticky “Submit Order” button. Without recordings, this insight could have easily been overlooked.
It’s true, inconclusive test results can be frustrating. But they are extremely common and constructive for building testing roadmaps. The majority of tests run will result in “inconclusive” data. Inconclusive tests reveal what does and does not work for the website and its users. It’s the inconclusive results that pave the way for experimenters to build the winning tests – which is why one could argue they are as important as the winners. The next time an inconclusive test turns up, get excited, because there are sure to be nuggets of gold just around the corner!
How does A/B testing fit into your digital marketing strategy? If you’re unsure of how to answer that question, let us help you. Our team of digital marketing professionals has the knowledge and expertise to test each facet of your website to ensure it’s giving you the most opportunities for conversions. Contact us today to learn how we can help you.