Be humble.
Having had the opportunity to do A/B testing on a daily basis for several years, across a dozen clients, I was humbled by many of the outcomes. While the science and tools of split testing are not perfect by any means, my 20+ years of experience in UX/design was challenged on many occasions to explain real-time user behavior. What at times seemed like an easily predictable result was often far from it. We cleaned up the UI and made primary CTAs stand out (above the fold), and were met with lower click engagement. We had users take four times longer to reach the bottom of a page, and conversion at the bottom increased. What made the unpredictability of testing even more interesting was seeing the complete opposite effect from the same test on a different site, sometimes one with nearly the same industry theme, product type, page architecture, and product funnel. It wasn't long before I vowed I'd never again assume what a user would do. How do you move forward from this point with a sense of confidence? Well, you don't.
Data. Lots and lots of it.
While testing, I also studied the results of tens of thousands of hours of research. This Evidence-Based Design (EBD) research draws on studies that each tested hundreds to thousands of users, which aligns with A/B testing, where you typically need thousands of users to reach any kind of statistical significance. What became apparent after many years of testing is that for many of the improvements we were experimenting with, we already had the answer: the EBD data provided direction on the same improvements we were trying to make through testing. Equipped with this knowledge, it appeared that some of those split tests that came out so differently were more affected by other factors, like poor information architecture (IA) or upstream/downstream issues, than we once thought. Which brings us to the next point.
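As a rough illustration of why split tests need that scale (a sketch with assumed numbers, not figures from any client test): detecting a one-point lift on a 5% baseline conversion rate, at the conventional 5% significance level and 80% power, takes thousands of users per variant.

```python
# Rough two-proportion sample-size estimate for an A/B test.
# The baseline (5%) and lift (to 6%) are illustrative assumptions,
# not numbers from any test described in this article.
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per variant to detect a shift from p1 to p2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1

# Detecting a lift from a 5% to a 6% conversion rate:
print(sample_size_per_variant(0.05, 0.06))  # prints 8155 -- per variant
```

Roughly 8,000 users per variant, before accounting for traffic splits, segments, or additional variants. That's the scale of evidence a single "does this change help?" question demands.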
Improve before you optimize.
There are many ways to approach a UX challenge. These methodologies include A/B testing, user testing, user research, behavioral and competitive analysis, user flows, user journeys, prototyping, and any number of other forms of discovery and validation. In many instances, however, EBD already provides the answers on how to improve. It's not about whether one approach is better than another; it's about determining the best approach for the problem. If that's the baseline, why not establish your highest UX benchmark through EBD practice and then rely on these other tools to truly optimize the overall UX, where and when possible?
Back to significance.
Where the user experience needs a truly unique perspective and/or the UI is highly customized, very granular user testing, research, and prototyping methods may be the best options; users may not react in predictable enough ways. Even in these situations, however, "innovation" is often overdone and fraught with opinion. We don't have to throw away the rule book just because the rules don't seem to apply. A great deal more UX issues should be evaluated against preexisting, empirically driven data before we jump into testing and developing journeys. We should create building blocks on solid information and then innovate to fill in the gaps. As testing shows, we usually need hundreds, if not thousands, of users to validate a UX decision with any level of significance. Developing UX, through prototyping for example, on the basis of 5, 10, or 20 users' one-time interactions, varying skill levels, and biases is risky. It also tends to skew the design toward the most common use cases. We can use these smaller numbers and build our UX from there, but what's the advantage? You're back to experimenting with low numbers and letting a lot ride on it.
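To put a number on that risk (a hypothetical calculation, not data from any study mentioned above): if 7 of 10 test participants complete a task, the plausible range for the true completion rate is still enormous. A minimal sketch using a Wilson score interval:

```python
# Wilson 95% confidence interval for a success rate observed in a
# small usability test. The 7-of-10 result is a made-up example.
from math import sqrt
from statistics import NormalDist

def wilson_interval(successes: int, n: int, confidence: float = 0.95):
    """Wilson score interval for a binomial proportion."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    center = (p_hat + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

low, high = wilson_interval(7, 10)
print(f"{low:.0%} to {high:.0%}")  # prints "40% to 89%"
```

An interval that wide can't tell you whether the design works for 4 users in 10 or 9 in 10, and that is exactly the kind of bet the paragraph above describes.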
Take a closer look.
Let's also consider that taking the correct UX approach is not immune to politics. Addressing short-term concerns, filling a role, or attempting to gain client confidence through methods that "sound good" often comes at a cost. Most sites and their users differ from one another no matter how similar they may seem. And then there's the "Watson Method," where it is suggested we simply borrow UX and UI from other sites that look or seem good. Can we really borrow UX/UI without some form of validation or evidence? Maybe that approach did help build the internet and all of our digital things, and in doing so afforded us all the data we need to build them correctly. If we really mean what we say when we use the term "user experience," it's time to start building on what we've learned works and to stop guessing when we don't have enough evidence to support good UX.