A Focus on Social Proof: Principles, Usability and A/B Testing

Paul Ross | August 26, 2016
There’s a lot of talk about the effectiveness of social proof psychology in split testing, and interestingly, the more it’s talked about, the truer it may become. Social evidence we hear from others in our culture persuades us to assign more credibility to that information, whether or not it’s actually true.

According to scientist and author Robert Cialdini, we often react somewhat involuntarily through thoughtless mental shortcuts, and we tend to prefer them. One of these shortcuts is reliance on social evidence: if enough people agree on something, we tend to take their collective word for it rather than investigate it for ourselves.

Our modern, complex lives are fraught with thousands of environmental and peripheral details. Without these convenient shortcuts, or “trigger features,” the sheer effort required to process the daily push and pull of information would create an overwhelming cognitive load.

On site

Let’s take a look at how social evidence and a related driver of social proof, uncertainty, work together during an online shopper’s journey.

According to usability research by the Baymard Institute, users with little experience with a website and its products (low domain knowledge) relied almost entirely on the ratings and reviews of others, far more than users with high domain knowledge did. In one subject’s words, they relied on the “wisdom of the crowd.” Social scientists call this “pluralistic ignorance”: in ambiguous situations, we look to others to resolve our uncertainty.

Power in numbers

Social proof works best when it comes from many people, according to the findings of psychologist Albert Bandura. In one study of children who were afraid of dogs, he found that showing videos of groups of children playing safely and happily with dogs, rather than just one child doing so, helped the fearful children overcome their fears more quickly and more permanently. The larger group provided stronger social evidence.

Consistent with Bandura’s findings, we know there is a strong correlation between the number of reviews available on a website and the decision to purchase. We also know that users find average ratings even more trustworthy. The Baymard research revealed a strong bias towards well-reviewed products with a wide range of ratings.

This information points to the importance of displaying not only the number of reviews but the average rating as well.
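As a minimal sketch of that recommendation, the two numbers worth surfacing on a listing page can be derived directly from the raw review scores. The function and label format below are illustrative assumptions, not taken from the article or from Baymard’s research:

```python
# Hypothetical sketch: deriving the two numbers the research says to
# display (review count and average rating) from raw review scores.
# Names and the label format are illustrative, not from the article.

def review_summary(ratings):
    """Return (count, average) for a product's reviews."""
    count = len(ratings)
    average = round(sum(ratings) / count, 1) if count else 0.0
    return count, average

def summary_label(ratings):
    """Render the summary the way a product listing might show it."""
    count, average = review_summary(ratings)
    return f"{average} out of 5 ({count} reviews)"
```

Showing the count alongside the average matters because, per the Baymard findings cited above, users weigh an average more heavily when they can see how many reviews stand behind it.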

The Test

We worked with one of our A/B testing clients, Beretta, to test out both the principle of Social Proof and the recommended website UI/UX that best conforms to that principle.

Test Description: A test to determine the effect of social proof on transactions. The ability to sort by “Average Rating” and “Top Rated” will both be added to the sort drop-down.
Test Hypothesis: By adding rating-based options to the sort drop-down, we hope to increase exposure to customer product reviews, reduce uncertainty, and leverage social evidence to increase transactions.
Primary Goal: Transactions.
Secondary Goal: Ratings Sort By Engagement.
URL: http://www.berettausa.com/en-us/store/men-s-clothing/ (all product listings)
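The variation described above could be sketched as a mapping from drop-down labels to sort keys. Everything here is an illustrative assumption, not Beretta’s actual implementation: the product data, field names, and especially the smoothing constants used to make “Top Rated” count-aware.

```python
# Hypothetical sketch of the two sort options added in the test.
# Product data, field names, and the prior constants are assumptions.

PRIOR_MEAN, PRIOR_WEIGHT = 3.5, 10  # assumed smoothing constants

def top_rated_score(p):
    """Count-aware score: pulls thinly reviewed products toward the
    prior, so a 4.8 from 3 reviews ranks below a 4.5 from 120."""
    n = p["review_count"]
    return (PRIOR_WEIGHT * PRIOR_MEAN + p["avg_rating"] * n) / (PRIOR_WEIGHT + n)

SORT_OPTIONS = {
    "Average Rating": lambda p: -p["avg_rating"],  # plain mean, highest first
    "Top Rated": lambda p: -top_rated_score(p),    # count-aware, highest first
}

def sort_products(products, option):
    """Sort a product listing by the selected drop-down option."""
    return sorted(products, key=SORT_OPTIONS[option])

catalog = [
    {"name": "Tech Jacket", "avg_rating": 4.8, "review_count": 3},
    {"name": "Wool Sweater", "avg_rating": 4.5, "review_count": 120},
    {"name": "Field Shirt", "avg_rating": 4.2, "review_count": 57},
]
```

One design note: making “Top Rated” count-aware (rather than a second plain sort by mean) mirrors the Baymard finding that users trust a slightly lower average backed by many reviews over a perfect score backed by a handful.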

Here are the results:

[Image: Beretta A/B test results]

By offering users the ability to sort by other users’ reviews, we increased sales by nearly 10%.

Note the difference in improvement between the wording “Average Rating” and “Top Rated.” Users are suspicious of labels that don’t sound realistic, and “Top Rated” reads more like a superlative than a measurement. The Baymard results showed that users were far more likely to convert on a product with a lower overall average rating as long as that average was based on more reviews. The users in our test had no way of knowing how many ratings were behind each product, but once they felt they had enough social evidence, signaled by the “Average Rating” label, that was enough to support a buying decision.

These findings may also hint at other psychological persuasion principles at work, such as trigger features (mentioned earlier) and the Reciprocity Rule. The label “Average Rating” may itself have become trustworthy internet nomenclature, implying a figure derived from many reviews, a true average, regardless of whether there are 2 or 200 of them.

Take Away

The research and principles point to the importance of user logic and the foundations of psychology over simple reliance on arithmetic functionality alone. This kind of insight and fine-tuning needs a human touch and eyes-on experience.

When developing a testing road map, look for these deeper meanings in your customers’ behavior and how they are truly reacting to the UI/UX of your website.
