Stop Experimenting on Your Customers!

As eBusiness drives more business, it is attracting more sophisticated marketers who bring with them classic training in marketing research methods from places like P&G. These marketers understand how critical customer research is in guiding their decisions in product development, messaging and pricing, and in helping them to predict the success of a particular line of business.

Experimental field-testing, commonly referred to as A/B testing, is one of the most powerful techniques a marketer has to guide decision-making. In principle, A/B testing should bring the same kind of power to Web businesses that are trying to optimize their site designs.

However, experimenting in the field (that is, on real customers) with an eBusiness Web site can be risky. Companies should instead consider permission A/B testing, which provides even more powerful insights while avoiding the disadvantages of field testing.

Downside for eBusiness

The general A/B testing technique is simple: randomly assign potential customers to experience one of a few scenarios, and then evaluate how well each alternative performs in terms of customer outcomes. Results from such experiments are highly predictive and can decisively show which scenario drives better business outcomes. Who can argue for version A if version B produced more revenue? Decisions become a matter of looking at the facts rather than a matter of opinion.

Marketers trained in A/B field testing in the offline world might naturally think of applying the technique to optimize their Web sites.

Some eBusinesses have employed A/B field testing for optimizing their Web designs. Multiple versions of the site are deployed live, and customers who come to the company’s site are randomly directed, unbeknown to them, to one of these versions. The performance of each version is then compared in terms of desired customer behavior — revenue, lead generation, advertising click-throughs — or other critical business outcomes of interest.
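As a rough illustration of how that random routing is often implemented, here is a minimal sketch in which each visitor is deterministically hashed into one of the live versions. The function name, variant labels, and experiment name are all hypothetical, and real platforms layer tracking, traffic weighting, and opt-outs on top of this.

```python
import hashlib

# Hypothetical variants of the live site (labels are illustrative only).
VARIANTS = ["version_a", "version_b"]

def assign_variant(visitor_id: str, experiment: str = "homepage_test") -> str:
    """Bucket a visitor into one of the variants.

    Hashing the visitor ID gives a roughly uniform random split while ensuring
    a returning visitor always sees the same version, with no per-visitor state.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The server would render or redirect to the chosen version, and the visitor's
# subsequent behavior (purchases, clicks, leads) is logged per variant.
print(assign_variant("visitor-12345"))
```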

Unfortunately, there are some downsides and limitations to A/B field testing in its purest form for eBusiness. For sites driving significant volume or trying to gain market share, the stakes are too high to be experimenting on real customers: real revenue might be lost in poorly performing versions.

Experimenting on customers, if discovered, can create distrust of the brand.

Deploying multiple customer-ready prototype versions of the site can be expensive. To experiment on customers, each alternative version of the site must be completely finalized and customer ready. Yet many concepts could be tested and eliminated at early stages of design, before any investment is made in building them out.

New Testing Approach

A/B field testing for eBusiness can only show you the “what” and not the “why.” For example, a field experiment might show that there was little difference in customer purchases between a “free shipping” offer and a “25 percent off” offer. However, the field experiment cannot tell you why there was little difference. In this example, the two offers might have produced similar customer behavior because people didn’t notice the offers.

Fortunately, a new approach has been developed to provide the power of A/B field testing without the drawbacks of experimenting on customers behind their backs. Permission A/B testing methods deliver that same power, with the added benefits of explaining customer behavior and eliminating the risk of alienating customers.

In this approach, customers are intercepted from a Web site and asked if they will participate in a customer experience test. They are then randomly assigned to competing versions of a Web design.

Because customers know they are participating in an experiment, participants can be asked to disclose their purpose and goals for visiting the site and then told to pursue these goals as part of the experiment. Because the participants’ goals and intentions are known, their behavior can be easily interpreted. Furthermore, more information can be gained from participants, such as their backgrounds, expectations, and opinions regarding the site experience.

The advantage of this permission-based customer experience testing approach is that it combines the power of experimental A/B testing with the additional power of capturing attitudes and behavior at the point of interaction with the media. Thus, marketers can confidently predict the most successful design using both observed outcomes and insights into customer attitudes.

Longer-Term Impact

Understanding attitudes as well as behavior can help determine the longer-term business impact of a site design.

For example, one version of a site’s presentation of advertising might create greater click-through behavior than another version. Based on observed customer behavior alone, it would appear that this version will be the most successful in driving advertising revenue. However, permission-based customer experience testing might show a more revealing result: although customers clicked on the advertisements because of their placement in the winning site design, they did not like the ads and do not plan to come back in the future.

Thus the “winning” site only looks superior in the short term. Over the longer term, the site design that produces the happiest customers and the highest return traffic will drive more revenue, even if each customer clicks on fewer ads. In this case, A/B field testing that looks only at customer behavior, without understanding attitudes, is misleading.

Another advantage of asking explicit questions of participants during the experiment is that their responses suggest what kinds of changes might lead to improvements, and they inform hypotheses for further testing. Field tests, on the other hand, show which of the tested designs performs best, but they do not generate ideas for further improvement.

With permission-based customer experience testing, earlier-stage prototypes can be tested. Because customers agree to participate in a study — whether they are intercepted on the site or recruited from a research panel — their expectations can be set appropriately. Therefore, the study can be used to compare customer reactions to simple prototypes that might not be fully functional.

Scientific Practices

As an extreme example, customers can be shown a wireframe design with links that only lead to blank pages. They can be asked if they would click on these links or not, why, and what they think they will find if they do. A simple experiment such as this can eliminate several wrong turns very early in the design process.

Successful A/B testing, whether in the field or with permission, depends on good scientific practice. To isolate the cause of differences in how customers respond to different site designs, an experiment must rule out alternative explanations and provide confidence in the findings.

Critical components of any A/B test plan include the following (a short illustrative sketch follows the list):

  • Random assignment. Participants from the target population must be randomly assigned to the different versions of the site (that is, the participant or experimenter cannot choose a version for any participant). Combined with large enough sample size, random assignment ensures that the two experimental samples are roughly equal in all aspects, both observed and unobserved, and rules out any alternative explanation due to sample differences.
  • Large enough samples. Samples need to be large enough to reliably pick up any existing differences between sample populations through appropriate statistical tests. Samples that are too small are likely to find null results, even if the different versions of the site really did make a difference.
  • Experimental control. To understand whether a site design element causes customers to behave and think differently, all other aspects of the experience need to be the same. Only when all aspects of the sites are constant except the design element can differences among the sites be attributed to the differences in the design element. For example, suppose an A/B test is run to understand whether a site redesign performs better than the original one. If product prices displayed on the site were not held constant across the two versions of the design, then any differences in customer behavior might be attributed to the differences in prices, not just the design.
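To make these components concrete, here is a minimal sketch, in Python, of how such an experiment might be analyzed: visitors are randomly assigned, per-version conversion counts are compared with a two-proportion z-test, and the traffic volume and conversion rates are made-up numbers used purely for illustration.

```python
import math
import random

random.seed(0)

# Simulated experiment: 10,000 visitors randomly assigned between two designs.
# The "true" rates below stand in for everything else being held constant, so
# any measured difference reflects only the design element under test.
true_rate = {"A": 0.040, "B": 0.050}      # hypothetical underlying conversion rates
visitors = {"A": 0, "B": 0}
conversions = {"A": 0, "B": 0}

for _ in range(10_000):
    version = random.choice(["A", "B"])    # random assignment, not self-selected
    visitors[version] += 1
    if random.random() < true_rate[version]:
        conversions[version] += 1

# Two-proportion z-test: is the observed difference larger than chance alone
# would produce? With too small a sample relative to the effect, the test may
# fail to reach significance, which is exactly the sample-size caveat above.
p_a = conversions["A"] / visitors["A"]
p_b = conversions["B"] / visitors["B"]
p_pool = (conversions["A"] + conversions["B"]) / (visitors["A"] + visitors["B"])
se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors["A"] + 1 / visitors["B"]))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # two-sided

print(f"A: {p_a:.2%}   B: {p_b:.2%}   z = {z:.2f}   p = {p_value:.3f}")
```

A permission-based test would add attitudinal questions on top of the same statistical machinery, so the behavioral comparison above can be interpreted alongside what participants say about the experience.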

Don’t Experiment

The success of eBusiness will continue to attract great marketers who appreciate the power of customer research and good experimental design. By using permission-based customer experience testing, great marketers can optimize their sites without the negative impact of field testing, and with the further benefit of learning the “why” behind customer behavior.

Stop experimenting on your customers, and start using permission-based customer experience testing to get predictive insights that will help you truly optimize the online channel.


Dr. Bonny Brown is an experimental social psychologist and director of research and public services at Keynote Systems, Inc.

