Suffice it to say that even in my early career as an online marketer, I’ve been exposed to many forms of competition – from sales spiffs to rival colleagues, companies, and even various forms of online testing. But is competition inherently good for e-Business? More specifically, when you pit two or more versions of your e-Commerce site against one another, who wins – and how?

Key Takeaways:

  • Testing implies losers. It’s important to anticipate where – or what – those losses will be.
  • Optimization and risk management are not mutually exclusive.
  • Marketers should feel empowered, not helpless, during the test phase.

Who wins when you test?

Online testing hinges on one fundamental fact: you’ll have winning creative and losing creative. More specifically, in e-Commerce, one or more versions of your website will perform better (e.g., generate more clicks, conversions, revenue, or a higher order value) than all others. This applies to segmented tests, personalization, targeting, multivariate testing, and even simple split (A/B) testing.

Logically speaking, what you stand to gain is a better understanding of how your visitors react to different versions of your site at one point in time. I couldn’t find the right verbiage for this benefit, so we can start by calling it the test phase observation. This observation is the desired outcome from any online test.

In order to evaluate the risk associated with online testing, ask two questions:

  1. Is your winning creative enough of a winner to outweigh the loss associated with displaying losers to your potential customers?
  2. Are the outcomes veridical? In other words, will the test phase observation translate to future real world outcomes?

These two rather straightforward questions expose significant risk for e-Business, because the winning effect of online testing (i.e., a lift in conversion or revenue per visitor) often does not outweigh the revenue lost by displaying losing creative over the entire course of the test phase. Further, the second question highlights a recurring problem: our over-reliance on statistical confidence, as though it eliminated the need for continuous optimization.
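To make the first question concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption of mine (traffic, revenue per visitor, test length, and how much of the observed lift persists), not a real benchmark; the point is simply how the trade-off is calculated.

```python
# Hypothetical break-even sketch for a 50/50 split test.
# Every number here is an illustrative assumption, not a real benchmark.

visitors_per_day = 10_000       # total site traffic
test_days = 28                  # length of the test phase
rpv_control = 2.00              # revenue per visitor, current site ($)
rpv_challenger = 2.10           # revenue per visitor, challenger (+5% observed)

# During the test, half of all visitors see the weaker experience.
test_visitors = visitors_per_day * test_days
revenue_forgone_in_test = test_visitors * 0.5 * (rpv_challenger - rpv_control)

# After implementation, suppose only half of the observed lift actually persists
# (question 2: will the test phase observation hold up in the real world?).
realized_lift_fraction = 0.5
daily_gain_post_test = (visitors_per_day * (rpv_challenger - rpv_control)
                        * realized_lift_fraction)
days_to_break_even = revenue_forgone_in_test / daily_gain_post_test

print(f"Revenue forgone during the test: ${revenue_forgone_in_test:,.0f}")
print(f"Days of post-test lift needed to recoup it: {days_to_break_even:.0f}")
```

If the test-phase lift evaporates entirely after implementation (a realized fraction of zero), the revenue forgone during the test is never recovered – which is exactly the risk the second question is probing.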

The confidence quagmire

We run tests. Lots of tests. In fact, we’ve been told that more testing is good for us and that a measuring stick for any testing vendor is how many simultaneous tests their customers can run (it’s not; instead, ask how much money their clients made). Testing isn’t inherently bad, but the problem arises when we draw conclusions from our observations.

Statistical significance is not a business result, and your confidence interval does not tell you the probability that the true effect – let alone a future outcome – rests within it. Consider what your data actually suggests, and what it almost certainly does not mean (a minimal sketch follows the list below):

  • What your data likely suggests: The test phase observation reflects a real effect (difference) between the versions of your website.
  • What your data does not mean: Any outcomes that you’ve observed will translate to future real-world outcomes (e.g., a +5% lift in RPV during the test phase will translate to a +5% RPV lift post-implementation).
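To illustrate the distinction, here is a small, self-contained sketch (standard-library Python, with made-up conversion counts) that runs a two-proportion z-test. A small p-value supports the first bullet – the difference observed during the test was probably real – but nothing in the calculation promises the second: that the observed lift carries forward after implementation.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (illustrative only)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical test-phase counts: 50,000 visitors per variant.
p_a, p_b, z, p_value = two_proportion_z_test(conv_a=2_000, n_a=50_000,
                                              conv_b=2_150, n_b=50_000)
print(f"Control conversion:    {p_a:.2%}")
print(f"Challenger conversion: {p_b:.2%} ({(p_b / p_a - 1):+.1%} observed lift)")
print(f"z = {z:.2f}, p = {p_value:.3f}")
# The p-value speaks to whether the test-phase difference was likely real;
# it says nothing about whether that lift will reappear post-implementation.
```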

What you win with online testing is a single data point: the test phase observation. What you stand to lose, even assuming you implement the winning version(s) of your site, is incremental gross merchandise volume (GMV) – both during the test and after it, if the observed lift fails to hold. It’s not a gamble to take lightly.

The smartest competitors win all of the time. Here’s how:

Always be winning

Consistent growth is a competitive advantage in any industry. Wall Street bankers do it by mitigating losses and capitalizing on gains. Business-savvy enterprise brands such as National Geographic are doing it too, by way of adaptive e-Commerce optimization.

Adaptive optimization is intelligence-driven: customer interactions are leveraged to predict outcomes and, consequently, to change the delivery proportions of your test creative. In this way, online merchants can continuously display the versions of a page that are most likely to produce revenue- or conversion-positive outcomes at any point in time (a minimal sketch of one such allocation strategy follows the list below). This allows retailers to win on both ends of the spectrum:

  1. They win by displaying more of what works, generating more revenue.
  2. They win by displaying less of what doesn’t, mitigating goal-negative outcomes.
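One common way to implement this kind of adaptive allocation – and, to be clear, this is an assumption on my part rather than a description of any particular vendor’s engine, HiConversion’s included – is a Bayesian multi-armed bandit such as Thompson sampling. Each variant’s conversion record is modeled as a Beta distribution, and traffic shifts automatically toward whichever variants are most likely to be performing best right now. The variant names and conversion rates below are hypothetical.

```python
import random

# Minimal Thompson-sampling sketch for adaptive traffic allocation.
# Variant names and conversion rates are hypothetical, for illustration only.
variants = {"control": 0.040, "challenger_a": 0.043, "challenger_b": 0.038}
stats = {name: {"successes": 1, "failures": 1} for name in variants}  # Beta(1, 1) priors

for visitor in range(50_000):
    # Sample a plausible conversion rate for each variant from its posterior,
    # then serve the variant with the highest sampled rate.
    sampled = {name: random.betavariate(s["successes"], s["failures"])
               for name, s in stats.items()}
    shown = max(sampled, key=sampled.get)

    # Simulate the visitor's behavior and update that variant's record.
    converted = random.random() < variants[shown]
    stats[shown]["successes" if converted else "failures"] += 1

for name, s in stats.items():
    served = s["successes"] + s["failures"] - 2
    print(f"{name:>12}: served {served:>6} visitors, "
          f"observed rate {(s['successes'] - 1) / max(served, 1):.3%}")
```

The practical effect is the one the list above describes: weaker variants still receive enough exposure to be re-evaluated if visitor behavior shifts, but the bulk of the traffic flows to whatever is winning at the moment.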

When customer expectations change, as they always do, the display proportions are adjusted. Of course, you’ll need rock-solid technology to govern the process (plug alert: HiConversion has that covered), but the end result is sustainable revenue growth over the course of the test phase and beyond.

Ultimately, this method all but eliminates the aforementioned risks associated with online testing, both pre- and post-implementation. Whereas testing and risk mitigation are, in most cases, at odds with one another, optimization and risk mitigation are not mutually exclusive. Therefore, the marketer who once waited helplessly for statistically significant results before taking action is now empowered during the test phase, leveraging customer interactions to make decisions about the website in real time.

It also implies that optimization is a continuous process and that through this process, you can sustainably build GMV by remaining adaptive to changes in visitor behavior and, as a result, always win.