The rapid evolution of the Internet and online capabilities heralded a new era of digital analytics. The question is: which metrics truly reveal how your site’s performance impacts your bottom line, and which are just nice to have? In order to demonstrate true lasting value, e-Commerce metrics need to “show me the money.”
- As technology evolved, so has the depth and quality of online metrics.
- Conversion rate and average order value are both insufficient performance indicators when employed independently.
- New technology enables e-Commerce optimization based on metrics that are more closely aligned with revenue.
In the beginning, there were hits
The year was 1993: the modern web was a rapidly-expanding universe fighting through its adolescent years, and curious engineers, marketers, and home-brew webmasters alike sat with bated breath as server log files recorded HTML requests from client-side browsers. These requests, which we now call hits, were one of the first metrics used to gauge the performance of a website.
That same year, WebTrends (then named e.g. Software) was founded, and with it came the birth of commercial web analytics as a product category.
Measuring website performance in terms of the number of hits over time was quaint – and it’s still cited anecdotally today – but as the web quickly evolved to facilitate complex experiences, it became clear that this single metric couldn’t remain the sole indicator of success.
The dawn of a new discipline: testing & optimization
When the first online business analysts were toying with eyeballs, their brick-and-mortar colleagues were using well-established performance metrics such as inventory turnover ratio, sales / sq. ft., and average spend per footfall, among others. There’s no doubting it: web analytics was comparatively weak and needed to catch up.
Three years later, a host of popular premium web analytics companies were founded, and along with them came one metric that really stuck: conversion rate.
Part of the reason conversion rate is so popular today is that it’s simple to understand and calculate: count the number of goals achieved on a page and divide by the number of visits to that page over a certain period of time. The resulting value gives you a rough idea of how the page performed against that goal during the time window.
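The calculation described above can be sketched in a few lines; the figures here are hypothetical, just to show the arithmetic:

```python
def conversion_rate(goals: int, visits: int) -> float:
    """Fraction of visits that achieved the goal in a given time window."""
    if visits == 0:
        return 0.0
    return goals / visits

# Hypothetical example: 40 purchases out of 2,000 visits in one week.
rate = conversion_rate(40, 2_000)
print(f"{rate:.1%}")  # 2.0%
```

Note that both counts must come from the same page and the same time window, or the ratio is meaningless.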
JavaScript-based page tagging facilitated the capture of more complex values, which together represented a tremendous leap forward in the capabilities of our (still young) web analytics tools. This technology meant that companies could capture average order value (AOV), bounce rate, clicks, and page views in a single session, in addition to a host of other values of mixed importance.
In 2005 came the launch of Google Analytics, a free tool that consolidated many of these new metrics into a platform that captivated the marketplace and gained significant mindshare.
For the first time, business analysts had a number of metrics and tools with which they could begin to make informed business decisions about how to improve site-wide performance.
On our early obsession with conversion rate
As technology evolved, so did the depth and quality of online metrics. But one metric in particular, conversion rate, has truly captured the hearts and minds of digital marketing professionals. Its popularity is partly due to two major attributes:
- It’s easy to calculate.
- It seems straightforward enough to establish a (superficially) causal relationship.
Unfortunately, that’s where the benefits of this metric end.
Left alone, conversion rate has little depth
Conversion rate is a shallow metric – it can’t be used alone to establish a causal relationship between effects (website performance) and page modifications on a per-element basis. In other words, the same conversion rate used to describe the number of visitors to a sub-category page who move on to a product page cannot tell you much about the effectiveness of a CTA, banner, offer, widget, or anything else as it relates to all other elements on that particular page. Jumping to conclusions about page performance using conversion rate alone is therefore a weak inference.
It also supports a common logical fallacy in digital marketing
Every day, online marketers run A/B tests, find a winner, implement it, and test a new “challenger” against the running “champ”. Most often, the goal metric of choice is conversion rate; this is perhaps the metric’s biggest crime-by-association. This type of iterative testing leans on a transitive inference:
If A < B
and B < C
then A < C
Transitivity only holds if the conditions stay constant, and in practice they don’t: each test runs in a different window, with a different traffic mix, seasonality, and promotional context. The baseline against which B beat A is not the baseline against which C beat B. Version C may well convert better than version A, but because the baseline has changed, there’s no way of telling whether you’re genuinely improving on version B or simply moving backwards.
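A small illustration of why pairwise wins need not chain. The per-segment rates below are entirely hypothetical; the point is that the measured conversion rate of each version depends on the traffic mix during its test window:

```python
# Hypothetical per-segment conversion rates for three page versions.
rates = {
    "A": {"organic": 0.040, "paid": 0.010},
    "B": {"organic": 0.030, "paid": 0.025},
    "C": {"organic": 0.020, "paid": 0.030},
}

def observed_rate(version: str, organic_share: float) -> float:
    """Blend segment rates by the traffic mix of a given test window."""
    r = rates[version]
    return organic_share * r["organic"] + (1 - organic_share) * r["paid"]

# Windows with 30% organic traffic: B beats A, then C beats B.
print(observed_rate("A", 0.3), observed_rate("B", 0.3))  # 0.019 < 0.0265
print(observed_rate("B", 0.3), observed_rate("C", 0.3))  # 0.0265 < 0.027
# But if a later window is 90% organic, A beats C.
print(observed_rate("A", 0.9), observed_rate("C", 0.9))  # 0.037 > 0.021
```

Two clean “wins” in a row still leave you with a version that loses to the original once the traffic mix shifts.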
Conversion rate suffers from “uplift confusion”
Harry Brignull puts it best, so I’ll paraphrase: online testing vendors love to speak about “uplift” – the percent increase from one number to another. For example, if your conversion rate on a particular page is 0.1% (or 1 in 1,000 visitors), and you improve that number to 0.3% (or 3 in 1,000), you’ve achieved a conversion rate lift of 200%, which sounds far more impressive than “+2 in 1,000”.
Show me the money
Some e-Commerce analysts focus on metrics with stronger connections to their bottom line. By capturing average order value (AOV), e-Commerce managers and analysts alike can create goals and optimization programs that are potentially cashflow positive.
The sudden popularity of AOV optimization has led to the growth of product recommendation engines and services. Similar to conversion rate, the concept is relatively straightforward: upsell existing customers and you’ll generate more revenue per conversion. However, this metric also suffers from some significant challenges.
Is AOV a reliable predictor of revenue growth?
Close examination of these metrics points to one common trend: left alone, they are insufficient predictors of real revenue growth. Like conversion rate, a lift in average order value can only improve revenue performance under certain circumstances. To illustrate this point, check out the chart below.
This report highlights the potential for an inverse relationship between average order value and conversion rate wherein optimization efforts cause a lift in AOV but a decrease in conversions and an overall revenue loss for the client.
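Hypothetical numbers make the trade-off concrete. Revenue is visits × conversion rate × AOV, so a lift in AOV can be swamped by a drop in conversions:

```python
visits = 10_000  # assume the same traffic in both periods

before = {"conversion_rate": 0.020, "aov": 80.0}   # baseline
after  = {"conversion_rate": 0.014, "aov": 100.0}  # post-"optimization"

def revenue(metrics: dict) -> float:
    """Total revenue for the period: visits x conversion rate x AOV."""
    return visits * metrics["conversion_rate"] * metrics["aov"]

print(revenue(before))  # 16000.0
print(revenue(after))   # 14000.0 -- AOV rose 25%, yet revenue fell
```

An optimization program judged on AOV alone would call this a win; judged on revenue, it’s a loss.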
Overcoming an analytics impasse
With many analysts optimizing their e-Commerce sites for both conversion rate and average order value independently, it seems we’re deadlocked in an analytics impasse with no potential for real sustainable revenue growth.
To solve this dilemma, some advanced optimization vendors are capturing both average order value and conversion data to create revenue per visit (RPV). This relatively new metric is less susceptible than AOV to the skew caused by occasional large orders. Better still, RPV’s direct tie to order value means that optimization campaigns can be more closely attributed to real revenue growth in dollars than, say, those using conversion rate.
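RPV folds both metrics into one number: it is total revenue divided by total visits, which is equivalent to AOV × conversion rate. A minimal sketch with hypothetical figures:

```python
def revenue_per_visit(total_revenue: float, visits: int) -> float:
    """RPV = total revenue / total visits; equivalently AOV x conversion rate."""
    return total_revenue / visits if visits else 0.0

# Hypothetical: 200 orders averaging $75 across 10,000 visits.
rpv = revenue_per_visit(200 * 75.0, 10_000)
print(f"${rpv:.2f} per visit")  # $1.50 per visit

# Same result via AOV x conversion rate: 75.0 * (200 / 10_000) = 1.50
```

Because RPV moves only when dollars per visit actually move, a change that trades conversions for order size (or vice versa) without growing revenue shows up as flat.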
Where do we go from here?
The e-Commerce space is still evolving at a rapid pace. Optimization technology vendors and analytics solutions are all locked in an arms race to determine who can build the most innovative revenue growth strategies. Some are even acquiring patents for their methods.
At the moment, one of the best metrics for general e-Commerce website optimization is revenue per visit, due to its strong positive relationship with both average order value and conversion rate. However, collecting these values requires intimate integration with clients’ e-Commerce sites, which means that metrics such as RPV (or in some cases AOV) are typically only available with enterprise-level analytics and optimization tools.
Companies measuring RPV for general e-Commerce optimization campaigns are at a clear advantage: RPV is more closely aligned with real dollars, while those relying on legacy metrics such as conversion rate or bounce rate run the risk of implementing poor performers.