It is better to have tested and lost than never to have tested at all.

Let’s face it: when the most intuitive e-commerce optimization tests fail, it can be a knife in the heart. “What do you MEAN people don’t like that?” I’ve heard the crestfallen voices of designers say. “Look how nice it looks!”

Don’t worry – you’re not alone. I’ll share an example that shows how even the most dismal results from the most beloved tests can still be beneficial.

A client of ours wanted their header/navigation to remain fixed on the page when users were on desktop. The hypothesis was that browsing on the site would be made easier when the navigation followed users scrolling up and down the page. This concept had proved positive when tested on mobile, so to them it was a no-brainer to hardcode the change on desktop as well.
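For context, the change itself is tiny. A sticky desktop header usually amounts to a few lines of CSS plus a little script to pin it once the user starts scrolling – something along the lines of this minimal sketch (the selector and class names here are illustrative, not the client's actual markup):

```typescript
// Minimal sketch of a scroll-activated sticky header.
// ".site-header" and "is-sticky" are illustrative names, not the client's actual markup.
const header = document.querySelector<HTMLElement>('.site-header');
const headerHeight = header?.offsetHeight ?? 0;

function updateHeader(): void {
  if (!header) return;
  // Pin the header once the user has scrolled past its natural position;
  // the "is-sticky" class would hold the position: fixed / transition CSS.
  header.classList.toggle('is-sticky', window.scrollY > headerHeight);
}

window.addEventListener('scroll', updateHeader, { passive: true });
```

The engineering lift is small, which is exactly why it felt like a no-brainer to skip straight to hardcoding it.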

Our antennae went up – after a short bit of persuasion, they agreed to test it first to ensure that their customers would enjoy the concept as much as they did.

After a week of painstakingly ensuring that the transition was smooth, the spacing was adequate, and the functionality was intact, we pushed the test live, expecting nothing but positive results.

Based on the title of this blog post, I’m sure you can guess what happened next:

Image of a graph showing control beating out the treatment in terms of RPV

The blue line is the treatment’s RPV performance, and the orange line is the control’s RPV performance

It was negative! How was it negative? The transition was sooo smooth. Oh, I know – returning users weren’t expecting a change in site functionality, right? If we check the data for new vs returning, I’m sure we’ll see that new visitors had a positive interaction…

Image of a graph showing a negative impact on RPV for both New and Returning users

This chart shows the performance of the test, separated by New vs Returning users. The blue bars show the relative lift against the control (the orange dashed line)

Yikes. New visitors reacted even more negatively than returning visitors. If this change had been hardcoded on the site, it would have had a negative impact on conversion rate – no ifs, ands, or buts about it.
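(A quick aside for anyone newer to these charts: RPV is simply revenue divided by visitors, and the relative lift is the treatment’s RPV measured against the control’s. The sketch below shows the arithmetic; the numbers in it are made up for illustration and are not the client’s data.)

```typescript
// RPV (revenue per visitor) and relative lift vs. control.
interface VariantStats {
  revenue: number;   // total revenue attributed to the variant
  visitors: number;  // unique visitors bucketed into the variant
}

const rpv = (v: VariantStats): number => v.revenue / v.visitors;

// Relative lift of the treatment's RPV over the control's, e.g. -0.04 = -4%.
const relativeLift = (treatment: VariantStats, control: VariantStats): number =>
  (rpv(treatment) - rpv(control)) / rpv(control);

// Illustrative numbers only -- not the client's data.
const control: VariantStats = { revenue: 52_000, visitors: 10_000 };
const treatment: VariantStats = { revenue: 49_500, visitors: 10_000 };

console.log(`Relative lift: ${(relativeLift(treatment, control) * 100).toFixed(1)}%`); // -4.8%
```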

The eternal question strikes again: what do we do now?

We’ve all been there. Here’s how to deal with such a loss in three simple steps:

1. Find the silver lining

Every testing “loss” is always a gain.

It’s going to be okay. In fact, you saved your company a hefty sum of money. Had you not tested, the company may simply have implemented the amazing idea the e-commerce team came up with, which would have resulted in a scary revenue loss. What’s even scarier is that most companies don’t even notice when a hardcoded change is hurting their site, because concurrent swings in revenue from other factors (traffic sources, demand, etc.) mask the damage.
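To make that masking effect concrete, here is a rough, entirely made-up illustration of how a hardcoded change that quietly shaves a few percent off RPV can hide inside a month where traffic happens to grow:

```typescript
// Illustration only: how concurrent traffic growth can mask a harmful hardcoded change.
const baselineVisitors = 100_000;
const baselineRpv = 5.0;              // $ per visitor before the change
const changeImpactOnRpv = -0.03;      // hypothetical -3% hit from the hardcoded change
const trafficGrowth = 0.10;           // +10% visitors from seasonality/marketing

const lastMonthRevenue = baselineVisitors * baselineRpv;
const thisMonthRevenue =
  baselineVisitors * (1 + trafficGrowth) * baselineRpv * (1 + changeImpactOnRpv);

console.log(lastMonthRevenue); // 500000
console.log(thisMonthRevenue); // 533500 (+6.7%), even though the change itself is costing money
```

The top-line number still goes up, so nobody goes looking for the change that is quietly dragging it down.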

If you recall, this particular client was gung-ho to simply implement this change on the site, but luckily saved money by testing first. Even the most intuitive, casual, not-worth-mentioning idea should be tested.

Congratulations! Though the results were negative, you made the amazing decision to test in the first place. Now, let’s look past the overall results and get into the true benefits of a negative test.

2. Find the gold lining

Data-integrated test results produce endless insights.

We covered the silver lining in step one: avoiding the inevitable revenue loss that comes with hardcoding a losing idea. Now we’ve reached the gold lining. No matter what the results of a test are – positive, negative, neutral, even upside down – every single test that is run can produce fantastic, gold-trimmed insights when you have data-integrated test results.

Consider the sticky header test. Negative overall results do not preclude you from discovering intriguing insights when you dive into the analytics. For the sake of time, we’ll only look at one of the many, many interesting facets of this test. Let’s head over to the world of behavioral analytics in the form of page path data.

Below you will find charts showing the paths users take to and from the cart page, for the treatment and the control respectively.

Treatment Cart Page Path:

This chart shows the page class steps leading to and from the cart page. Line thickness is relative to the number of visitors, and color is relative to the performance of those visitors.

Control Cart Page Path:

This chart shows the page class steps leading to and from the cart page. Line thickness is relative to the number of visitors, and color is relative to the performance of those visitors.
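If you’re curious what sits behind charts like these, the roll-up is simple even when your tool does it for you: of all the sessions that reached the cart page, what share arrived there directly from each page class? Here is a minimal sketch of that calculation (the session shape and page-class names are assumptions on my part, not any particular tool’s export format):

```typescript
// Share of cart-page arrivals, broken out by the page class visited immediately before the cart.
// The session shape and page-class names are assumptions, not any particular tool's export format.
type PageClass = 'home' | 'category' | 'pdp' | 'cart' | 'search' | 'other';

function cartEntrySources(sessions: PageClass[][]): Record<string, number> {
  const counts: Record<string, number> = {};
  let totalArrivals = 0;

  for (const path of sessions) {
    for (let i = 1; i < path.length; i++) {
      // Count each step that lands on the cart page, keyed by the page class it came from.
      if (path[i] === 'cart' && path[i - 1] !== 'cart') {
        counts[path[i - 1]] = (counts[path[i - 1]] ?? 0) + 1;
        totalArrivals++;
      }
    }
  }

  // Convert raw counts into percentages of all cart arrivals.
  for (const source of Object.keys(counts)) {
    counts[source] = (counts[source] / totalArrivals) * 100;
  }
  return counts;
}

// Run once per variant, e.g. cartEntrySources(treatmentSessions)['home']
// vs. cartEntrySources(controlSessions)['home'].
```

Run once for the treatment sessions and once for the control sessions, this is the kind of comparison the analysis below is built on.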

Focusing on the left side of the charts, what I found most interesting was the rate at which users moved from each page class to the cart page. What caught my attention was not the PDP or category page classes (where users can add products to the cart and easily proceed to the cart page), but the increase in the percentage of users who came from “home”. In the control, 10.5% of users who arrived at the cart came from the home page. With the sticky header in place, that number increased to 12%. Why is that?

Screenshot of what the fixed navigation looks like on the home page

We can reason our way to a likely explanation. Here’s what we know: visitors navigated from the home page to the cart page at a higher rate when the header was sticky. It’s possible that users were more likely to view the cart page when the cart icon was always within reach in the header that followed them down the page.

There are a couple of trails to follow here: maybe the header should be sticky only after a user adds to the cart, or perhaps only the cart portion of the header should be sticky. You can see how easy it is to come up with new hypotheses based on failed test results.
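To show how small that first follow-up idea really is, here’s a hedged sketch of a “sticky only after add-to-cart” variant, building on the same illustrative class names as before (the add-to-cart hook is an assumption about how your cart state is exposed):

```typescript
// Sketch of the follow-up hypothesis: only pin the header once the cart has items in it.
// ".site-header", "is-sticky", and the add-to-cart hook are illustrative assumptions.
let cartItemCount = 0; // in a real build this would come from your cart state or data layer

function updateStickyState(): void {
  const header = document.querySelector<HTMLElement>('.site-header');
  if (!header) return;
  const shouldStick = cartItemCount > 0 && window.scrollY > header.offsetHeight;
  header.classList.toggle('is-sticky', shouldStick);
}

function onAddToCart(): void {
  cartItemCount++;
  updateStickyState();
}

window.addEventListener('scroll', updateStickyState, { passive: true });
```

Of course, this variant is itself just another hypothesis – which brings us to the final step.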

3. Try, try again

Testing can become an optimization loop.

To recap: the test was negative, but we learned more about user behavior and found the potential for positivity. Now what? Do we simply hardcode an alternative version of the test based on the insights that were gleaned? No!

If there’s one thing we know, it’s that no matter how sound the data supporting a hypothesis might seem, it is always better to test first (I’ve talked about this more extensively in a previous blog post). So, with this new information gathered through page path analysis, we can form new hypotheses and test alternative iterations of the original concept.

You can see how this is a loop of improvement – there is always something to learn from testing that you wouldn’t get from straight analytics. These insights lead to better, more customer-specific tests.

And if that doesn’t work…

With a testing tool where the data analytics are integrated, you would be hard-pressed not to find something valuable. However, if your optimization tool doesn’t have these capabilities, or if you looked for insights but came up with nothing, that’s okay too. You made a great decision to test, but this could be a dead-end direction for your site, and now might be the time to let the project go.

It can be extremely difficult to move on from a test you loved. In hard times such as these, I think of this quote for inspiration:

"If you love something, let it go. 
If you don’t love something, definitely let it go. 
Basically, just drop everything, who cares."
                              - B.J. Novak

Okay, don’t drop everything, but sometimes it’s a better investment of time to set that stone back down and turn it over another day. You can now move on to a different part of the site that will require less effort to test.

I’ll leave you with some simple “Do’s and Don’ts” to help ease those negative-test-results blues.

Don’t: Assume that results are black and white and freak out about a “negative” test.

Do: Pat yourself on the back for making the decision to test in the first place!

Don’t: Abandon a testing concept before searching for deeper meaning.

Do: Dive deeper into the analytics and uncover behavioral data and new testing hypotheses.

Don’t: Blindly hardcode a new version of the test that you’re “pretty sure” will work.

Do: Continue testing! Customer preferences are constantly changing, and your site should adapt with them.

It’s time to towel off those tears. We live to test another day!