Market changes and new technologies are making many well-established testing practices obsolete, if not outright wrong. Here is our attempt to outline the big ones and to help you avoid them.

Why should you care about 15 shades of A/B or Multivariate testing?

The eCommerce playbook is broken. We live in a new industrial age that Forrester Research calls ‘The Age of the Customer.’

In this new era the only sustainable competitive advantage is knowledge of and engagement with the customer.

A/B or Multivariate testing solutions were invented to enable experimentation with copy, layout and design changes in search of treatments that will increase conversion rate.

Now testing solutions must go beyond just conversion lift and provide more effective ways to enhance customer engagement and grow customer knowledge.

This sounds like a logical and straightforward extension of the original testing mission.

Unfortunately the legacy of the original A/B or Multivariate testing mindset makes it poorly aligned with the demands of the new industrial age.


FREE A/B testing – Your HiConversion account is always free + 1 live A/B treatment at a time*

* $9.95/month for up to 10 live A/B treatments at a time – see pricing guide

#1 Testing is optional

Positive anything is better than negative nothing.
– Elbert Hubbard –

Myth: A/B or Multivariate testing is great. Unfortunately, we have to focus on other top priorities.

Story: More than 90% of all eCommerce brands are still not using any testing solutions. Furthermore, 90% of those who have a testing solution do not do much with it.

Here are some of the common ‘excuses’:

It’s obvious: It happens all the time. A CEO or an eCommerce executive is convinced that a particular site change will increase conversions. A site change is made without ever considering how to determine if it actually delivers the expected outcomes. The odds are against you. In our empirical experience, 9 out of 10 new treatments are worse than the control.

Usability study: There are agencies specializing in user experience best practices. They will review your site to identify usability flaws and recommend improvements. Unfortunately, each brand is a unique situation, and there are no guarantees that such recommendations will actually work. Instead of using usability studies as a great source of test ideas, brands just blindly implement the recommendations. They never know if they increased the conversion rate or not.

Site redesign: Every 2-3 years brands go through major site redesign cycles. Testing initiatives are commonly delayed on the assumption that the next site redesign will address those issues, unmindful that in the meantime the brand is failing to reach its full revenue potential and that 90% of the time the new site will underperform the old one. The market leaders do exactly the opposite: they avoid risky and painful site redesigns while continually testing and innovating the customer experience.

There are many other excuses for not testing, including not having people or resources, the site not having enough traffic, or an expectation that a product recommendation or a social gizmo is going to increase conversion rates.

Reality: A/B or Multivariate testing is the top priority.  Doing nothing to effectively engage website visitors or to grow customer knowledge is a gamble that can put you out of business.

An effective testing program will make a dramatic impact on key aspects of your business viability:

Sustainability: Growing revenue through continual increases in marketing spend is no longer sustainable. Brands are forced to pay ever-increasing prices to compete for a finite pool of qualified visitors. To sustain their marketing efforts, brands must find ways to increase the ROI of their marketing dollars. That’s why the most successful companies allocate a small portion of their advertising spend to A/B and Multivariate testing.

Competitiveness: ‘Build it and they will come’ and ‘buy online growth’ strategies are being replaced by new strategies led by professionals who engage online visitors and use data to make decisions. A/B and Multivariate testing is the ‘tip of the spear,’ enabling the introduction of new treatments and the measurement of visitor reactions.

Innovation: Testing solutions are removing IT dependency. This is empowering business and marketing teams to continually experiment, learn, and improve key aspects of the business.

#2 A/B Testing is a silver bullet

My dad believed in two things: That Greeks should educate non-Greeks about being Greek and every ailment from psoriasis to poison ivy can be cured with Windex.
– My Big Fat Greek Wedding –

Myth: A/B or Multivariate testing is all we need to do to effectively compete in The Age of the Customer.

Story: On the opposite end of the spectrum from doing nothing is to expect too much from A/B or Multivariate testing.

An expectation that testing is a silver bullet that can auto-magically solve all online customer experience problems will most likely create a disappointment that will kill a testing initiative.

There are two major challenges:

Flaws In Business Model

Sometimes the cause of a low conversion rate runs much deeper than the onsite customer experience, stemming from underlying flaws in your business.

If you have product-market fit for what you’re selling, then testing and customer experience optimization in general can help. If not, you’re better off tweaking your product or improving your business model before worrying too much about how to increase conversion rates.

Technical Capabilities

What every business needs is an effective customer experience optimization solution.

The challenge is that pundits and end users often treat A/B or Multivariate testing and customer experience optimization as one and the same.

It is important to recognize that testing will help a lot but that this is not the only solution required in the quest to achieve better customer experience.

Reality: A/B or Multivariate testing is just one cog in the much larger customer experience optimization wheel.


Customer experience optimization is a very complex technical problem whose solution requires heavy-duty technical capabilities:

Analytics: To repurpose a popular political slogan: it’s the data, stupid! The ability to detect weak conversion spots and to prioritize which areas of the site should be tested first is essential for effective management of the testing process. Additionally, the ability to uncover how different audiences reacted to test treatments is the source of customer knowledge and of ideas for continuous improvements.

Algorithms: Testing is a risky and potentially slow process. Use of artificial intelligence algorithms minimizes the risk and increases the testing speed through the ability to detect and adapt to visitor preferences in real time.

Whole Experience Optimization: As soon as you try to do more than one test on your site you have to deal with interference between tests. That’s why you need a solution that can optimize across multiple tests and customer experience optimization applications to produce directly measurable end-to-end results.

Total Solution

The following is an attempt to create a map of a total customer experience optimization solution:


4D Analytics: Every brand is using at least one web analytics solution. Common challenges include incompleteness of implementation and inability to interpret data. That’s why we are introducing our 4D web analytics concept to create awareness about the need to know what is going on in every corner of your online store:

  1. Visitors: a rich set of attributes that enables you to understand your audience and to decouple inefficiencies of demand generation from a poor onsite experience.
  2. Journey: insights into visitors’ consideration paths, with the ability to detect ‘kinks’ in the sales funnel.
  3. Content: tracking visitor engagements with the ability to measure how different page elements are impacting the overall results.
  4. Offering: linking engagements and audiences to product sales to measure product market fit.

By having 4D data organization you will be able to detect weak links in customer experience and effectively prioritize and design testing campaigns.

Virtualization: This is the ability to change website content, style, or layout without any dependency on your IT department. This is a tactical tool that can be used by eCommerce teams to manage content or to deal with challenges created by different form factors or browsers. In addition, UX teams can continually innovate look and feel as a prelude to more rigorous testing initiatives.

Personalization: There are many forms of personalization, including targeting and product recommendations. Irrespective of the form you can view personalization as a stepping stone toward discovering persuadable audiences: those who react positively to new experiences.

Testing: Most of the time brands are testing to increase conversion rates for a general audience. The highest value is achieved when testing is done for persuadable audiences: maximizing the potential of those who have a propensity to buy.

Optimization: This is the most complex part where you need an advanced technology that can unify disjointed virtualization, personalization, or testing efforts and produce optimum end-to-end results. The alternative to unification is the common chaotic situation where ‘the left hand does not know what the right hand is doing.’

CX Analytics: Last but not least is the ability to learn how visitors are reacting to what you are doing to improve the buying experiences. This analytics of interaction is the source of customer knowledge and a foundation of continuous improvement.

On the practical side doing something to improve conversion rates and gain insights into customer preferences is infinitely better than doing nothing.

Starting with A/B or Multivariate testing as a first step is perfectly OK. However, this should not be the only thing you will do to grow revenue and customer knowledge.

#3 A/B Testing is a business tactic

It’s the repetition of affirmations that leads to belief. And once that belief becomes a deep conviction, things begin to happen.
– Muhammad Ali –

Myth: A/B or Multivariate testing is a tool for quick improvements in the conversion rate.

Story: The majority of brands start their testing expecting to immediately ‘hit the ball out of the park.’ Such high upfront expectations will most likely produce quick disappointments.

Most eCommerce sites are fairly well designed and constructed. It’s not easy to get additional lift.

That’s why most initial tests fail.

A misalignment in expectations creates a domino effect. Money gets wasted, people or agencies get fired, and brands give up on testing altogether.

Reality: A/B or Multivariate testing is a business strategy.

Real money and a real competitive advantage is gained through a commitment to continually learn and improve.

Instead of being fixated on the conversion lift the focus should be on gaining customer knowledge. Every test result is valuable and it provides teachable feedback.

Long View: It doesn’t matter if initial tests fail to provide a conversion lift. What matters is that the learning from those results helps you design follow-up tests. That way you will put your brand on the path of sustainable long-term revenue growth.

Collaboration: A testing culture is a culture of collaboration. It stimulates input from different corners of your corporate universe. No idea is a bad idea until proven otherwise.

Innovation: A/B or Multivariate testing stimulates creativity. The process of ongoing experimentation and learning leads to continuous improvements of customer experience, marketing, and business strategy.


Let’s imagine that you were running a small multivariate test and that you were able to attribute the overall lift to individual variables:


In this example, the first variable marked as ‘B-CTA’ has a negative impact on the overall revenue per visit.

One might conclude that this was a bad test idea and throw it away. The right approach is to think about what this is telling you about customer preferences and how to use this as a pointer to introduce a better test idea.

#4 Trusting A/B testing best practices

My advice is to never listen to any advice, not even this one.
– Anonymous –

Myth: When designing A/B or Multivariate tests we should rely on the best practices. 

Story: Why reinvent the wheel? Let’s design tests based on widely publicized best testing practices.

The issue is not if best practices are good or bad. The real issue is which best practices are good for your brand.

By blindly applying best practices you are unlikely to produce significant results.

Poor results will have a chilling effect on the testing team’s morale. If best practices did not work, how can ‘amateurs’ do better on their own? …And the testing initiative dies.

Reality: Successful A/B or Multivariate tests are based on a mix of best practices and brand specific knowledge.

Best practices should be used only as the starting point of the testing journey, not as its final destination.

When evaluating best testing practices, keep the following in mind:


By now you must be familiar with the Internet term: fake news.

How about an equally damaging byproduct: fake expert advice.

Inbound marketing has created a ‘gold rush’ for new content creation.

Make no mistake: the main criterion for publishing new content is not the level of expertise but how keyword-friendly the content is.

Who are the content creators? Most often they are recent college graduates or journalists.
Key differentiating feature: ability to write 1,800 words per day.

How do they do it? By writing ‘new’ content based on thoughts found in other articles. All of this creates a massive proliferation of worthless duplicate ideas.

As much as we respect the hard work and talent of many of our colleagues we believe that the pool of viable best testing practices is still very limited.


Every brand is a unique situation. The best practice that works for one site may not work at all for another.

That’s why the most successful tests are a result of collaboration between those who are intimate with the brand characteristics and those that are experts in the best A/B or Multivariate testing practices.

#5 Trusting your gut feelings

Never ignore a gut feeling, but never believe that it’s enough.
– Robert Heller –

Myth: When designing A/B or Multivariate tests you should trust your gut feelings. 

Story: Even before the start of a testing initiative chances are that an eCommerce site already reflects the best ideas and knowledge about web visitors and their preferences. Whatever could have been done through intuitive means is already done.

Reality: The biggest testing breakthroughs are achieved through experimentation with counter intuitive test ideas.

Therefore, whenever possible one should incorporate into tests treatments whose nature is opposite to common beliefs and best practices. That way you will avoid the pitfalls below:

Inertia: Brands are testing the same kinds of treatments expecting different results. Instead, they should step outside of the ‘box’ and experiment with different aspects of the customer experience and contrarian test ideas.

Fears: A/B split and traditional Multivariate testing are risky. A small dip in conversion rate can cost a lot of money, which creates fear of introducing new ideas. To minimize testing risks, brands should use adaptive testing solutions that minimize, if not completely eliminate, the risk of testing bad ideas.

‘Sacred Cows’: Style and brand identity elements are usually off limits during tests. Nevertheless, if those elements are suspect, they should be tested. Even if these elements can’t be changed, it is good practice to know the cost or value of the branded elements. This knowledge can be leveraged with UX and branding teams to evangelize the use of testing solutions during periodic brand or UX makeovers.

#6 A/B Or Multivariate Testing is for big boys

It’s not the size of the dog in the fight, it’s the size of the fight in the dog.
– Mark Twain –

Myth: A/B or Multivariate testing is expensive and only appropriate for big brands.

Story: There is a perception that you need to be a tech-savvy marketer with a large budget and lots of traffic to do A/B or Multivariate testing.

High technical complexity: A large number of marketers believe that A/B or Multivariate testing is a complex undertaking that requires custom programming, understanding of algorithms, and a grasp of statistics.

High cost: The perception is that testing projects require significant budget for software licensing and agency services.

High traffic: Marketers feel that reaching statistically significant results requires a volume of traffic their site does not have.

Reality: A/B or Multivariate testing can be done on a low budget, by non-technical marketers, and at companies of almost any size.

Low technical complexity: Modern testing tools are designed for non-technical users. They offer visual point-and-click tools so that non-technical business users can implement the tests without dependency on their IT department.

The image below shows an example of HiConversion’s visual editor:


Low cost: A/B testing doesn’t have to be expensive. If you’re operating on a near-zero-dollar budget, there are free split testing tools available like Google Analytics’ Content Experiments or HiConversion’s testing module. Though with Google’s tool you will have to be a bit more tech-savvy to implement it.

Paid tools have a higher upfront cost but are much less technically challenging and more feature-rich. The increased cost is easily justified by the reduction in test overhead and the value of additional features and insights.

Low traffic: In the case of a simple A/B test, you don’t need a ton of visitors to test results — you just need enough to reach statistical significance (the point at which you have at least 95% confidence in the results).
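To put "enough traffic" in rough numbers, here is a back-of-the-envelope sample size sketch using the standard two-proportion normal approximation. The baseline rate and target lifts below are hypothetical examples, not figures from this article:

```python
from math import ceil, sqrt

def sample_size_per_variant(base_rate, rel_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative lift
    at ~95% confidence with ~80% power (two-proportion z-test)."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return ceil(n)

# Hypothetical example: 2% baseline conversion rate.
n_small_lift = sample_size_per_variant(0.02, 0.10)  # detect a 10% relative lift
n_big_lift = sample_size_per_variant(0.02, 0.50)    # detect a 50% relative lift
```

The smaller the lift you hope to detect, the more traffic you need: the 10% lift target requires tens of thousands of visitors per variant, while the 50% target needs only a few thousand.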

Adaptive testing tools further reduce traffic requirements. Instead of splitting visitors evenly between winning and losing treatments, they send disproportionately more traffic to winning treatments, accelerating the speed of testing.
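The adaptive idea can be sketched with a toy epsilon-greedy allocator. This is a simplified illustration with made-up conversion rates, not how any particular vendor's algorithm works:

```python
import random

def epsilon_greedy_test(true_rates, n_visitors=10000, epsilon=0.1, seed=42):
    """Allocate visitors across treatments, favoring the current leader.

    true_rates: hypothetical conversion rate per treatment (for simulation).
    Returns (conversions, visitors) per treatment.
    """
    rng = random.Random(seed)
    k = len(true_rates)
    conversions = [0] * k
    visitors = [0] * k
    for _ in range(n_visitors):
        if rng.random() < epsilon or 0 in visitors:
            arm = rng.randrange(k)  # explore: pick a random treatment
        else:
            # exploit: pick the treatment with the best observed rate
            arm = max(range(k), key=lambda i: conversions[i] / visitors[i])
        visitors[arm] += 1
        if rng.random() < true_rates[arm]:  # simulate the visitor's outcome
            conversions[arm] += 1
    return conversions, visitors

# Treatment B converts better in this simulation, so over time
# the allocator should route most traffic toward it.
conv, vis = epsilon_greedy_test([0.02, 0.03])
```

Real adaptive tools use more sophisticated schemes (e.g. Thompson sampling), but the principle is the same: losing treatments see fewer visitors, so less revenue is put at risk while the test runs.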

The bottom line is that testing pays for itself and then some.

Based on its directly measurable ROI almost every company should be able to afford a premium software license together with all professional services needed to establish and run a successful testing program.

#7 A/B Testing should be treated as a fixed business expense

He who seeks for gain, must be at some expense.
– Plautus –

Myth:  A/B or Multivariate testing expense should be treated as fixed cost of operation.

Story: A/B and Multivariate testing solutions are most commonly provided as SaaS solutions under some kind of software licensing contract. The procurement of such solutions requires budget and other purchase approvals.

This budgeting treatment limits the ability of marketing and business teams to easily acquire testing solutions and services.

Reality: Testing should be treated as a variable cost of sales and marketing and not as a fixed cost of operation.

For that to work one needs the following:

Pay-as-you-go Pricing: As an alternative to a fixed cost model, companies can now license testing solutions on an as-needed basis. Similar to pay-per-click advertising, the cost of the testing solution is proportional to the number of web visitors.

Classification as Sales or Marketing Expense: By using a pay-as-you-go pricing model companies can treat their testing cost as a variable cost of sales and marketing. Instead of spending 100% of the existing budget on advertisement and promotions companies can allocate a small percentage of that budget to increase conversion rates, grow revenues, and increase the overall marketing ROI.

A variable cost classification will help you save your testing programs from budget cuts that are driven by a desire to reduce fixed cost of operation.

#8 Test, baby, test

By failing to prepare, you are preparing to fail.
– Benjamin Franklin –

Myth:  Test everything and anything possible.

Story: Driven by good intentions, impatient marketers tend to spread testing activities across the entire eCommerce site. They run many A/B or Multivariate tests at the same time, testing anything and everything that comes to mind. They work very hard but do not get the significant lift they seek. They then get disappointed and just quit testing.

Reality: The most successful brands are disciplined and laser focused on testing the high impact areas of the site.

To prioritize they leverage advanced web analytics solutions capable of detecting weak links in the sales conversion funnel.

The key question is: what should you test, and in which order?

The answer to this question requires establishment of a testing methodology:

Priorities: There are too many site elements that can impact customer experience and conversions. Testing experts start by using web analytics solutions and actionable data to identify high impact weak links.

For example, the funnel analysis below shows that a brand has a kink at the window-shopper level (visitors who view a product detail page). As a result, one would prioritize testing of the product detail page template.
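As a minimal sketch of this prioritization step, with entirely hypothetical stage counts, the weakest link in a funnel can be found by comparing step-to-step pass-through rates:

```python
# Hypothetical funnel counts: visitors reaching each stage.
funnel = [
    ("All visitors",       50000),
    ("Product page views", 20000),  # the 'window shoppers'
    ("Add to cart",         3000),
    ("Checkout started",    2100),
    ("Orders",              1680),
]

# Pass-through rate between consecutive stages; the lowest is the 'kink'.
rates = []
for (name_a, n_a), (name_b, n_b) in zip(funnel, funnel[1:]):
    rates.append((name_b, n_b / n_a))

kink = min(rates, key=lambda r: r[1])
# Here only 15% of product page viewers add to cart, so the product
# detail page template is the highest-priority area to test.
```

With these numbers the checkout steps convert reasonably well (70-80%), so testing them first would waste effort that belongs on the product page.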

Roadmap: After establishing priorities of areas and elements with high potential for improvement you should create a testing roadmap.

It can be as simple as creating a spreadsheet with an associated Gantt chart, as shown below:


Continuous Testing: A roadmap and its periodic updates will enable you to run your testing program in the most effective way. That way you will establish a healthy pipeline of tests that are at different stages of preparation, implementation, or live testing.

#9 Expecting big wins from big changes

It is better to take many small steps in the right direction than to make a great leap forward only to stumble backward.
– Chinese Proverb –

Myth: To be successful, A/B or Multivariate test treatments must include big changes to the site. In other words, if you try small changes you should expect small results.

Story: We call this the ‘Which Test Won’ syndrome. Vendors and agencies brag about tests that produced tremendous conversion lifts. This sets up false expectations about what to expect from your tests.

Those who are new to testing think that they also must aim high and achieve a big conversion lift. In their mind this is possible only if they test a big site change.

Unfortunately, big changes are always complicated. They require technical assistance, they create delays, and they are often risky. Worst of all, they never guarantee a lift.

It’s like playing roulette. Yes, there is a chance to win but most of the time you will lose.

Reality: The magnitude of changes introduced by a test treatment does not correlate with A/B or Multivariate testing results.

It is quite common to have seemingly invisible test changes produce a significant lift.

The most successful brands are focused on the quality of test ideas and continuous learning.

Technology bias

Use of different testing solutions creates different testing mindsets:

A/B Testing Mindset: We can only test a limited number of treatments. That’s why treatment ‘B’ (treatment ‘A’ being the control – no site change) will be one big change comprised of many element changes played as a single treatment. Once we get a big win we will then trim the edges and test smaller items.

This is a top-down approach. It makes sense only if you can quickly score a big win. If tests are failing, there is no way of knowing which changes within a single big treatment were positive or negative so that the next test treatment can be improved.

Multivariate Testing Mindset: We have the freedom to test many treatments at the same time. The system will automatically play permutations of the individual treatments in search of the top performing combinations.

Multivariate testing is a bottom-up approach. As the system plays with component changes, a winning pattern (a big change) slowly emerges. Knowledge about the performance of smaller changes provides feedback for continuous improvement.
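The full-factorial idea behind the multivariate mindset can be sketched in a few lines. The element names and treatments below are hypothetical examples:

```python
from itertools import product

# Hypothetical page elements under test, each with its own treatments
# ('control' means no change to that element).
elements = {
    "headline":  ["control", "benefit-led", "urgency"],
    "cta_color": ["control", "green"],
    "hero_img":  ["control", "lifestyle", "product-only"],
}

# A full-factorial multivariate test plays every combination of treatments.
combinations = list(product(*elements.values()))
print(len(combinations))  # 3 * 2 * 3 = 18 candidate experiences
```

Even three modest elements yield 18 distinct experiences, which is why multivariate systems attribute performance to individual element changes rather than to whole pages.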

#10 Greed is good

I am afraid that our eyes are bigger than our stomachs, and that we have more curiosity than understanding. We grasp at everything, but catch nothing except wind.
― Michel de Montaigne –

Myth:  Go big or go home. If tests are not producing big results we should not waste our time.

Story: Expecting that each test should be a winner, or even worse that each test should produce a double-digit lift, is completely self-defeating.

If a site is horrible, one can easily uncover something that produces a 50% lift. But after a while there will be no more low-hanging fruit.

As a matter of fact, most eCommerce sites have a decent UX. That’s why massive test lifts in eCommerce are rare.

Reality: A/B or Multivariate testing is about long term results.

Instead of focusing on a single test online businesses should consider the strategic value of A/B or Multivariate testing:

Sustainable Revenue Growth: It doesn’t matter if individual tests are producing 1%, 3%, 8%, or 11% lift. What matters is the commitment to continuous testing.

In the long run, many small increments will compound into a big long-term gain.
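The arithmetic of compounding, using the example lifts mentioned above, looks like this:

```python
# Small per-test lifts compound multiplicatively across a testing program.
lifts = [0.01, 0.03, 0.08, 0.11]  # the 1%, 3%, 8%, 11% lifts from the text

compound = 1.0
for lift in lifts:
    compound *= (1 + lift)

print(f"{compound - 1:.1%}")  # ~24.7% total lift, more than the 23% simple sum
```

Because each lift multiplies the already-improved baseline, a steady program of modest wins outpaces the occasional one-off big win.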

Customer Knowledge: Test treatments are like temperature probes that are measuring how visitors react to site changes.

Although this is considered a soft benefit of testing, knowledge of unique visitor preferences enables a data-driven business strategy and long-term success.

#11 A/B testing your way into success

Common sense is what tells us the earth is flat.
– Albert Einstein –

Myth: Improving conversions or customer experience is a simple problem. All you have to do is run a limited number of well-designed A/B tests.

Story: Intuitively we all know that eCommerce is a very complex system with many moving parts. And yet at the same time we also feel that we can easily solve our conversion problems with simple A/B tests.

Reality: A/B or Multivariate testing is an infinitely large problem.

To illustrate the size of the conversion rate and customer experience optimization problem let us visualize its core components.


UX / Functionality: A web page can be designed in an almost infinite number of ways. eCommerce requires interaction and functionality behind page elements which must align with visual page features. Every detail matters.

Visitor Behavior: Different audiences have different consideration criteria. They are entering your site at different entry points and navigating in an almost infinite number of ways.

Audiences: Each visitor can be segmented by hundreds of different criteria. Permutations of those attributes are creating an almost infinite number of different buying audiences.


Permutations among different aspects of the testing problem are creating an unimaginably large number of options. To illustrate the point let’s use a simple checkerboard below:


Every eCommerce site has many more variations of pages, content elements, and treatments than a checkerboard, whose total number of permutations of black and white fields is equal to:

2^64 = 18,446,744,073,709,551,616

Let’s now imagine that number was equal to kernels of grain.


The total amount of grain represented by the number of permutations of the checkerboard fields is larger than the total amount of grain ever farmed on planet Earth.

So, do not kid yourself. Trying 2-3 options in the sea of quintillions of possibilities is akin to trying to find a needle in a haystack of haystacks.
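The checkerboard number is easy to verify, and a quick calculation shows how vanishingly little of that space a handful of test options covers (the "3 options" figure below is just an illustration):

```python
# 64 binary cells give 2**64 possible board states.
states = 2 ** 64
print(states)  # 18446744073709551616

# Testing a few variants barely scratches that space.
tested = 3
fraction = tested / states
print(f"{fraction:.3e}")  # a fraction on the order of 10**-19
```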

That’s why real solutions require heavy-duty capabilities, like artificial intelligence and rich customer experience data.

#12 Statistically significant results do not lie

If your experiment needs statistics, you ought to have done a better experiment.
– Ernest Rutherford –

Myth:  By running our test we will learn exactly what works and what does not.

Story: It is in human nature to yearn for certainty in everything that we do. That’s why it is so comforting to run an A/B test, get statistically significant results, and become certain about them.

Reality: The real world is not exact. Everything around us behaves in random ways. All we can do is assign a degree of probability that what we observed was correct.

Here are several misconceptions that are a common cause of big testing mistakes:

Confidence level: When interpreting 95% confidence in test results, non-statisticians often think it means they will get at least 95% of the observed lift.

That’s wrong. The confidence level shows the likelihood that the test results were not just noise. In other words, a 95% confidence level only tells you the probability that you observed any positive result at all – whether that result is close to zero or several times larger than the observed conversion lift.
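A small worked example makes the distinction concrete. The visitor and conversion counts below are hypothetical; the point is that a result can be significant while the plausible lift spans a huge range:

```python
from math import sqrt

# Hypothetical test outcome: control vs. treatment conversions.
n_a, c_a = 20000, 400   # control:   2.00% conversion rate
n_b, c_b = 20000, 470   # treatment: 2.35% (observed relative lift ~17.5%)

p_a, p_b = c_a / n_a, c_b / n_b
diff = p_b - p_a

# 95% confidence interval for the absolute difference in conversion rates.
se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
low, high = diff - 1.96 * se, diff + 1.96 * se

significant = low > 0  # the interval excludes zero, so the test 'wins'...
# ...yet the plausible relative lift still ranges from a few percent
# to well over the observed 17.5%:
rel_low, rel_high = low / p_a, high / p_a
```

Here the test clears the 95% bar, but the compatible relative lift runs from roughly 3% to roughly 32% – significance tells you the lift is probably real, not how big it is.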

Repeated significance testing errors: If you run A/B tests on your website and regularly check ongoing experiments, then a result that looks statistically significant on your dashboard has a good chance of actually being insignificant.

Let us explain the source of the problem with a simple example:


Suppose you are analyzing the end results of an A/B test whose sample size was fixed in advance. Meaning, the tester did not interfere during the course of the test; instead, he patiently waited until the predetermined sample size was reached.

The table below shows four possible intermediate outcomes at time points when the test had 1000 and 5000 web sessions:


The most interesting are Outcomes 2 and 3, which started one way, flipped mid-course, and ended the other way.

Let’s now play out a real-life scenario with a proactive tester who closely monitors test results and stops the test the first time it reaches his desired confidence level.


The picture above shows that in the case of Outcome 2 the test will be stopped prematurely, producing a false positive result.
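The damage done by peeking can be demonstrated with a small simulation. This is a sketch under stated assumptions (A/A tests with a 5% conversion rate, a peek every 500 sessions); the exact error rate will vary, but it lands well above the nominal 5%:

```python
import random
from math import sqrt

def peeking_false_positive_rate(n_experiments=300, n_per_arm=5000,
                                peek_every=500, seed=7):
    """Simulate A/A tests (both arms identical, so any 'winner' is noise).

    An impatient tester peeks every `peek_every` sessions per arm and stops
    the first time the running z-score crosses 1.96 ('95% confidence').
    """
    rng = random.Random(seed)
    p = 0.05  # identical true conversion rate for both arms
    false_positives = 0
    for _ in range(n_experiments):
        conv_a = conv_b = 0
        for n in range(1, n_per_arm + 1):
            conv_a += rng.random() < p
            conv_b += rng.random() < p
            if n % peek_every == 0:
                pooled = (conv_a + conv_b) / (2 * n)
                se = sqrt(2 * pooled * (1 - pooled) / n)
                if se > 0 and abs(conv_a - conv_b) / n / se > 1.96:
                    false_positives += 1  # declared a winner on pure noise
                    break
    return false_positives / n_experiments

rate = peeking_false_positive_rate()
```

With ten peeks per experiment, the realized false positive rate is several times the 5% the tester believes he is accepting – which is exactly the Outcome 2 trap.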

#13 Past success is the best predictor of future success

You cannot escape the responsibility of tomorrow by evading it today.
– Abraham Lincoln –

Myth:  Run tests until you reach statistically significant results. Then stop and permanently implement a winner.

Story: It’s like driving a car while looking only in the rear-view mirror. It works only if the road ahead is a mirror image of the road behind. Anything different will cause a bad outcome.

Reality: eCommerce is like the stock market. Demand, preferences, and sentiment are constantly changing.

This is a big problem that makes traditional brute-force split testing methods obsolete.

Here is some advice on how to minimize the impact of the time varying nature of eCommerce:

Test duration: Even if your site has a lot of traffic, and even if you are seeing statistically significant results, you should make sure that your test spans multiple weeks. This will average out day-by-day and week-by-week variations in results.

Post validation: In addition to statistically significant results you should also look for stable results that are performing consistently well over longer periods of time. In our experience only 10% of statistically significant results are also evergreen and stable.

That’s why you should post-validate your test results, making sure that you do not permanently implement a false winner.

#14 A/B testing is a destination

Success is a journey, not a destination. The doing is often more important than the outcome.
– Arthur Ashe –

Myth:  We will run a couple of A/B tests to improve our conversion rate and then focus on other priorities.

Story: Similar to diet pills, marketers are in desperate need of an easy solution for bad conversion rates of their eCommerce sites. A/B testing has the appearance of such an easy solution. The conventional wisdom is to test until some improvements are achieved. Then ‘check the box’ and do something else.

Reality: Testing is never done. This is a continuous process of experimentation and learning. The more you do it the more you see what else can be done to improve conversion rates.

We call it REAL testing. This is a systematic approach and an ongoing process as shown below:


R – Roadmap: you have to use analytics and qualitative assessments to detect weak links and establish testing priorities – focus matters.

E – Engage: you have to use engagement analytics and test design methods to create new tests.

A – Amplify: you have to manage your test – modify treatments as you learn, or allocate more traffic to top performers.

L – Learn: avoid average results – go deeper to uncover which audiences reacted positively to test treatments.

#15 Customer Experience Analytics is optional

Do not go where the path may lead, go instead where there is no path and leave a trail.
– Ralph Waldo Emerson –

Myth:  All we need to know is which treatment won.

Story: A/B testing is a very old concept. To make it more powerful it was later augmented with multivariate testing to enable simultaneous experimentation with more treatments.

Practitioners tend to focus on test treatment statistics while losing sight of the much bigger picture.

Reality: Testing is not about higher conversion rates. Instead, it’s about customer knowledge that will empower you to make better business decisions and compete more effectively in the marketplace.

Higher conversion rate is a score but not the goal of your testing initiative.

Running tests creates a rich customer experience data set that most of the time resides in its own data silo.

Web analytics integration: Professional-grade A/B testing tools provide web analytics integration capabilities. The problem is that such integration must be done on a per-test basis.

That’s why integration is rarely done. And even when it is done, it’s limited to only a few data points, like treatment/control metrics.


Customer Experience analytics: The ultimate data solution is full integration between test data and web analytics.

That’s only possible if the testing solution is built on the foundation of rich web analytics.
