The Definitive Guide To Conversion Optimization

Written by Neil Patel & Joseph Putman

Chapter Eight

A/B Testing Mistakes Even The Experts Make

As you’ve learned from this guide, CRO can increase conversions by as much as 221% to 363% with the right testing methods and the right approach. These kinds of results can cut your cost per acquisition in half (or more) and generate additional revenue for your business. You can then reinvest the money back into your business, or you can simply benefit from the increased profit. The choice is yours.

CRO is also beneficial for many types of organizations. Everyone from SaaS businesses to eCommerce stores to political parties can benefit from testing their websites to see which combination of copy, layout, and design will convince more people to take the desired action. Organizations can test everything from homepage headlines to e-mail subject lines, with each improvement furthering the organization’s goals.

We’re confident that once you get started with CRO, you won’t be able to stop, but there are some pitfalls you need to avoid. Even seasoned marketing professionals fall into them. Whether you’re a CRO consultant or a marketing director, there are a handful of common mistakes that are easy to make. Our goal in this chapter is to outline these pitfalls and help you avoid them. Let’s dive in.

Mistake One: Expecting CRO To Be A Silver Bullet

As awesome as CRO is, it’s not a silver bullet that solves every problem for your business. Sometimes the problem lies deeper than surface conversions because the problem is an underlying flaw within your business.

Let’s say, for example, that you start a SaaS business that provides a loyalty program for eCommerce stores. You expect it to be a hit, so after building the product, you release it to the public and wait for the sales to flood in. Two months later, the floodgates are still closed, and you’re left wondering what’s going on. Maybe CRO will help!

You start reading CRO articles and stumble upon this guide at Quick Sprout. Thinking CRO will solve your problems, you dig in, read everything you can, come up with some hypotheses for testing, and then begin your first test. Surely CRO will solve everything and get you back onto the right track.

In some cases, this may work because the problem could be that you’re not explaining what you do well enough, which is costing you sales. But sometimes that’s not the case.

In other cases, there’s a bigger problem than convincing more people to sign up. It’s possible that people aren’t interested in what you’re selling because there’s no product-market fit, i.e., there’s simply no demand for your offering.

In situations like this, you either need to pivot and create a new product or else find out how to tweak your current product so it matches what customers want. There’s still hope for your business, but you need to do more than CRO to get back on track. So how do you know which case matches your business?

First, you need to pay attention to whether or not people are signing up, using your service, or buying your product. When you sell to someone in person, do they sign up and use what you’re selling? If yes, then there’s a good chance that there’s a demand for what you’re selling. Usually, if you can sell the product in person, then you can find a way to sell it online. You just need to figure out how to duplicate your offline sales pitch for the online world.

Another test you can run is to see how many people would be disappointed without your service. Sean Ellis, the founder of Qualaroo and the first marketer at Dropbox, Lookout, Xobni, LogMeIn, and Uproar, believes you’ve found product-market fit when 40% of your customers would be very disappointed without your product. This number is somewhat arbitrary, but it’s one he’s found to hold true after looking at almost 100 different startups. You can find out what this number is for your customers by conducting a survey that asks whether they would be disappointed if your business closed its doors. There’s a good chance you have a viable product if at least 40% of customers would be very disappointed without your offering.
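
To make the math concrete, here’s a minimal sketch of how you might tally survey responses against that 40% benchmark. This is our own illustration; the function name and response labels are hypothetical, not part of any particular survey tool.

```python
def product_market_fit_score(responses):
    """Return the share of respondents who would be 'very disappointed'
    if the product disappeared (Sean Ellis's 40% benchmark)."""
    very_disappointed = sum(1 for r in responses if r == "very disappointed")
    return very_disappointed / len(responses)

# Hypothetical survey results:
responses = ["very disappointed", "somewhat disappointed",
             "very disappointed", "not disappointed",
             "very disappointed"]

score = product_market_fit_score(responses)
print(f"{score:.0%} very disappointed ->",
      "likely product-market fit" if score >= 0.40 else "keep iterating")
```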

These two tests will help you determine whether or not conversion rate optimization can help your business. If you do have product-market fit, then conversion rate optimization can help. If not, you’re better off tweaking your product or improving your business model before worrying too much about increasing conversion rates.

Mistake Two: Running Before-And-After Tests

Sometimes it’s tempting to run a before-and-after test, even when you’ve been warned not to. With a before-and-after test, you measure conversions on your site for a period of time, make a change, and then measure conversions for another period of time. Instead of testing two or more versions simultaneously, you test different versions during different periods of time.

As we’ve mentioned before, this is a bad idea because traffic quality varies from day to day and week to week. It’s not uncommon for a page to convert at 15% one day, 18% the next, and 12% the day after. It’s also not uncommon for a page to convert at 15% one week and 18% the following week. These fluctuations can be caused by the moods visitors are in, the economic climate, the quality of traffic, or any number of other factors.

Changes in traffic quality will impact conversion rates, which is why you need to remember to run A/B tests, not before-and-after tests.

For example, your site might get covered by Technorati, which increases traffic but decreases conversion rates. A large number of people visit your site, but they’re not as qualified as someone who clicks through from a Google ad. Thus, your conversion rate gets watered down, and if you tested a new version from one week to the next in this situation, the results would be skewed.

The only way to account for all these factors is to run a scientific A/B or multivariate test in which each version is shown to a proportionate number of visitors throughout the testing period. By randomly showing the two (or more) versions to visitors during the same week, there’s a much greater likelihood that the results will be statistically valid.
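
To illustrate what “randomly showing” versions means in practice, here’s a minimal sketch of deterministic visitor bucketing, a common technique under the hood of testing tools. The function name is our own, and any real A/B testing platform will handle this for you.

```python
import hashlib

def assign_variant(visitor_id, variants=("control", "challenger")):
    """Hash the visitor ID so each visitor is consistently assigned to one
    version, and all versions run side by side over the same time period."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("visitor-1234"))  # the same visitor always sees the same version
```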

This is why it’s important that you never run a before-and-after test. There’s no way to know how valid the results are, which means you have no way to make an educated decision about which version is best. Always run an A/B, multivariate, or split test to get the most accurate results and to make decisions that will benefit your business.

Mistake Three: Ending A Test Too Soon

As we’ve mentioned in previous chapters, it’s important to run your tests for at least 7 days and until there’s a 95% or higher likelihood of finding a winning version. These are the basic rules you should follow. In addition, you can consider running tests until there are at least 100 conversions. All three of these guidelines will help you to get more accurate results.
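
As a rough sketch of how the three guidelines fit together, here’s a simple stopping-rule check. It uses a one-sided two-proportion z-test, which is a simplification of what commercial testing tools compute, and it assumes the 100-conversion rule applies to each version. All the names here are our own illustration.

```python
import math

def significance(conv_a, visitors_a, conv_b, visitors_b):
    """One-sided two-proportion z-test: the approximate likelihood
    that version B genuinely beats version A."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def ready_to_declare_winner(days_running, conv_a, visitors_a, conv_b, visitors_b):
    """All three guidelines must hold: at least 7 days, a 95%+ likelihood
    of a winner, and at least 100 conversions for each version."""
    return (days_running >= 7
            and significance(conv_a, visitors_a, conv_b, visitors_b) >= 0.95
            and min(conv_a, conv_b) >= 100)

# Two days in, the numbers may look decisive, but the check still fails:
print(ready_to_declare_winner(2, 110, 1_000, 222, 1_000))  # False
```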

The problem, however, is that sometimes it’s difficult to wait. You come up with a hypothesis, make a change, and begin testing. After two days, version one increases conversions by 102% with a 97% likelihood of being the winning version. That matches one of the three important testing guidelines, so you decide to declare it a winner. Besides, you’ve got other tests to run, so why not pick a winner and move on?

The problem is that you have no idea whether this result is statistically valid, even though the test says the variation has a 97% chance of winning. The test just hasn’t run long enough. It wouldn’t be unheard of for the test to reverse course and for the variation that’s currently increasing conversions by 102% to be decreasing them by 30% a week later. That’s why it’s important to follow the guidelines and be patient with your tests.

Peep Laja from ConversionXL wrote about a test where the challenging variation decreased conversions by 89.5% after two days of testing, with a 0% chance of winning. At that point the client was ready to call it quits, but Peep recommended running the test a bit longer. Ten days later, the challenger was increasing conversions by 25.18% with a 95% chance of winning.

After two days, the challenging variation had decreased conversions by 89.5%.

Ten days later, the challenging variation was increasing conversions by 25.18%, proving that the initial sample size was too small to produce statistically significant results.

What happened? In short, the initial sample size was too small. Two days wasn’t long enough to determine a winner, even though the testing software said otherwise. Peep recommends using a sample-size calculator to determine whether or not your sample is large enough to declare a winner. If it isn’t, you should continue testing until it is.
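
If you want a feel for the arithmetic behind a calculator like that, here’s a minimal sketch using the standard normal-approximation formula, with 95% confidence and 80% power baked in. The function name and example numbers are our own.

```python
import math

def sample_size_per_variant(baseline_rate, relative_lift):
    """Rough number of visitors needed per variant to detect a relative
    lift over a baseline conversion rate (95% confidence, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = 1.96  # two-sided z-score for 95% confidence
    z_beta = 0.84   # z-score for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 20% relative lift over a 5% baseline conversion rate:
print(sample_size_per_variant(0.05, 0.20))  # 8146 visitors per variant
```

Notice how the required sample balloons as the lift you hope to detect shrinks, which is exactly why two days of traffic is rarely enough.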

Mistake Four: Trusting What You Read

Another mistake you can make is trusting what you read online and blindly implementing someone else’s test results on your site. Maybe another site changed their button copy and increased conversions by 28%. That’s great, but there’s no guarantee you’ll get the same results on your site.

You might stumble on a post about the Performable test we mentioned previously, where conversions went up 21% after the button color was changed from green to red. Assuming that red is the magical conversion color, you decide to make the same change on your site, but without testing. Unbeknownst to you, changing the button color decreases conversions by 15%, but because you didn’t test, you’ll never know.

There are a lot of different factors that determine why a change works on a site. Maybe a site’s customers are looking for something in particular, or maybe a certain color contrasts with the site’s primary color, draws attention to the CTA, and gets people to take action. Who knows? All you know is whether or not something works after you run a test. If you don’t test, you could end up implementing someone else’s result only to shoot yourself in the foot by accidentally decreasing conversions.

Another problem, which we touched on before, is that the results could be reported inaccurately or the test could have been run improperly. When you read about a test on another site, there’s no way to know that it was run to a 95% confidence level or higher unless that gets reported, and even then, you can’t know with 100% certainty. The only way to know for sure is to test the change yourself and see how it impacts your site.

Mistake Five: Expecting Big Wins From Small Changes

Yet another big mistake people make is expecting big wins from small changes. They might add one line of copy to a page or only test button copy or headline changes. All of these are great things to test, but frequently, they’ll only get you so far.

In many cases, it’s best to test radical changes to see how conversions are impacted. Then, once you’ve come up with a variation that significantly increases conversions, you can continue tweaking to inch conversions up even higher. But if you only tweak your site, you’ll never dramatically increase conversions which means you’ll never find a new baseline you can work from before testing additional tweaks.

The tests run on Crazy Egg are a great example of this. The first big win came from changing the homepage to a long-copy sales letter. Then, after a big win from a drastic change, the Crazy Egg team tested call-to-action buttons and other small changes to improve the results even more.

Now this isn’t to say that you shouldn’t ever run small tests. If you’re just getting started, headlines and button copy are great places to start. But if you’ve already practiced with those types of tests and you have a pretty good idea what you’re doing, you may want to mix things up a bit and test an entirely new version of your homepage. You have a better chance of getting a big win from a drastic change than from a small tweak.

Mistake Six: Thinking The First Step Is To Come Up With A Test

We already covered this in chapter two when we talked about gathering data, but we can’t stress enough how important it is to gather data before coming up with your first test. Yes, you could simply start with a test, and if you already know enough about your site and customers, you might get lucky and improve conversions. But this just isn’t the best route to take.

It’s much better to survey your customers first because you aren’t the one purchasing what you’re selling. Your customers are the ones who will pay for your offering. That’s why you need to find out what they think about your product and which hurdles are keeping them from buying.

Good questions to ask at this point include:

  1. Is there anything else you’d like to see on this page?
  2. What’s preventing you from making a purchase?
  3. Is there anything that would convince you to make a purchase today?

These types of questions will teach you more about your customers and will reveal what’s preventing them from buying. Once you gather the data, analyze it, and come up with a hypothesis, then you’re ready to run a test and measure the results.

Mistake Seven: Running Too Many Tests

It’s easy to make the mistake of running too many tests. Maybe you get excited about A/B testing, see a lot of opportunities on your site, and run 20 tests in the first month. Even if you have enough traffic for that many tests, it’s not recommended.

The reason it’s not recommended is that it takes time to gather data, analyze the data, run a test, measure the results, and then decide what to do next. It’s OK to run back-to-back tests in some cases, but you don’t want to run too many tests in too short a period of time.

The main issue is that every test is an opportunity to decrease conversions and therefore decrease revenue. The hope is that you’ll get a conversion boost, but that’s not always the case. Since every test carries that risk, running too many tests in quick succession can significantly impact your revenue stream.

It’s much better to take your time to gather sufficient data, analyze the results, and then run educated tests based on the data you gathered than to quickly burn through a lot of tests that may or may not benefit your bottom line.

Mistake Eight: Testing Too Many Variables

It’s very possible to test too many variables at once. If you test too many things at one time, you won’t know what’s affecting conversions and may miss out on a positive improvement. You might change the headline and the button copy and add a testimonial, and then be disappointed with the results when one of the changes (such as the testimonial) may have improved conversions on its own while another (possibly the headline) dragged conversions down.

Timothy Sykes had this experience with his sales letter. At one point he changed his video, headline, copy, and form field design all at once. Conversions dropped, and he had no way to tell whether any individual change was worth implementing.

Testing too many variables at once can leave you scratching your head wondering which changes improved or decreased conversions and which ones should be implemented.

So on the one hand, sometimes you need to test radical changes to see if they improve conversions; on the other hand, you want to be careful about testing too many variables at once because you won’t find out what did and didn’t improve conversions.
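
To see why the variable count gets out of hand so quickly, consider how combinations multiply in a multivariate test. Here’s a quick sketch (the variable names are our own):

```python
from itertools import product

# Three on/off changes already produce 2 x 2 x 2 = 8 page versions:
variables = {
    "headline": ["current", "new"],
    "button_copy": ["current", "new"],
    "testimonial": ["absent", "present"],
}

combinations = list(product(*variables.values()))
print(len(combinations))  # 8 -- and each version needs its own share of traffic
for combo in combinations:
    print(dict(zip(variables, combo)))
```

Every variable you add doubles (or worse) the number of versions competing for the same traffic, which is why isolating a few variables at a time is usually the wiser path.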

Mistake Nine: Testing Micro-Conversions

The next mistake CRO professionals make is testing micro-conversions. This means that instead of measuring your end goal, you measure conversions at an earlier step in the funnel.

An example of this would be measuring the number of people who click on the 15-day free trial link on a site like Help Scout. It’s great if more people click on the link, but what really matters is for more people to fill out the form and to actually sign up for a free trial.

Just because more people click on a free trial link doesn’t mean more people will sign up. That’s why you need to measure both micro and macro conversions to make sure you’re accomplishing the goal you set out to accomplish.

The problem with measuring micro-conversions, which is something we touched on briefly earlier in this guide, is that you never know how a micro-conversion will affect the end goal for your product. Changing a headline could get 10% more people to click on a free-trial link, but it could lead to a smaller percentage of people signing up for a free trial.

It’s entirely possible that the headline tricks people into clicking somehow but annoys them once they get to the free trial form. In a case like this, it doesn’t matter that more people take the next step if they’re not completing the goal you really want them to complete.

With that said, quite often it’s good to improve micro-conversions. You definitely want more people to go from step one to step two, because more people on step two means more people further along in your funnel who may convert. But the point is that you can’t completely trust these numbers. You also need to measure conversions for your final goal so you can be sure that increased micro-conversions are actually improving the macro-conversion that matters most for your business.
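
Here’s a minimal sketch of what measuring both levels might look like. The numbers are hypothetical, and they show a headline change that lifts the micro-conversion while hurting the macro-conversion:

```python
def funnel_rates(visitors, trial_clicks, signups):
    """Report the micro-conversion (free trial clicks) alongside the
    macro-conversion (completed signups) so neither is read alone."""
    return {
        "click_rate": trial_clicks / visitors,      # micro-conversion
        "signup_rate": signups / visitors,          # macro-conversion (the real goal)
        "click_to_signup": signups / trial_clicks,  # where clicks leak away
    }

# Hypothetical before/after for a new headline:
before = funnel_rates(visitors=10_000, trial_clicks=800, signups=200)
after = funnel_rates(visitors=10_000, trial_clicks=900, signups=170)
print(before)  # click_rate 8.0%, signup_rate 2.0%
print(after)   # click_rate 9.0% (up), but signup_rate 1.7% (down)
```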

Mistake Ten: Not Committing To Testing Everything

One of the biggest mistakes you can make is not testing every change you make on your site. You may think that rearranging the homepage or swapping out a current picture for a new one won’t affect conversions, but there’s a good chance that it will. The only way to find out is to run a test.

Sometimes a CEO wants something changed and doesn’t care that much about conversions. He’s convinced it will help the business, so he wants an image or feature added. And if your business hasn’t committed to testing as an organization, and you don’t have buy-in at every level, then you may have no choice but to add the extra feature.

But in another scenario, you do have an option. This is the scenario in which the entire organization is on board with testing and understands how important it is. Everyone from the CEO to the Communications department understands how valuable CRO is and how every little change can affect conversions. At that point, you’re committed to testing, and you know you can’t make any changes without first running an A/B test to see how they impact sales. Once you have this level of commitment, you can be sure you won’t make the mistake of implementing changes without testing first.

Chapter Eight Summary

Throughout this guide we’ve talked about what conversion rate optimization is and why it’s so important. We covered why you need to begin by gathering data and how to analyze it once you do. After that we discussed how to run your first test and how to measure the results. Then, in the final chapters, we discussed what exactly you should be testing, 30 expert tips for better conversion results, and 10 A/B testing mistakes even the experts make.

It’s our hope that all of these chapters will help you to get started in the right direction with conversion rate optimization and will lead you down the path to win after win for your organization. With a little bit of practice and a fair amount of data gathering, we’re convinced you can improve conversions for your website and generate more leads and sales. We wish you the best of luck!
