What Is A/B Testing?

One of the easier types of CRO tests is the A/B test, also known as a split test. A/B testing splits your audience so you can show two variations of a piece of marketing content, like a green call-to-action button versus a red one, to different segments and see which version performs better.

To run an A/B test, create two different versions of one piece of content, show each version to a similarly sized audience, and analyze which one performed better.

For example, let’s say you want to see if moving a certain call-to-action button to the top of your homepage instead of keeping it in the sidebar will improve its conversion rate.
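To make that concrete, conversion rate is simply conversions divided by visitors. Here's a quick Python sketch; all of the numbers are hypothetical:

    # Hypothetical numbers for illustration only.
    control_visitors, control_conversions = 5000, 150  # CTA in the sidebar
    variant_visitors, variant_conversions = 5000, 195  # CTA at the top

    control_rate = control_conversions / control_visitors  # 0.030 -> 3.0%
    variant_rate = variant_conversions / variant_visitors  # 0.039 -> 3.9%

    print(f"Control: {control_rate:.1%}, Variant: {variant_rate:.1%}")

Whether a 3.0% versus 3.9% difference actually means anything depends on sample size and statistical significance, which the steps below cover.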

Now, let’s walk through the steps for setting up, running, and measuring an A/B test.

How to Conduct A/B Testing

Before the A/B Test

Pick one variable to test.

As you optimize your web pages and emails, you might find a number of variables you want to test. But to measure how effective a change is, you’ll have to isolate one independent variable at a time. Look at the various elements in your marketing resources and their possible alternatives for design, wording, and layout. Other things you might test include email subject lines, sender names, and different ways to personalize your emails. Keep in mind that even simple changes, like swapping the image in your email, can drive big improvements. In fact, these sorts of changes are usually easier to measure than bigger ones.

Identify your goal.

Although you’ll measure a number of metrics during any given test, choose a primary metric to focus on before you run the test. In fact, do it before you even set up the second variation. This is your dependent variable. Think about where you want this variable to be at the end of the split test, and state an official hypothesis (for example, “Moving the CTA to the top of the page will increase its click-through rate”) so you can examine your results against that prediction.

Create a ‘control’ and a ‘challenger.’

You now have your independent variable, your dependent variable, and your desired outcome. Use this information to set up the unaltered version of whatever you’re testing as your “control”. If you’re testing a web page, this is the unaltered web page as it exists already. If you’re testing a landing page, this would be the landing page design and copy you would normally use. From there, build a variation, or a “challenger” — the website, landing page, or email you’ll test against your control. For example, if you’re wondering whether including a testimonial on a landing page would make a difference, set up your control page with no testimonials. Then, create your variation with a testimonial.

Split your sample groups equally and randomly.

For tests where you have more control over the audience, like with emails, you need to split your list into two or more groups that are equal in size and randomly assigned in order to get conclusive results. How you do this will vary depending on the A/B testing tool you use.
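If your tool doesn’t handle the split for you, one common approach is to hash a user ID so each visitor gets a stable, roughly 50/50 assignment. Here’s a minimal Python sketch; the test name and user IDs are hypothetical:

    import hashlib

    def assign_variation(user_id: str, test_name: str = "cta_position_test") -> str:
        """Deterministically assign a user to 'A' or 'B' (~50/50 split)."""
        digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # Each user gets the same assignment on every visit.
    for uid in ["user-101", "user-102", "user-103"]:
        print(uid, "->", assign_variation(uid))

Hashing instead of random assignment on every page load means a returning visitor always sees the same variation, which keeps your results clean.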

Determine your sample size.

How you determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you’re running. If you’re A/B testing an email, you’ll probably want to send an A/B test to a smaller portion of your list to get statistically significant results. Eventually, you’ll pick a winner and send the winning variation on to the rest of the list.

If you’re testing something that doesn’t have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size. You’ll need to let your test run long enough to obtain a substantial number of views, otherwise it’ll be hard to tell whether there was a statistically significant difference between the two variations.
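If you’re curious about the underlying math, the standard two-proportion formula estimates how many visitors you need per variant. The sketch below is a simplified approximation with hypothetical inputs, not a replacement for your testing tool’s calculator:

    from statistics import NormalDist

    def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.8):
        """Approximate visitors needed per variant for a two-proportion test."""
        p1, p2 = baseline, baseline + lift
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
        z_beta = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                     + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return int(numerator / (p2 - p1) ** 2) + 1

    # Hypothetical: 3% baseline conversion rate, hoping to detect a 1-point lift.
    print(sample_size_per_variant(0.03, 0.01))  # roughly 5,300 per variant

Note how sensitive the result is to the lift you want to detect: the smaller the improvement you’re hunting for, the larger the sample you need.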

Decide how significant your results need to be.

Once you’ve picked your goal metric, think about how significant your results need to be to justify choosing one variation over another. Statistical significance is a super important part of the A/B testing process that’s often misunderstood. The higher your confidence level, the more sure you can be about your results; a 95% confidence level is a common standard.
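For a rough sense of how that check works, here’s a two-sided, two-proportion z-test in plain Python, reusing the hypothetical homepage numbers from earlier. Most A/B testing tools run an equivalent calculation for you:

    from statistics import NormalDist

    def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
        """Two-sided p-value for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_b - p_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Hypothetical results: 150/5000 vs. 195/5000 conversions.
    p = two_proportion_p_value(150, 5000, 195, 5000)
    print(f"p-value: {p:.3f}")  # ~0.014; below 0.05 is significant at 95% confidence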

During the A/B Test

Use an A/B testing tool.

To do an A/B test on your website or in an email, you’ll need to use an A/B testing tool. If you’re a HubSpot Enterprise customer, the HubSpot software has features that let you A/B test emails, calls-to-action, and landing pages.
For non-HubSpot Enterprise customers, other options include Google Analytics’ Content Experiments, which lets you A/B test up to 10 full versions of a single web page and compare their performance using a random sample of users.

Test both variations simultaneously.

Timing plays a significant role in your marketing campaign’s results, whether it’s time of day, day of the week, or month of the year. If you were to run Version A during one month and Version B a month later, how would you know whether the performance change was caused by the different design or the different month?

Give the A/B test enough time to produce useful data.

You’ll want to let your test run long enough to reach the sample size you determined earlier. Ending the test too early makes it hard to tell whether there was a statistically significant difference between the two variations.
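A quick back-of-the-envelope check, using the hypothetical sample size from the earlier step: divide the total visitors you need by your daily traffic to estimate how many days the test should run:

    import math

    # Hypothetical: ~5,300 visitors needed per variant (see the sample size
    # step above) and ~800 visitors per day split across two variants.
    needed_per_variant = 5300
    variants = 2
    daily_visitors = 800

    days = math.ceil(needed_per_variant * variants / daily_visitors)
    print(f"Run the test for at least {days} days")  # ~14 days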

Ask for feedback from real users.

A/B testing has a lot to do with quantitative data, which won’t necessarily help you understand why people take certain actions over others. While you’re running your A/B test, why not collect qualitative feedback from real users?

You might find, for example, that a lot of people clicked on a call-to-action leading them to an ebook, but once they saw the price, they didn’t convert. That kind of information will give you a lot of insight into why your users are behaving in certain ways.