By: Samantha Ferguson

A/B Testing Best Practices

There are a lot of inbound rules out there. Always put CTAs at the bottom of your blog, don't include navigation on a landing page, send your emails at 10am on Tuesdays (hint: Tuesday might not be the best day to send), etc. Point being, there are a lot of best practices or rules that may not be universal. One company's success with CTA placement or email send time may not apply to your industry or your service/product. 

So, before you go Googling "where should I put my CTA?" or "should I include emojis in my email subject line?", take a step back. Can Google really answer that question for you? Chances are, it can't. So, what do you do? The answer is A/B testing. You need to experiment to know what will work best for you and your company.

What is an A/B Test?

You may be thinking about all the possibilities with A/B testing, and they are nearly endless. You can test content such as emails, blog posts, landing pages, website pages and calls-to-action. Before you jump in, there are some important things to consider.

A/B testing is an experiment where you divide your traffic or recipients evenly between two variations of the same content to ultimately make a decision based on the results. Just like any experiment, you need a hypothesis, a control, a variation, and a metric or statistic to measure.
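
To make the mechanics concrete, here's a minimal sketch of how that even split might work if you were assigning visitors yourself (most email and landing page tools handle this for you; the variant_for helper below is just an illustration):

import hashlib

def variant_for(visitor_id: str) -> str:
    """Assign a visitor to variation A or B.

    Hashing the visitor ID keeps the assignment stable across visits
    and splits traffic roughly 50/50 between the two variations.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Every visitor lands in exactly one variation, and the split stays roughly even.
for visitor in ["visitor-101", "visitor-102", "visitor-103"]:
    print(visitor, "->", variant_for(visitor))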

Instead of making decisions based on recommendations or Googling best practices, you can use A/B testing to make decisions about your content based on data. The results may agree with the "rules" or they may not... You don't know until you try!

How Do I Do an A/B Test?

First, make sure you have the volume of traffic and conversions to support an A/B test. A/B tests aren't for everyone, so before you jump in, make sure you have enough volume to justify running the experiment.

Establish the Test Subject

Once you confirm you have the volume of traffic or conversions necessary, you need to have something that you're measuring. This could very likely stem from the original question that you Googled, but that is only a symptom of the real issue. You should conduct A/B tests based on a pain that you are experiencing with your current marketing efforts.

It should be an area that you're struggling with or want to improve, such as the click-through rate on your CTAs or the open rate of your emails. A/B tests provide you with data, but they should also be born out of data so you're not wasting your time experimenting where there's no need. When you have a list of things you'd like to improve, prioritize them so you don't get overwhelmed... like Steve Harvey.

Set a Goal/Hypothesis

Once you have a problem you want to address, brainstorm how to improve that metric. This is your A/B test goal. This is where you can bring in those rules or best practices. Research what others have done to improve the same issue, but take it with a grain of salt! Don't jump into a test just because someone told you it would work; you may be wasting your time. Take into account your unique business and industry.

If there are no other sources of solution ideas for your problem, get creative! Think your subject line might be affecting the open rate of your email? Your hypothesis might look something like this: "Reducing my subject line from 6 words to 4 will improve open rates." Maybe your CTA click-through rate is struggling; then you'll want to test something like this: "Replacing image CTAs with in-text CTAs will improve my click-through rate."

Develop a Plan

Be as strategic as you can be here. You want to make your test as focused as possible to reduce any external factors affecting your results. You want to change one. thing. at. a. time. If you are testing CTA click-through-rate and you change the placement AND the design, how can you know what influenced the results you got? Hint: If your hypothesis includes the word and, go back to the drawing board.

Document your plan for testing your hypothesis as well as any relevant metrics that you want to keep track of. You'll want to keep this all in a centralized place to reference later when you're crafting an email and wondering what you decided worked best. If you don't, though, no harm no foul. You can always do another test! 
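
If you don't already have a template for that centralized record, here's a minimal sketch of what one entry could look like (the field names and the example test are just assumptions; a shared spreadsheet works just as well):

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ABTestPlan:
    """One A/B test entry in a central test log."""
    name: str
    hypothesis: str
    metric: str        # the one number you'll judge the test on
    control: str
    variation: str
    start: date
    end: date
    result: str = ""   # fill in once a winner is chosen

plan = ABTestPlan(
    name="Subject line length",
    hypothesis="Cutting the subject line from 6 words to 4 will improve open rates.",
    metric="email open rate",
    control="6-word subject line",
    variation="4-word subject line",
    start=date.today(),
    end=date.today() + timedelta(days=7),
)
print(plan)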

Test and Decide on a Winner

Once you have the framework all laid out, you'll need to execute your plan. Test the hypothesis for at least a week (unless you're A/B testing an email in HubSpot) and once your time frame is complete, look at the results. Which variation met your goal at a higher percentage? There will be a winner, but you'll need to use your judgment on how significant that win was. Is it worth testing again to be sure? Are the results too similar to really make a solid conclusion? The winner will be your "best practice" or "rule" moving forward.
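
If you want help judging how meaningful the win was, one common option is a two-proportion z-test. Here's a rough sketch with made-up conversion counts (this assumes a simple conversion-rate metric; your testing tool may already report confidence for you):

from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for 'A and B convert at the same rate'."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: 350 conversions out of 5,000 vs. 410 out of 5,000.
p_value = two_proportion_z_test(350, 5000, 410, 5000)
print(f"p-value: {p_value:.3f}")  # below 0.05 means roughly 95% confidence the difference is real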

Continuously Update

Revisit your tests every once in a while and set a cadence for when you should consider test results "expired." The marketing world is always changing, but not nearly as fast as consumers are. So, be sure you're taking into account any factors that might mean you should revisit your old results. If your results aren't showing any red flags, though, don't jump into another test just because! Be sure to make your decisions based on data and user behavior, but don't get comfortable just because you tested once.

Don't Get Discouraged

A/B tests aren't always successful. You may not get better results from the item you're testing, or you may even get worse results. Take a step back and learn from your mistakes. Why do you think that happened? What do you think might work instead? Test again! If you continually fail A/B tests, try bringing in a fresh pair of eyes. Your colleague may be able to see an opportunity for improvement that you haven't considered. Don't let a little failure stop you from getting back on the horse.

The reality is that A/B tests aren't about winning. They're about learning. You're using these tests to learn from your audience what they want. If you have a "failed" A/B test, it simply means your audience doesn't want that, and that's OK. Take that experience and use it to decide what they do want.

Best Practices

  • Don't call it too early. You need to be at least 95% confident that the variation you're testing beats the control. 
  • Use a statistically significant audience size. A good rule of thumb is around 350 conversions per variation (there's a quick sample-size sketch after this list).
  • Run tests for at least a week. Days of the week influence your tests greatly, so start and end on the same day of the week. 
  • Only test one variable at a time. You don't want to muddy your results with multiple variations.
  • Understand what success means before you test, and learn from your mistakes.  
  • Make it random. Don't segment with smart lists or any groups with commonalities in a test; you could skew your data.
  • Use common sense. Don't test for the sake of testing, and be sure the variable you're testing has an impact on your business. 
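
For the sample-size point above, here's a rough sketch of the standard two-proportion calculation for how many visitors or recipients each variation needs (the 5% baseline rate and 2-point lift are made-up numbers, and the formula assumes roughly 95% confidence and 80% power):

from math import ceil

def sample_size_per_variation(baseline_rate: float, minimum_lift: float,
                              z_alpha: float = 1.96, z_power: float = 0.84) -> int:
    """Approximate visitors needed per variation to detect an absolute
    `minimum_lift` over `baseline_rate` at ~95% confidence and ~80% power."""
    p1 = baseline_rate
    p2 = baseline_rate + minimum_lift
    # Simplified two-proportion formula using the two individual variances.
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_power) ** 2 * variance) / (minimum_lift ** 2)
    return ceil(n)

# Hypothetical: a 5% conversion rate you hope to push to 7%.
print(sample_size_per_variation(0.05, 0.02))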

Get to Testing!

If you're not testing your content, how can you be sure it's what your audience really wants? Even if you're satisfied with the results you're getting, you should always look to improve the things you're doing for the sake of improving user experience. In this day and age, it's all about making our prospects' and customers' lives easier and guiding them to our product or service more quickly, so what are you waiting for? Get testing today!
