How To Do A/B Testing, and Why You Should Never Stop Doing It
“The only thing I know is that I know nothing.” – Socrates
While the ancient father of Western philosophy knew nothing about KPIs, conversion rates, or pixels, his words still carry wisdom for the world of digital marketing. What do we as brands really know about our customers, our value propositions, and ourselves? Do we really test against our truths?
As a marketing agency, we often see businesses and brands follow the same logical sequence in their marketing efforts:
- Set Goals & Metrics
- Conduct Research
- Determine Audiences
- Set Budgets
- Select Marketing Channels
- Deploy Creative
- Miss Goal
- Abandon Hope
That’s not to say these steps aren’t crucial to every marketing campaign, but months of planning often go into just getting a campaign off the ground, and if (or more likely when) it doesn’t achieve immediate results, brands see it as a failure and start over with something completely different. Marketing is more than just pictures and copy to communicate what you’re selling—it’s also a science.
Marketers and brands alike must question assumptions, make hypotheses, and test them regularly to find out what works and, in many cases, what doesn’t.
There are many testing methodologies ranging in complexity based on the problem and the capability of any one person or team to effectively manage them. The simplest is A/B Testing. In the following sections, we will explore what A/B testing is, how to eliminate variables when building your test, and why it’s important to never have a finish line (kind of).
What is A/B Testing?
A/B testing is a method that aims to test two variations of a single input. That input can be in the form of a webpage, email, or ad. The ultimate goal is typically to determine which one performs better (measured by a specific KPI, but we’ll get into that in the next section).
The two versions, A and B, are identical except for one element that is changed between them. This element can be anything from the headline, the copy, the images, or the call-to-action.
By randomly showing each version to a portion of your audience, you can collect data on which version leads to more clicks, conversions, or engagement. This data helps you make data-driven decisions on which version to use moving forward, with the goal of optimizing your marketing efforts and achieving better results.
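Randomly splitting an audience is usually done deterministically rather than with a coin flip, so the same person always sees the same variant. A minimal sketch of that idea, using a hash of a (hypothetical) user ID and experiment name — the function and field names here are illustrative, not from any particular platform:

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the user ID together with the experiment name means the
    same person always sees the same variant, and different experiments
    split the audience independently of one another.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Across a large audience, the split comes out roughly 50/50.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"user-{i}", "hero-image-test")] += 1
```

Because assignment is a pure function of the inputs, a returning visitor never flips between versions mid-test, which would otherwise contaminate the comparison.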
How to Structure Your Test and Manage Variables
It’s easy to assume that A/B testing is the pursuit of an answer: a piece of knowledge that promises to solve your problems. But here’s the catch: great answers only come from great questions. Some easy questions to ask before you get started are:
- What is the element you want to test? Copy, Image, Colors, CTA, etc.?
- What is the defining KPI that will be used to measure success or change?
- Do you have benchmark KPIs to test against to measure the magnitude of success?
- Do you have an audience sample large enough or consistent enough to compare effectively?
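The last question, whether your sample is large enough, can be answered up front with a standard two-proportion sample-size formula. A rough sketch, assuming a 5% significance level and 80% power (the z-values below are the conventional defaults; the 5%-to-6% conversion rates in the example are made up):

```python
import math

def required_sample_size(baseline: float, target: float,
                         z_alpha: float = 1.96, z_power: float = 0.8416) -> int:
    """Approximate visitors needed *per variant* to detect a lift from
    `baseline` to `target` conversion rate.

    z_alpha = 1.96 corresponds to a two-sided 5% significance level,
    z_power = 0.8416 to 80% power.
    """
    variance = baseline * (1 - baseline) + target * (1 - target)
    return math.ceil((z_alpha + z_power) ** 2 * variance
                     / (target - baseline) ** 2)

# e.g. detecting a lift from a 5% to a 6% conversion rate
n = required_sample_size(0.05, 0.06)
```

Note how the required sample shrinks as the lift you hope to detect grows: small, subtle differences need far more traffic than dramatic ones, which is worth knowing before you commit budget to a test.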
The goal of A/B testing is to ask very specific questions that usually yield simple but definitive answers, then use those answers to define new, better-informed tests. You’re not limited to a finite number of tests, though your budget may say otherwise.
The best way to structure an A/B test is to eliminate all variables that may cloud your answers. Scientific papers have dedicated sections that evaluate, explain, and mitigate confounding variables and potential bias, and your tests should apply the same level of consideration.
Some variables are easier to eliminate than others, and some are nearly impossible to eliminate, which is OK. Don’t let the pursuit of a perfect test keep you from getting results.
Ideally, your audience members would have the memory of a goldfish, or be able to participate in A and B simultaneously. However, marketers and brands don’t typically have this luxury, unless their audience actually is goldfish, in which case something tells me they’re made of money.
Variables that are difficult to control but should be considered:
- Audience Segmentation: Can your sample audience be split in half while still ensuring an equal distribution of like-minded individuals to test against? Birds of a feather flock together.
- Exposure Timing: Are your A/B tests coordinated on the same day, time, and week as other tests? Who you are on Monday morning versus Friday afternoon can be very different.
- 1st Party vs. 3rd Party: You have far less control over channels serving creative to a target audience than over your own internal lists of targets. A bird in the hand is worth two in the bush.
How to Execute A/B Testing: Step-by-Step Guide
- Step 1: Research
- Step 2: Create hypothesis
- Step 3: Create variations
- Step 4: Run test
- Step 5: Analyze results and deploy changes
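Step 5 is where tests most often go wrong: a small gap between A and B can easily be noise. A common way to check is a two-proportion z-test; the sketch below is a generic illustration (the conversion counts in the usage example are invented), not tied to any particular analytics tool:

```python
import math

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is B's conversion rate genuinely
    different from A's, or could the gap be chance?

    Returns (z statistic, two-sided p-value); p < 0.05 is the
    conventional bar for declaring a winner.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Variant A: 500 conversions from 10,000 views; B: 600 from 10,000.
z, p = ab_significance(500, 10_000, 600, 10_000)
```

In this hypothetical case B’s 6% beats A’s 5% with p well under 0.05, so deploying B is defensible; had the counts been 500 vs. 510, the same test would tell you to keep gathering data rather than call a winner.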
Why Testing Should Have a Goal but Not a Finish Line
While every test should have clear and distinct timelines and goals associated with it, that doesn’t mean testing should stop. To be most effective, an A/B test should aim to answer one question. However, the outcome of a test is usually only a small part of a greater whole.
Knowing which of two ads performs best is like taking first place in a two-person race—it doesn’t mean the winner is the best ad experience possible. Once you complete a test, you should pause and do another audit of all potential variables that may have impacted the results.
When you feel comfortable with your findings, it’s time to create 2-3 more variations of the winning ad and test additional variables, continually refining your creative offering to meet your audience’s interests.
Here’s an example of how testing variables in email can increase your open and click rates:
- Subject Lines: The first step of any email campaign is getting people to open it, and their first impression is the subject line. Depending on the size of your audience, split test sending an email with 2-3 different subject lines.
- Email Headers/Headlines: Whether you want to know if a specific hero image or opening hook works best, you can test which leads to more clicks.
- CTAs/Buttons: You can test CTAs in multiple ways. Maybe you want to determine which CTA does the best job of communicating the step you want them to take, or maybe you want to test how the placement of CTAs within emails affects click rates.
- Content: Once you can reliably get an audience to open and engage with your emails, you can take it to an advanced level and begin testing content. You can first determine what type of content your general audience engages with, and then, with advanced tracking set up, you can further segment your email lists to provide more content related to historical interests. If you own a pet store, for example, and send out an email with new products available for all types of pets, you can make some assumptions that anyone who clicked on the dog food product likely owns a dog. Those people should be put into a separate email segment so future emails related to dog products get sent to them, ensuring they don’t get emails about rabbit food.
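The pet-store segmentation described above amounts to filtering a click log by product category. A minimal sketch, with an entirely made-up click log and a hypothetical `build_segment` helper:

```python
# Hypothetical click log: (subscriber email, product category clicked)
clicks = [
    ("ana@example.com", "dog-food"),
    ("ben@example.com", "rabbit-food"),
    ("ana@example.com", "dog-toys"),
    ("cam@example.com", "dog-food"),
]

def build_segment(click_log, category_prefix):
    """Collect subscribers whose clicks suggest a shared interest,
    e.g. everyone who clicked a dog-related product."""
    return {email for email, category in click_log
            if category.startswith(category_prefix)}

dog_owners = build_segment(clicks, "dog-")
# Future dog-product emails go only to this segment; the
# rabbit-food shopper never receives them.
```

In practice most email platforms expose this as a saved-segment or tag feature, but the underlying logic is exactly this filter over engagement history.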
Now that you have a good understanding of how to ladder your testing to get deeper insights on your audiences, it’s worth noting there is one more reason A/B testing never has a finish line…change.
Whether we like it or not, the landscape is always changing
People’s preferences, the competitive landscape, your offering, and your brand equity all change over time, and what worked for three months can suddenly stop being a reliable tactic. Testing must continue in order to stay aligned with a new generation of potential customers.
What does it all mean?
As a business, it can be daunting to build A/B testing methodologies into your everyday marketing efforts when getting even one set of creative assets together is a labor of love. The uncertainty of testing, and of allocating resources to something different from what you already “know” to be effective, can be intimidating.
As you embark on your own A/B testing journey, what you think works might ring true for you now, but you will never truly know unless you test those assumptions. This is why it is beneficial to always be testing your marketing efforts: it pushes you to better understand what your audiences want. Those who “know” their audience have a much better chance of establishing relationships with them, and through them generating sales, conversions, or whatever metric they establish as “success.”
Challenge your assumptions and your “truths.” The best marketing sits forever on the horizon.