A/B testing, also known as split testing, pits two different versions of the same item against each other in a measurable way. The goal of A/B testing is to collect data on which version performs better. The end result is a choice between the two versions based on that data.
In fact, you can A/B test anything, as long as the results of the testing are measurable. Those results give the company valuable information, for example on customer behaviour. With A/B testing, a company can create a better experience for its customers and improve business efficiency.
How does A/B testing work?
A/B testing starts with a hypothesis about the outcome of the test. For example: “Would more people subscribe to the newsletter if the button was red instead of yellow?” Two versions of the same button are then created, one red and one yellow. Nothing changes between these versions (variants) except the colour, as it is important to test only one variable at a time to keep the results interpretable. If more than one variable is modified at a time, it may be unclear which variable actually influenced the results.
Once the content is ready and the variable to be tested is selected, the test audience is divided into two groups, one for version 1 and one for version 2 – for example, in version 1 the button could be red and in version 2 it could be yellow. The test audiences should be evenly and randomly distributed so that the groups themselves cause as little variation as possible. The next step is to monitor which version collects better results. Once enough data has been gathered, the worst-performing button can be dropped and a new variant created to test against the winner (maybe a blue button).
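The even, random split described above can be sketched in a few lines of Python. This is a minimal illustration only – the function name, the visitor IDs and the fixed seed are made-up assumptions, not part of any particular testing tool:

```python
import random

def assign_variants(visitor_ids, seed=42):
    """Shuffle visitors and split them into two equal, random groups.

    The fixed seed is only here to make the example reproducible;
    in a live test each visitor would be assigned at random on arrival.
    """
    rng = random.Random(seed)
    ids = list(visitor_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"red": ids[:half], "yellow": ids[half:]}

groups = assign_variants(range(1000))
print(len(groups["red"]), len(groups["yellow"]))  # 500 500
```

Random assignment is what keeps the two groups comparable: any difference in clicks can then be attributed to the button colour rather than to who happened to land in which group.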
Common A/B test items for websites
- Call-to-action buttons – CTA buttons can be used to test a wide range of variables, including text, position, colour and size.
- Form placement and functionality on the page – For example, how long a visitor has spent on the site before being presented with a pop-up to subscribe to the newsletter.
- Text and image on the page – Whether a visitor spends more time on the page when there is more text or a picture of a tiger instead of a cat.
- Prices, offers, etc. – Are prices and current offers sufficiently visible on the site?
- Navigation menu – Does a clearer navigation menu layout or size make visitors stay on your site longer or visit more pages?
Measuring the A/B test
In most cases, more than one metric could be used to evaluate the hypothesis being tested. However, you should always stick to one variable and one metric at a time. This keeps the measurement as accurate and easy to read as possible, without having to analyse and prioritise multiple measurements. In the red vs. yellow button example, the metric would be the number of button clicks. Clearly measurable results make decisions easier without relying on unnecessary guesswork. In the long run, the results of measurement also build a valuable database and help the company better understand its target groups and their behaviour. The key to measurement is to choose metrics that support your business objectives – for example, an increase in website traffic or an improvement in the conversion rate of advertising.
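Deciding when one variant has genuinely “won” on the chosen metric can be backed by a simple statistical check. As a sketch – the figures below are illustrative, not taken from a real test – a standard two-proportion z-test compares the click rates of the two buttons:

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, visitors_a, clicks_b, visitors_b):
    """Compare two click-through rates; returns the z-score and a
    two-sided p-value using the normal approximation."""
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative data: red button 100 clicks from 1000 visitors,
# yellow button 150 clicks from 1000 visitors.
z, p = two_proportion_z_test(100, 1000, 150, 1000)
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be random noise; with the illustrative numbers above, the yellow button's higher click rate would be a clear winner.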
Target audience for testing
In A/B testing, it is also important to consider the size of the audience. The audience should be large enough to get useful results from the test and to minimise bias. How much traffic is enough, then? This depends entirely on the company and what is being measured. When determining the optimal audience size, the numbers should be compared to the size of the company’s target groups and, for example, to the normal number of visitors to the site and the conversion rates of advertising.
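A rough way to answer “how much traffic is enough” is a textbook sample-size calculation. The sketch below uses the standard normal-approximation formula; the baseline and target conversion rates are made-up examples, and the default z-values correspond to roughly 95% confidence and 80% power:

```python
from math import ceil

def visitors_per_variant(p_baseline, p_target, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a lift from
    p_baseline to p_target (normal approximation; z_alpha ~ 95%
    confidence, z_beta ~ 80% power)."""
    variance = p_baseline * (1 - p_baseline) + p_target * (1 - p_target)
    effect = (p_target - p_baseline) ** 2
    return ceil((z_alpha + z_beta) ** 2 * variance / effect)

# e.g. detecting a lift from a 5% to a 6% conversion rate
n = visitors_per_variant(0.05, 0.06)
```

Even a modest expected lift like this one requires thousands of visitors per variant, which is why the audience size must always be weighed against the company's normal traffic levels before committing to a test.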
A/B testing as part of the evolution of marketing
Unfortunately, in most cases A/B testing is carried out only as individual tests: the better-performing version is selected on the basis of the results and development work stops there. Instead, A/B testing should be a continuous process, not limited to momentary results. The more items that are tested, the more data about customer behaviour is generated for the company. There can never be too much data about a company’s target groups and their behaviour patterns and, used correctly, this data leads to a more optimised business. For example, in ongoing A/B testing of online advertising, the winning ad is always paired with a new competitor, allowing for continuous improvement. The new ad either tests the same hypothesis with a different alternative or changes another element of the ad to be optimised. Whether it’s a website or advertising, A/B testing should be part of your marketing evolution.
In digital marketing, anything for which measurable data is available can be tested. The best thing about measurability is that it effectively eliminates guesswork and speculation. The challenge comes in how well you can exploit the data you get from testing to improve your business. Also, remember that A/B testing is an ongoing process and will never be finished. However, when done well, you can be confident that it will lead to a more efficient business process and improved conversions.