Say you have a theory that changing headline copy will increase conversions on one of your landing pages. You'll probably have a couple of different ideas for what changes to make. How do you choose which one to try first?
If you’re doing A/B testing, you’ll have to try out alternates one at a time and then wait to compare results. However, with multivariate testing, you can try all variations at the same time and get more conclusive results.
What is Multivariate Testing?
A typical experiment involves changing a single variable and measuring how that change performs. To get accurate results, the changed version has to run under the same conditions as the original, which is why A/B split testing, which pits one variation against a control, is commonly used for website optimization.
Multivariate testing is a more advanced version of A/B split testing in which you can test multiple variations at the same time.
Normally, when you’re making a decision to change something, you have more than one idea for how to do so. Being able to test them all simultaneously allows you to draw conclusions and make improvements faster.
Without a tool to help you run multivariate tests, they can be difficult to execute successfully because every variation you add increases the complexity of the experiment. With more things to measure, the data sets become more complex, and if you can't ensure all variations run under consistent conditions, the results won't be accurate.
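One practical part of keeping conditions consistent is making sure each visitor is assigned to a variation at random but then always sees that same variation on repeat visits. As a rough illustration (this is not from the article, and the function and parameter names are hypothetical), a deterministic hash-based bucketing sketch might look like this:

```python
import hashlib

def assign_variation(visitor_id: str, variations: list[str]) -> str:
    """Deterministically bucket a visitor into one variation.

    Hashing the visitor ID means the same visitor always lands in the
    same bucket, so each variation runs under consistent conditions.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# Example: split traffic across a control and two headline variations.
buckets = ["control", "headline-a", "headline-b"]
variation = assign_variation("visitor-12345", buckets)
```

In practice, a marketing automation platform handles this assignment for you; the point is only that the split must be both random across visitors and stable per visitor.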
However, multivariate testing, or adaptive testing as it's sometimes called, is becoming an increasingly common feature of marketing automation platforms.
Best Practices for Experimentation
Even when using software to help you run your multivariate tests, there are still some best practices you should follow to maintain the integrity of your data.
Change one variable at a time
The experiments you run should be built around a single hypothesis, such as “I think changing the headline on this landing page will better position the offer and increase conversions.” So, the only variable that you’d change to test that hypothesis would be the headline.
If you change not just the headline but also the form placement across different variations, you won't be able to say definitively which change drove the improvement. That makes it difficult to draw conclusions from your experiment that you can apply to the rest of your marketing efforts.
Limit the number of experiments you run simultaneously
If you run too many experiments within your company at the same time, they can inadvertently affect each other's results, even if they're meant to be separate experiments.
Design your experiment around the goal you want to achieve
The variable you change should be determined based on an analysis of how your marketing is performing in relation to your goals.
Start by identifying your goal and then work backward from there to identify what factors contribute toward achieving that goal and which of those factors would have the biggest impact on your goal if changed.
If you want to increase organic traffic, experimenting with your webpage templates won’t have a significant impact, but optimizing pages around different keywords or restructuring how your content uses keywords may have an impact.
Give time for results to become conclusive
Whenever a change is implemented, there’s often an immediate spike in results. However, that’s just a reaction to a change occurring and not necessarily indicative of the lasting impact.
Don't make assumptions based on those initial results. Instead, be patient until your experiment has run long enough for the results to be conclusive.
For website experiments, that normally means you need to determine how many page visits you need for the sample size to be sufficient. If your website gets a thousand visits a day, the way the first 10 visitors to a new landing page respond won’t accurately reflect how it’ll perform with your entire audience.
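To make "sufficient sample size" concrete, here is one common way to estimate the minimum number of visits each variation needs before a difference in conversion rate can be trusted. This is a standard two-proportion z-test approximation, not anything specified by the article, and the function name and default values are illustrative:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline, expected, alpha=0.05, power=0.8):
    """Estimate the minimum visits per variation needed to detect a
    change in conversion rate from `baseline` to `expected`.

    Uses the normal-approximation formula for comparing two
    proportions at significance level `alpha` with the given `power`.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (baseline + expected) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(baseline * (1 - baseline)
                                      + expected * (1 - expected))) ** 2
    return math.ceil(numerator / (expected - baseline) ** 2)

# Detecting a lift from a 5% to a 6% conversion rate requires roughly
# 8,000 visits per variation, so at 1,000 visits a day split across
# several variations, the test needs to run for weeks, not days.
n = sample_size_per_variation(0.05, 0.06)
```

The takeaway matches the article's point: the first handful of visitors tells you almost nothing, and the smaller the improvement you're trying to detect, the longer the experiment has to run.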
To maximize the results of your marketing strategy, you should always be running tests and experiments. There’s always room for improvement, and the successful conclusion of one test often opens the door for new experiments to take place.
Software has greatly improved marketers' ability to run these experiments. Tools like HubSpot Marketing Hub Enterprise enable marketers to test five variations at a time and will even automatically serve the best-performing variation once conclusive results have been determined.
Guido is Head of Product and Growth Strategy for New Breed. He specializes in running in-depth demand generation programs internally while assisting account managers in running them for our clients.