Tag(s): Sales Demand Generation
A/B testing emails can be critically important for understanding and improving the results of your email marketing. Unfortunately, for many marketers this functionality isn't native to their email platform. Many of our customers who don't have the Enterprise tier of HubSpot run into this problem. That doesn't mean they can't benefit from A/B testing; it just takes a little more elbow grease to make it happen.
A/B split testing is a great way to determine the best promotional and marketing strategies for your business. The idea is fairly straightforward: you test a control (Version A) against a variant (Version B) and measure which is more successful. In other words, you run two versions of the same asset, whether it be an email, call-to-action, or landing page, and see which one performs better. Most often, the metrics you measure are click-through rate and conversion rate.
This method of testing provides insight into what your visitors, prospects, and even customers prefer to experience in your marketing. You can then hone in on what works and optimize further to yield greater, more qualified results.
There are two primary ways we recommend our clients run these A/B tests, depending on the size of the list receiving the email and their historical email marketing conversion rates. Generally speaking, we follow Hiten Shah's rule of thumb that a sample needs to reach 100 conversions to achieve statistical significance, and we want to ensure our test will get there.
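To turn that rule of thumb into a concrete number, you can work backward from your historical conversion rate. This is a minimal sketch (our own helper, not a HubSpot feature) that estimates how many recipients each variant needs so that, at a given conversion rate, you can expect roughly 100 conversions:

```python
import math

def recipients_needed(conversion_rate, target_conversions=100):
    """Estimate recipients per variant to expect `target_conversions`
    conversions at the given historical conversion rate."""
    if conversion_rate <= 0:
        raise ValueError("conversion rate must be positive")
    return math.ceil(target_conversions / conversion_rate)

# Example: at a 2% historical conversion rate, each variant needs
# about 5,000 recipients to expect 100 conversions.
print(recipients_needed(0.02))  # -> 5000
```

If the list you plan to test on is smaller than twice that number, a split test of this kind is unlikely to produce conclusive results.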
The simplest approach is a 50/50 split of the entire email list, where one cohort receives Version A and the other Version B. A slightly more complex but sophisticated method is to send Version A and Version B each to a smaller percentage of the list, for example 10% each, and then send the winning version to the remaining 80% that has yet to receive the email. Just make sure those smaller test cohorts are large enough that you can draw statistically meaningful conclusions from the results.
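The 10/10/80 approach above can be sketched in a few lines. This is an illustrative example, not part of any email platform's API; the function name and the use of a fixed seed for a repeatable shuffle are our own assumptions:

```python
import random

def split_for_ab_test(email_list, test_fraction=0.10, seed=42):
    """Shuffle the list, carve out two equal test cohorts, and keep
    the remainder to receive the winning version later."""
    shuffled = email_list[:]  # copy so we don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)
    n = int(len(shuffled) * test_fraction)
    cohort_a = shuffled[:n]        # receives Version A
    cohort_b = shuffled[n:2 * n]   # receives Version B
    remainder = shuffled[2 * n:]   # later receives the winner
    return cohort_a, cohort_b, remainder

# Example: a 1,000-contact list split 10/10/80.
contacts = [f"user{i}@example.com" for i in range(1000)]
a, b, rest = split_for_ab_test(contacts)
print(len(a), len(b), len(rest))  # -> 100 100 800
```

Shuffling before splitting matters: slicing an unshuffled list could bias each cohort toward however the list happens to be sorted (signup date, alphabetical order, and so on).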
Matthew Buckley is a former New Breeder.