Using A/B Testing to Boost Marketing Results


One of the most critical questions a marketer has to answer is what makes customers take action. What makes someone open a marketing email, click on a website and ultimately make a purchase? Rather than just guess and hope for the best, smart companies will use what's known as A/B or split testing to find out exactly what drives conversions in their marketing campaigns.

When you run an A/B test, you're comparing two different versions of a campaign — whether it's a marketing email, a banner ad or just a website page — to see which one is more effective with your target audience. Mohita Nagpal, a marketing specialist and author of a Visual Website Optimizer (VWO) blog post about A/B testing, compared the process to a scientific experiment that requires rigorous testing of a hypothesis.
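
To make the mechanics concrete, here is a minimal sketch (in Python, with invented function and experiment names that are not from the article) of how a site might deterministically split visitors between the two versions, so each person consistently sees either A or B:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "footer-links-test") -> str:
    """Deterministically bucket a visitor into variant 'A' or 'B'.

    Hashing a stable visitor ID (plus the experiment name) means the same
    person always sees the same version, which keeps the comparison clean.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100        # a number from 0 to 99
    return "A" if bucket < 50 else "B"    # 50/50 split between the versions

# The same visitor gets the same answer on every request
print(assign_variant("visitor-12345"))
print(assign_variant("visitor-12345"))
```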

"Do some background research by understanding your visitors' behavior using Google Analytics or any other analytics tools," Nagpal said. "The next step is to construct a hypothesis. An example could be, 'Adding more links in the footer will reduce the bounce rate.' Then, test out the hypothesis [by comparing] the original version against this new version without the footer."

In an infographic accompanying her VWO blog post, Nagpal outlined a few basic steps to running a split test (a simple code sketch of the analysis step follows the list):

  • Make a plan. Determine your goal, such as improving conversion rates or getting more repeat purchases.
  • Pick a variable. Based on your research, choose an element of the site or campaign's A version to alter in the B version.
  • Run your test. Roll out the two different versions to your test groups for a period of up to two months and collect data on how many users took action.
  • Analyze the results. If you found low conversions on one or both versions, determine which element — copy, calls to action, images, etc. — may have caused friction or prevented users from following through. This is the element you will need to adjust when you run the final campaign. You should also look at your test as a whole to make sure your results are sound. A poorly constructed test or one with too many variables may produce a misleading outcome.
  • Implement changes, then repeat the test. Running the test again in a few months will either prove that your changes worked, or show that there was another factor affecting your initial results.
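
As promised above, here is a hedged sketch of the "analyze the results" step, assuming you have logged how many users saw each version and how many of them converted; the counts are invented purely for illustration:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who took the desired action."""
    return conversions / visitors if visitors else 0.0

# Invented counts: A is the original, B is the version with the change
results = {
    "A": {"visitors": 4_980, "conversions": 199},
    "B": {"visitors": 5_020, "conversions": 263},
}

for variant, data in results.items():
    rate = conversion_rate(data["conversions"], data["visitors"])
    print(f"Version {variant}: {rate:.2%} conversion rate")
```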

Split testing can be a very helpful tool, but if you don't use it properly, you may end up with misleading results. For instance, some marketers make the mistake of making versions A and B too different from each other. If you really want to drill down on the specific factors that lead to higher conversion rates, you should test only one element at a time, said Anil Kaul, CEO of intelligent analytics company Absolutdata. Yes, it will take longer, but you'll get a clearer, more useful data set that can better inform future campaigns.

"If you change your subject line and at the same time you change your CTA [call to action], it's difficult to determine which one of the parameters contributed to the most conversions," Kaul told Business News Daily. "By testing one parameter, you get a clear picture of the changes you need to make and which one would be the most optimized [version]."

You should also make sure you're running your test long enough to get useful results. Kaul noted that, to get an accurate reflection of what will happen when you launch the final campaign, a good A/B test should run for at least seven days. In most cases, one week is long enough to reach 95 percent statistical significance, but if you haven't reached that point, continue running the test until you do.

"One should only be certain whether option A is better than option B when a certain level of statistical significance has been achieved," Kaul said. "Testing can only prove to be impactful when you stick to the numbers.

"Stopping a test too early will distort the results, and decisions based on incomplete data are almost always bound to fail," Nagpal added.

For more information on statistical significance and how to calculate it, see HubSpot's blog post on the topic.
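
The article doesn't prescribe a particular significance calculation, but a two-proportion z-test is one common way to check whether a difference like the one in the earlier sketch clears the 95 percent bar; treat this Python snippet as illustrative rather than definitive:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing the conversion rates of versions A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Same invented counts as the earlier sketch
z = two_proportion_z(199, 4_980, 263, 5_020)

# Two-sided p-value from the standard normal distribution
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.4f}")  # significant at the 95% level when p < 0.05
```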

Ready to run your A/B test? Here are a few tips to make your experiment go smoothly.

Get the designs right. No matter what variables you're testing with your experiment, it's important to make sure both versions have seamless, visually appealing designs. This will rule out overall design as a factor and make sure that end users are focused on the elements you want to test, such as different images or ad copy.

"Make sure that your versions are mobile-ready, functional and provide a great user experience," said Leeyen Rogers, vice president of marketing at online form builder JotForm. "Buttons should be the right size and easily clickable from any device. Fine print and all text should be readable and the design should make the call to action clear."

Eliminate "noise." Nagpal said that "noise" is any outside influence that skews your data. For instance, your company launches a campaign for a free e-book that receives 1,000 downloads in a month, but you realize that one-third of the leads came from a group of students at a certain university. Since these students were not part of your core target audience, they are "noise," she said.

"Though a lot of these factors are beyond your control, you can do a few things to control noise at your end," Nagpal said. "Segmenting your data will help in this regard. Using an analytics tool like Google Analytics will help track which elements or pages are working and which are underperforming. This means you can identify the pain points for users that might hinder engagement or conversions, and quickly plug holes in your marketing strategy."

Similarly, Rogers noted that some of the data you collect isn't statistically significant, so it's important to look at it with "a healthy dose of common sense."

"To illustrate an extreme example, just because it looks like more people clicked 'sign up' on an ad that had a typo doesn't mean that you should include the typo," Rogers said.

Focus on conversions beyond click-through rates. Split testing is most commonly used for email marketing campaigns, to see which version of an email results in the most click-throughs. Kaul noted that it's just as relevant to analyze conversions that happen outside the email so you can properly gauge your users' experience with your website.

"It's important that the message you give in your email is consistent with the message on your website," Kaul said. "If you're promising your visitors a special deal, and that deal isn't perfectly apparent on your website, then you're going to lose customers. The same can happen if your email doesn't echo the look and feel of your website. Visitors might get confused, and wonder if they've landed on the correct page. [If you] find that one email gets more click-throughs than the other, but doesn't result in as many conversions ... do more testing to see if you can get an email that not only results in higher click-throughs, but also higher conversions."

Don't stop testing. When you've thoroughly analyzed your A/B testing data, move forward with the campaign that makes the most sense. However, remember that your conclusions aren't set in stone — it's important to keep running tests over time to see if changing trends affect your results.

"A/B testing shouldn't be a one-time experiment," Kaul said. "Use it to continually adjust and improve your marketing."

"What was in style a couple years or even months ago may not be in style now," Rogers added. "Different design trends may lose popularity, or something more effective or different may come to be preferred. Testing and improving is a job that is never fully done."

Nicole Fallon Taylor

Nicole received her Bachelor's degree in Media, Culture and Communication from New York University. She began freelancing for Business News Daily in 2010 and joined the team as a staff writer three years later. She currently serves as the assistant editor. Reach her by email, or follow her on Twitter.