
A/B Testing in Product Management: Definition & Overview

Data is critical to the decision-making process of product managers (PMs). And there are so many ways to gather data—from qualitative feedback to usability studies.

One time-tested tool for gathering real-time user insights is A/B testing. A/B testing in product management allows PMs to test specific elements of the user experience, such as copy, user interface, or design. A/B tests are popular because they provide immediate and easy-to-analyze insights, and they reflect actual user interaction (instead of theoretical actions).

What is A/B testing in product management?

In this article, we’ll first break down what A/B testing is, then we’ll focus on how to learn A/B testing for product managers.

A/B testing definition

A/B testing (also called split testing) is an experiment meant to determine which variation of a variable drives the most impact. For example, you might run an A/B test to determine which landing page layout drives a higher click-through rate (CTR) or lower bounce rate. 

A/B tests require:

  • Two or more variations of a single variable (e.g., design, call to action, messaging) to test
  • A randomly split audience (to keep the results as unbiased as possible)
  • A success metric (e.g., conversions, clicks, time on page)
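
To make those three requirements concrete, here is a minimal simulated example in Python: a random 50/50 split between two variations of one variable (CTA button color), with conversions as the success metric. The conversion rates and traffic numbers are made up for illustration; in practice your testing tool handles the assignment and measurement for you.

```python
import random

random.seed(7)

VARIANTS = {"A": "green CTA", "B": "orange CTA"}   # two variations of one variable
TRUE_RATES = {"A": 0.048, "B": 0.054}              # hypothetical rates, used only to simulate traffic
results = {v: {"users": 0, "conversions": 0} for v in VARIANTS}

for _ in range(10_000):                            # randomized audience
    variant = random.choice(list(VARIANTS))        # 50/50 random assignment
    results[variant]["users"] += 1
    if random.random() < TRUE_RATES[variant]:      # success metric: conversion
        results[variant]["conversions"] += 1

for name, r in results.items():
    print(name, VARIANTS[name], f'{r["conversions"] / r["users"]:.2%}')
```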

Why is product A/B testing important for product managers?

For the most part, A/B testing has been used in marketing and advertising to find the combination of variables that performs best. But A/B testing can be used by product managers as well.

Often, product experimentation happens in the design phase. Research participants are shown a mockup and asked to interact with it or provide feedback. With A/B testing, product managers can launch multiple versions of a layout or design to figure out which works better. Not only does this save the time of one-on-one research, it also provides objective feedback on which version performs better. With design-based research, people are reacting to a concept or idea, not a live product. With A/B testing, you gain insight into what works when the design or layout is put into the hands of real users.

How to do A/B testing in product management: 5 steps to nail your next test

Trying to find the right design for your product but running into conflicting opinions? An A/B test can provide you with tie-breaking user insights. Here are five steps to leverage A/B testing for product management:

1. Develop a hypothesis

The first step to successful A/B testing is to develop a hypothesis. Think back to when you learned the scientific method. A hypothesis is what you expect to happen. A good hypothesis is something you can prove either correct or incorrect.

Three questions to ask yourself when developing a hypothesis:

  • How do you think users will interact with each variation? 
  • What do you expect the results to be?
  • Why do you think that?

Let’s use the example of call-to-action (CTA) button color. If your current button is green, you might want to test it against a different color, like orange. Your hypothesis might look like this: “If we use an orange CTA button, we’ll see an increase in conversions, since other SaaS companies have seen orange buttons outperform other colors.”

If you don’t have a clear hypothesis or rationale for your hypothesis, it might not be an element you’re ready to test.

2. Decide if the test is worth it

Now that you know your hypothesis, think about the potential impact. Will the outcome of this test meaningfully change what you build or how users behave? If not, is there a different test you could run that might make more of an impact?

While the motto “Everything is worth testing” is a great one, it’s also worth thinking about effort vs. impact. If the test will take time and effort, but not measurably move the needle, it’s probably not worth it.

3. Design the test

Once you’ve established your hypothesis and decided it’s an impactful test, it’s time to design your test. 

You’ll need to determine the following:

  • Who is the audience for this test? This could be a subset or all of your user base or prospects.
  • How long will you run the test? Time constraints can factor into the length of your test, but you generally want to run it until the difference between variations reaches statistical significance (see the rough sizing sketch below).
  • What will the test look like? 
  • What tool will you use for testing? (Check out our A/B testing tool recommendations below.)

With these criteria in mind, you can build and deploy your A/B test.
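
If you’re not sure how long “until statistical significance” will take, a rough sample-size estimate gives you a ballpark before you commit. This is a minimal sketch using the common rule of thumb n ≈ 16 · p(1 − p) / δ², which approximates 80% power at a 5% significance level; the baseline rate, lift, and traffic numbers below are made up for illustration.

```python
import math

def sample_size_per_variant(baseline_rate: float, relative_lift: float) -> int:
    """Rough sample size per variation for a conversion-rate test.

    Uses the rule of thumb n ~= 16 * p * (1 - p) / delta^2, which
    approximates 80% power at a 5% significance level.
    """
    delta = baseline_rate * relative_lift   # absolute difference you want to detect
    return math.ceil(16 * baseline_rate * (1 - baseline_rate) / delta ** 2)

# Example: 5% baseline conversion rate, hoping to detect a 10% relative lift.
n = sample_size_per_variant(0.05, 0.10)
print(n)                           # roughly 30,400 users per variation
print(round(n * 2 / 1_500))        # ~41 days at ~1,500 eligible users per day (illustrative)
```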

4. Run the test

This is the fun part! It’s time to push your test live. Once the test is up and running, you can start thinking about other tests you might run and get those in your backlog.

5. Measure the results

Once you’ve hit the predetermined length of time—or have a large enough sample size—you can measure your test KPIs.

Since a classic A/B test pits two variations against each other and splits traffic evenly between them, you’ll simply compare one to the other. Which one drove better results?

In the case of our CTA button, we might look at two measures of success: clicks and subsequent conversions. A variation that wins on the initial measure (a click) won’t necessarily win on the downstream impact (a conversion or deeper interaction).
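
If you want to sanity-check the comparison yourself rather than rely solely on your testing tool’s dashboard, a two-proportion z-test is one common way to judge whether the gap between variations is likely to be real. The click and conversion counts below are made up, but they show how a winner on clicks can still be a statistical tie on conversions.

```python
import math
from statistics import NormalDist

def two_proportion_p_value(successes_a: int, n_a: int, successes_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Made-up results for 10,000 users per variation.
print(two_proportion_p_value(480, 10_000, 540, 10_000))  # clicks: ~0.05, borderline signal
print(two_proportion_p_value(95, 10_000, 104, 10_000))   # conversions: ~0.52, no clear winner
```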

7 product management A/B testing tools

A/B testing tools can do a lot of the heavy lifting when it comes to running the actual split test. 

Here are some of the most popular A/B testing tools product managers can use:

VWO

VWO is a conversion optimization tool that’s great for A/B testing. VWO lets you run multiple tests at the same time by allocating traffic to each test. This reduces interference and allows for cleaner data. VWO also allows you to run tests based on a user’s on-site behavior. For example, VWO can automatically enroll someone in a test once they click on an element of your site or based on the amount of time they’ve spent on each page.

A key benefit to VWO is its asynchronous code, which reduces loading times while tests are running. 

Google Optimize

Google Optimize is a great free option for A/B testing. If you have Google Analytics set up on your web-based application, you can use that data to identify areas of your product that need improvement. From there, you can test and measure different versions. Google Optimize uses Bayesian statistical methods to model the real-world performance of your experiments.
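
To give a flavor of what a Bayesian comparison looks like (this is a generic Beta-Binomial sketch, not Google Optimize’s actual implementation, and the conversion counts are made up), you can model each variation’s conversion rate as a Beta posterior and estimate the probability that one beats the other:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up results: (conversions, visitors) per variation, with a flat Beta(1, 1) prior.
a_conv, a_n = 120, 2_400
b_conv, b_n = 145, 2_400

# Each variation's conversion rate gets a Beta(1 + conversions, 1 + non-conversions) posterior.
samples_a = rng.beta(1 + a_conv, 1 + a_n - a_conv, size=100_000)
samples_b = rng.beta(1 + b_conv, 1 + b_n - b_conv, size=100_000)

# Probability that B's true conversion rate beats A's, given the data so far.
print(f"P(B beats A) = {(samples_b > samples_a).mean():.1%}")
```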

Why would you not use Google Optimize? To get the most out of the platform, you need a certain amount of technical prowess. 

Optimizely

Optimizely is a powerful tool for A/B testing that was built for enterprise organizations. Optimizely uses a visual editor to make getting your A/B test up and running simple—even without developer support. Exclusion groups allow you to safely run multiple experiments at the same time. Optimizely’s program management functionality allows you to scale experimentation and collaboration by collecting ideas, prioritizing projects, and managing experiments across your organization.

Omniconvert

Omniconvert is a conversion rate optimization tool for developers, startups, and ecommerce companies. It uses CSS and JavaScript to give you complete control over variation code. You can also reuse code between variations to make it easier to run new experiments, or leverage cookies, data layer attributes, or UTM parameters for advanced segmentation. 

A big differentiator for Omniconvert’s CRO tool (Explore) is that it uses a CDN cache bypass, allowing you to run live tests immediately. Due to its technical nature, Omniconvert works best for product teams that have a developer who can help create and deploy A/B tests.

LaunchDarkly

LaunchDarkly allows you to continuously improve your software by gathering data on the impact of new features.

Something that’s unique to LaunchDarkly’s model is that they make experimentation part of the normal delivery process. LaunchDarkly builds A/B tests into feature flags. Each time you launch a new feature, it’s wrapped in a feature flag. You can launch different variations and LaunchDarkly will identify the winning variant, then you can roll out the ideal version to all instances. 
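
As an illustration of that feature-flag pattern, here is a generic sketch in plain Python rather than LaunchDarkly’s actual SDK; the flag name, variation split, and user ID are hypothetical. Wrapping a new code path in a flag lets you serve different variations to different users and later roll the winner out to everyone.

```python
import hashlib

FLAG = "new-checkout-flow"                   # hypothetical feature flag
SPLIT = {"control": 50, "treatment": 50}     # percentage of users per variation

def variation(flag: str, user_id: str) -> str:
    """Assign this user a stable bucket (0-99) and map it to a variation."""
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    cumulative = 0
    for name, pct in SPLIT.items():
        cumulative += pct
        if bucket < cumulative:
            return name
    return "control"

def render_checkout(user_id: str) -> str:
    # The flag decides which code path this user sees; once a winning
    # variation emerges, you dial its percentage up to 100 to roll it out.
    if variation(FLAG, user_id) == "treatment":
        return "one-page checkout"
    return "multi-step checkout"

print(render_checkout("user-123"))
```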

Inspectlet

Inspectlet is a lot like the other A/B testing tools in that it’s easy to install via a short JavaScript snippet, you can build tests using a visual editor, and it tracks your experiment parameters. What’s neat about Inspectlet is that it also includes session recordings and dynamic heatmaps. These features help you answer questions like:

  • Are there sections users are spending a lot of time on? 
  • Where are they clicking? 
  • Where are they scrolling without stopping? 
  • Where are people getting confused or frustrated?

These insights can help you pinpoint areas for testing. 

AB Tasty

AB Tasty is another A/B and multivariate testing tool. The functionality is a lot like the other A/B testing tools above, with two standout features:

  1. AB Tasty sends experiment performance data directly to your analytics tool for easy analysis.
  2. AB Tasty offers an ROI dashboard that tracks the impact of your tests compared to the original. This helps you attribute gains in revenue to your experiments—making it easy to get buy-in from stakeholders for the next experiment.

Key Takeaways

If you’re a product manager, run experiments regularly. A/B test results can help you improve the customer experience in your current product and ensure new product development meets users’ expectations. Try leveraging an A/B testing tool to make experimentation quick and simple. 

Shannon Howard

About the author

Shannon is a freelance writer tackling topics like product, marketing, talent optimization, and customer education. She spent some time in both digital and growth product management and now works in customer marketing.

