A/B Testing Facebook Ads with Python – Split testing for eBook landing page [Part 1]

It has been more than 2 years since I’ve written anything over here. Feels great to write again! This is Part-1 of my series on data-driven A/B testing of Facebook ads using Python.

I’ve run thousands of campaigns from scratch, and often ideation and scalability were just one part of it. The other part that marketers underrate is the quantitative and qualitative assessment of success/failure metrics.

A/B testing and subjectivity

Questions like “What if you’d selected a green coloured CTA as opposed to a blue one?” often bring in subjectivity. And most of the conversion frameworks that we have revolve around that subjectivity (emotional SWOT analysis for image selection, for example).

There’s nothing wrong with looking at heatmaps, scrollmaps, and the other generic KPIs or metrics that most marketers use. But it gets subjective, and it pushes the load of A/B testing onto the observer. If the observer chooses to be extremely subjective and vague, the entire campaign could fail due to subjective, poor decision making.

Over the course of time, data has helped us split monolithic, compartmentalised marketing into domain driven, collaborative and attributable marketing.

A huge factor in marketers moving into probabilistic, domain driven marketing was the ability to understand data. With probabilistic marketing, we reduce the clutter by filtering out low volume marketing initiatives, dividing them into various domains and using data driven metrics to determine success and failure.

I plan to cover these things in detail in a future blog post with 3-5 actionable examples. But, for this blog post, let’s stick to A/B testing with Facebook ads using Python and kick start data driven A/B testing.

Getting started with A/B Testing on Facebook ads

If you wish to understand A/B testing beyond the “this colour or that colour” theory, the paper on A/B testing fundamentals by Ron Kohavi of Microsoft is pure gold. He also gets into multivariate testing, which is also on my list of topics to cover with Facebook ads in a future blog.

Our goal here is to be able to do A/B testing without using expensive tools. And, Python is an excellent programming language for that!

Why the hell should a marketer learn programming?

Because growth hacking isn’t driven by templates. The tools around you are extremely limited in what they can do. Even when they are helpful, they cost you $,$$$s to show you generic stuff. Sure, they integrate well with your automation tools and CRMs, but they limit your creativity and what you can do with them.

Another reason for you to learn programming is to become a truly data driven marketer. You often run your ads and other campaigns in isolation. For those involved in high end B2B marketing, integrating A/B test results with what’s happening in real time provides insights that otherwise require subjective decision making (which is often flawed).

A/B testing is just an example, but learning programming can help you extend your marketing capabilities by:

– Automated benchmarking of competitors

– Determining precise segments

– Identifying the best audience for creating a lookalike audience

– Identifying customer intents by leveraging social data

and much more.

Coming back to A/B testing, we will use Python here to determine which of our ads we should continue and expand for the eBook’s promotion.

Our Goal  

Our goal here is to generate and increase signups for an eBook. This is a fairly common strategy in B2B/B2C marketing where single touch attribution can no longer convert customers on its own. You show your target audience a piece of locked content, ask them to sign up, push them down your automation driven funnels and nurture them to conversion.

We have two landing pages with differently coloured Calls to Action. One follows an aggressive approach towards increasing conversion; the other takes an authority driven approach. We want to understand which of these would lead to a better conversion rate. We have been running one ad before, and the conversion rate we’ve observed is around 10%.

Our goal is to understand whether switching to a colour theme suggestive of aggression would lead to higher conversions.

Setting up A/B testing with Facebook ads using Python – Framework

Here’s what we are going to do here:

  • Set up the experiment (treatment and control groups)
  • Run the ads and get data from Facebook Ads Manager
  • Evaluate the distribution of results between these two ads
  • Evaluate statistical variables
  • See how sample size impacts the A/B test results

Note: This data is from an actual A/B test we ran. So, I won’t be able to share the plan for the actual ads, but I will share highly anonymized information from my Facebook Ads Manager.

I am assuming you’ve already done a great job identifying your target audience and you are not just playing around. Still, there are a few things that you should have in place before we can leverage A/B tests to produce empirical data:

  • Optimized ad copies for maximum impact – There’s no point in running Facebook ads with a generic “We are so awesome, download my eBook” ad. Identify real pains and talk about how this eBook will help resolve them.
  • Make sure to have one design that you can A/B test on colour theories. But at the same time, your ad image should be very impactful.
  • Encrypt your UTM tags – Some marketers keep their UTM tags so identifiable that it gets easy for someone to write a Python script and generate the whole hierarchy of their marketing automation. Don’t do that! (See the sketch after this list for one way to obfuscate them.)
  • At the end, don’t just blindly launch your ads; use https://www.facebook.com/ads/tools/text_overlay and see if the text to image ratio is appropriate.
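On the UTM point above, here’s a minimal sketch of one way to obfuscate UTM values, assuming a simple salted-hash scheme. The salt, function name, and example tag below are all hypothetical; keep a private mapping from token back to campaign name on your own side.

import hashlib

def obfuscate_utm(value, salt="my-secret-salt"):
    # Hash the UTM value with a private salt so outsiders can't
    # reconstruct your campaign hierarchy from the URL alone
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return digest[:12]  # short, URL-friendly token

print(obfuscate_utm("ebook_landing_aggressive_cta"))  # hypothetical tag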


Getting back to this Facebook ad, our goal is to see if we can achieve a 13% conversion rate as opposed to the 10% conversion rate that we’ve already observed from an ad campaign that was never A/B tested.

The way we want to achieve this is by changing the eBook landing page we are using for these ads. This is a Middle of the Funnel exercise where we have a deeply intent filtered audience, and we want to see how well we can push them further down the funnel, increasing their predictive lead scores. The goal is to eventually make them a Sales Qualified Lead (SQL).
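Before committing budget, it’s worth estimating how many landing page views each variant needs before a lift from 10% to 13% can be detected reliably. Here’s a minimal sketch of a standard two-proportion power calculation; the use of statsmodels and the alpha/power thresholds below are my assumptions, not something prescribed by Facebook.

import statsmodels.stats.api as sms

# Effect size for moving from a 10% to a 13% conversion rate
effect = sms.proportion_effectsize(0.13, 0.10)

# Views needed per variant at the conventional alpha=0.05, power=0.8
# (both thresholds are assumptions; tighten them if the decision is costly)
n_per_variant = sms.NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative='two-sided'
)
print(round(n_per_variant))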

Data after running the A/B test

So, here’s an export from my Facebook Ads using two variants of the eBook landing pages.

[Figure: Facebook ad A/B testing data export]

Jargon aside, let’s load this data into Python.

In order to use this code, just rename the file you’ve downloaded to “ab_testdata.csv” so that it works perfectly with the code below.

Loading Facebook Ad AB test data into Python

To load this data in Python, use the following commands:

import pandas as pd
ab_testdata = pd.read_csv("ab_testdata.csv")

ab_testdata.csv is the CSV file that you get after exporting data from your Ads Manager.

Now, when you enter ab_testdata in your Python console, you will see the data table as shown at the start of this section.
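If you want to sanity-check the import first, a quick preview helps. This assumes your export contains the 'group' and 'converted' columns used in the rest of this post; Facebook’s export column names vary, so rename yours if needed.

ab_testdata.head()                     # first few rows of the export
ab_testdata['group'].value_counts()    # rows per variant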

Great! That’s your first step towards being a data driven marketer.

Let’s now run an exploratory data analysis over these ad results and check the following group-wise:

  • Total converted
  • Total landing page views
  • Effective conversion rate

But how would you generate a table that quickly shows you the number of conversions?

If you have done this in Excel or SQL before, you would know that you need to generate a pivot table. But if you haven’t done it before, there’s no need to worry. A pivot table is basically a summary of your data. And it takes less than three lines to generate one.

Let’s build a pivot table with three columns: converted, total, and rate (the conversion rate).

Here’s the code that you can use:

import numpy as np

# total conversions per group
ab_testsummary = ab_testdata.pivot_table(values='converted', index='group', aggfunc=np.sum)
# total landing page views per group
ab_testsummary['total'] = ab_testdata.pivot_table(values='converted', index='group', aggfunc=lambda x: len(x))
# conversion rate per group (pivot_table defaults to the mean of the 0/1 'converted' column)
ab_testsummary['rate'] = ab_testdata.pivot_table(values='converted', index='group')
ab_testsummary

The code above generates the following pivot table

[Figure: Facebook ad A/B testing pivot table]
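As an aside, if you prefer groupby over pivot tables, the same summary can be built in a single chained call. This is just an equivalent sketch; the rest of the post doesn’t depend on it.

# Equivalent summary using groupby/agg instead of pivot_table
ab_testsummary = (
    ab_testdata
    .groupby('group')['converted']
    .agg(converted='sum', total='count', rate='mean')
)
ab_testsummary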

Note that our actual test results with the variant came in lower than the control: the control group converted at 12% in this test, while the modified landing page converted at ~11%.

Now, don’t just jump on the bandwagon to kill your ad yet!

Although we see that the variant’s conversion rate is actually down by one percentage point, our goal is to generate enough evidence to say that one design is better than the other.

Looking at our data, we have three different options for evaluating statistical significance:

  • Evaluate assuming a Normal distribution
  • Evaluate assuming a Poisson distribution
  • Evaluate assuming a Binomial distribution

Normal vs Binomial vs Poisson distribution for A/B testing

If you are new to this, here’s what a normal distribution looks like:

[Figure: Normal vs Poisson vs Binomial distribution for A/B testing Facebook ads]

We call it a normal distribution because it represents the kind of distribution we commonly see occurring naturally. IQ and height distributions are really good examples of where you can actually use the normal distribution to establish statistical significance.

Clearly, we can’t use the Normal distribution in our situation, as it has nothing to do with dual outcome scenarios (click or no click).

The Poisson distribution, on the other hand, focuses more on time intervals and the rarity of events. But its approximation runs from zero to infinity.

In this case, as we have a finite number of trials (landing page views) in our dataset, we need a distribution that is defined between zero and that number of trials, i.e. the Binomial distribution.
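To make that concrete, here’s a minimal sketch of what the Binomial distribution of conversions looks like under a 10% baseline rate. The 1,000 views below is a placeholder; plug in your own totals from the pivot table.

from scipy.stats import binom
import matplotlib.pyplot as plt

# Placeholder numbers: n landing page views, baseline conversion rate p
n, p = 1000, 0.10
k = range(n + 1)

# Probability of seeing exactly k conversions out of n views
plt.bar(k, binom.pmf(k, n, p))
plt.xlim(50, 150)  # zoom in on the bulk of the distribution
plt.xlabel('Conversions out of 1,000 landing page views')
plt.ylabel('Probability')
plt.show()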

So clearly, you can go ahead with the Binomial distribution, which gives you a much clearer picture of the A/B test.
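As a teaser for where this series is heading, here’s a minimal sketch of one way to test the variant against the control rate with an exact binomial test. The group labels 'control' and 'variant' are assumptions; substitute whatever labels appear in your own export.

from scipy.stats import binomtest  # SciPy >= 1.7

# Pull the counts from the summary table we built earlier
# ('control' and 'variant' are assumed group labels)
control_rate = ab_testsummary.loc['control', 'rate']
variant_conversions = int(ab_testsummary.loc['variant', 'converted'])
variant_total = int(ab_testsummary.loc['variant', 'total'])

# Could the variant's conversions plausibly have come from
# Binomial(variant_total, control_rate)?
result = binomtest(variant_conversions, variant_total, control_rate)
print(result.pvalue)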

There’s an entire science behind this decision, starting from the distribution and going all the way to statistical tests.

So far, we learned two very important fundamentals of A/B testing Facebook ads:

– How to import ads data using Python

– How to select the right distribution for A/B testing assessment

I will end Part 1 here, and will cover the rest of the testing in future blogs.

Meanwhile, if you have any questions, feel free to reach out to me on Twitter (I’m @parinfuture).

About the author

Parikshit Joshi

I am a coffee addict who has lost count of consumption. Apart from this addiction, I have a deep passion for data and connected architectures. I ingrain technology into business processes to bring next level disruptive solutions.

