As digital marketers, we’re taught to always be testing: A/B test images, copy, headlines, and so on. Testing images on Facebook can be an interesting process. Often, we find that clients have used a ton of different creatives and copy variations without a clear picture of what is working and what is not.
Here are some tips on how we A/B test Facebook ad creatives to determine what works best and what doesn’t.
When testing creatives in campaigns, it’s helpful to stick to general themes. One test we might start off with is lifestyle images vs product shots, to see if one or the other tends to resonate better with the audience.
Once we determine which performs better, we’ll create more iterations of that theme to test.
Other themes we might test next: a combination of lifestyle and product, illustrations, text vs no text, animated vs still, video vs image, and so on.
In order to properly isolate and test our creative variables, we keep the headline, body copy, and description the same, and we use the same set of creatives across each ad set in the campaign.
If we have GIF or video assets available, we can test them at the same time as the images, keeping the copy the same. Down the line, we can see whether images or video performs better overall, then tweak the copy or theme on the winning format to begin the next round of testing.
The important thing to remember here is that we want to launch all of the creatives at the same time.
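To make that “one variable at a time” setup concrete, here is a minimal sketch in plain Python (not the Facebook Ads interface or API) of a test round where the copy fields are held constant and only the creative changes. The copy IDs and creative names are hypothetical placeholders.

# Minimal sketch (not Facebook's API): one test round where the copy is held
# constant and the creative is the only variable. All IDs are hypothetical.

FIXED_COPY = {
    "headline": "headline1",
    "body_copy": "text4",
    "description": "desc1",
}

CREATIVES = [
    {"media_type": "Image", "creative": "Lifestyle-5"},
    {"media_type": "Image", "creative": "Product-2"},
    {"media_type": "Video", "creative": "Lifestyle-1"},
]

def build_variants(creatives, fixed_copy):
    """Pair every creative with the same copy so creative is the only variable."""
    return [{**fixed_copy, **creative} for creative in creatives]

if __name__ == "__main__":
    # Every variant would be launched at the same time, in the same ad set.
    for variant in build_variants(CREATIVES, FIXED_COPY):
        print(variant)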
There’s also the option of using Facebook’s A/B Test Tool, which lets you test different variables against each other, such as images, audiences, or placements.
When a test runs this way, the budget is split evenly between the versions being tested.
Structured ad names are essential to staying organized in testing.
For Facebook ads, we follow the naming convention of “Media Type_Creative_Headline_Body-Copy_Description.”
For example, an ad name may look like this: “Image_Lifestyle-5_headline1_text4_desc1.” When we pull the metrics, it’s much easier to tell what media type and combination of copy we’re evaluating.
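For anyone pulling performance exports into a script, a small helper can build and parse these names consistently. This is just an illustrative sketch of the convention above (plain Python, not part of any Facebook tooling); the example values come from the ad name shown.

# Sketch of the naming convention "Media Type_Creative_Headline_Body-Copy_Description".
FIELDS = ["media_type", "creative", "headline", "body_copy", "description"]

def build_ad_name(media_type, creative, headline, body_copy, description):
    """Join the five components with underscores."""
    return "_".join([media_type, creative, headline, body_copy, description])

def parse_ad_name(ad_name):
    """Split an ad name back into labeled components for reporting."""
    return dict(zip(FIELDS, ad_name.split("_")))

name = build_ad_name("Image", "Lifestyle-5", "headline1", "text4", "desc1")
print(name)                 # Image_Lifestyle-5_headline1_text4_desc1
print(parse_ad_name(name))  # {'media_type': 'Image', 'creative': 'Lifestyle-5', ...}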
Some additional tips:
If there are multiple images that are similar, add more detail to the name, or assign numbers to each image.
Create a library of copy variations in a Google Sheet as a key for quick reference and to keep track of which ones have been used.
Once the ads have gone live, check back in to make sure they are delivering and running properly. It’s important not to make any unnecessary changes once an ad is live; we want to let it run for a few days so Facebook’s algorithm can learn and normalize before we collect data.
Sometimes Facebook will favor the spend toward a specific ad too early, before the other ads get a good chance to deliver. If that happens, we can pause the favored ad for a few days to force spend toward the others.
Once those ads have spent enough, we can determine which had the best performance. A good rule of thumb is to let each ad spend at least 2-3x the benchmark CPA before making any decisions.
There are plenty of metrics to review once all of the ads have had a similar amount of spend.
These are the initial metrics we evaluate: cost per acquisition (CPA), click-through rate (CTR), and cost per click (CPC).
Cost per acquisition is the most important factor. If an ad has spent 2-3x the benchmark CPA for Facebook without bringing in any conversions, it’s fair to say that ad wasn’t successful for that particular campaign or ad set.
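As a rough illustration of how those three metrics and the spend rule of thumb come together, here is a minimal sketch. The benchmark CPA, spend, and engagement numbers are made-up values, and the pause flag is just one way of encoding the rule above, not a Facebook feature.

# Hypothetical numbers: a $40 benchmark CPA and two ads from the same test.
BENCHMARK_CPA = 40.00

ads = [
    {"name": "Image_Lifestyle-5_headline1_text4_desc1",
     "spend": 130.00, "impressions": 21000, "clicks": 310, "conversions": 4},
    {"name": "Image_Product-2_headline1_text4_desc1",
     "spend": 125.00, "impressions": 19500, "clicks": 180, "conversions": 0},
]

for ad in ads:
    cpa = round(ad["spend"] / ad["conversions"], 2) if ad["conversions"] else None  # cost per acquisition
    ctr = ad["clicks"] / ad["impressions"]                                           # click-through rate
    cpc = ad["spend"] / ad["clicks"]                                                 # cost per click
    # Flag ads that spent 2x or more of the benchmark CPA without a single conversion.
    unsuccessful = ad["conversions"] == 0 and ad["spend"] >= 2 * BENCHMARK_CPA
    print(ad["name"], "CPA:", cpa, f"CTR: {ctr:.2%}", f"CPC: ${cpc:.2f}",
          "-> pause" if unsuccessful else "-> keep running")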
While it’s good to look at all of the ads from a high level, we also look at performance at the per-campaign level to surface any significant insights.
There are so many ways to slice and dice the data, and we look at performance from a few other angles to gather additional insights.
Something to keep in mind is not to make blanket statements about the overall results. For example: if a product image did not have good results, that does not mean product images won’t ever work for the company. It simply did not perform well in that test, and can be tested again on different platforms, with different copy, or with different audiences.
Once we determine the winners on the campaign or ad set level, we allow those to continue running and shut off the poor performers.
We use our findings to inform our next round of creatives.
For example: perhaps lifestyle imagery converted better, so we'd test more lifestyle imagery with the same image copy against the next round of new audiences.
However, if an ad has good performance, there’s no need to introduce new creatives to the campaign just for the sake of adding new creatives. We upload new creatives once we start seeing ad fatigue on the previous ads.
If we want to test the body copy, headline, or description next, we can reuse the winning creatives and change only the element we want to test.
As we start to gain insights and find creative winners, we can take these results and start testing them on other platforms.
If we were also running LinkedIn or Google Display Network ads, we could resize and reformat the initial creatives and begin testing those. But we also don’t want to assume the losing creatives from Facebook and Instagram will perform poorly on other platforms; we always want to test everything. What might not work in one place might work great elsewhere.