
What To Optimize

by Sam Tomlinson
July 16, 2023

Recently, I’ve received a number of questions from readers on the topic of optimization priority – basically, how do you prioritize your tests? When does a top-of-the-maze test make sense vs. lead capture or sales optimization? Does it make sense to keep pushing new creative tests if you already have high-performing ads, or should your focus shift to something else (landing page, audience, bidding strategy)? 

My framework for approaching this question breaks down into three principles: 

Principle 1: The myth of the better thing

There’s a pervasive belief among marketers that we can always make a better thing: a better evergreen ad, a better landing page, a better offer, a better purchase/lead capture experience, etc. 

It’s a myth. 

At some point, you’ve made the best ad you’ll ever make. Testing against that ad, mathematically, is a losing proposition. 

To illustrate why, consider the following: 

The performance of *any* marketing asset tends to follow a normal distribution (the familiar bell curve): the majority (~68%) of instances (ads, landing pages, etc.) will perform within 1 standard deviation of the mean, the vast majority (~95%) within 2 standard deviations, and nearly all (~99.7%) within 3 standard deviations. 

This is reinforced by the Law of Large Numbers, which states that as a sample grows, its mean approaches the mean of the population it’s drawn from. From an advertising standpoint: as the number of ads (or landers, or audiences) you test grows, their average performance will approach the average performance of all ads (or landers, or audiences) on a given platform. 

Practically, that means that if you have a high-performing ad (we’ll call it 90th percentile), then for every 100 ads you test, only about 10 will perform better; the other 90 will perform equivalently (1) or worse (89). The same applies to landers, audiences, etc. 
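If you want to see that intuition in numbers, here’s a minimal simulation sketch. It assumes ad performance is normally distributed and that the incumbent sits at roughly the 90th percentile; the mean and standard deviation are illustrative, not pulled from any real account.

```python
import random

# A hypothetical sketch: ad performance drawn from a normal distribution.
# The mean/standard deviation below are illustrative assumptions.
random.seed(42)

MEAN_CTR, SD_CTR = 2.0, 0.6   # assumed population CTR distribution, in %
N_TESTS = 10_000

# An incumbent at roughly the 90th percentile sits ~1.28 sigma above the mean.
incumbent_ctr = MEAN_CTR + 1.28 * SD_CTR

wins = sum(
    1 for _ in range(N_TESTS)
    if random.gauss(MEAN_CTR, SD_CTR) > incumbent_ctr
)
print(f"New variants that beat the incumbent: {wins / N_TESTS:.1%}")
# Prints roughly 10% -- and the better your incumbent, the worse these odds get.
```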

The better your brand is – the higher up the performance mountain you’ve climbed – the more likely it is that the next step you take (i.e. the next test you run) will leave you lower on the mountain than when you began. 

Principle 2: The cost of testing

The natural reaction to the above from most marketers is, “Great, that’s 10 better ads I can create!” Marketers, owners & operators are optimistic by nature. On the whole, that’s a good thing, as it drives us to take risks + push forward despite headwinds. 

But in the case of testing, it can blind us. 

Take the following two examples:

In the first example, creative testing results in substantially better performance for the brand overall (as measured by Cost/Conversion), even though 3 of the test creatives perform worse than the original; the discovery of a “standout” creative (#5) propels this test from mediocre-to-failure to success. 

Put another way, if this advertiser had not run any tests and simply used the “original” creative for the full spend ($61,053), the brand would have received 715 conversions – 63 fewer than it actually obtained in this example. Testing an average-to-above-average ad can make a ton of sense. 

But, what happens when you already have a high-performing ad?

In the second example, the brand starts with an extremely high-performing “original” ad (~91st percentile), and the test actually *costs* the brand 459 conversions (had all $61,053 in spend been applied to the original, the brand would have obtained 1,343 leads vs. the 884 actually obtained). The brand in question has an expected value of ~$324 per lead (closed deals are worth $1,012 in net present dollars and convert at a ~32% rate), meaning this test had a cost of $148,642.  
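The math behind that figure is simple enough to sanity-check. Here’s the arithmetic as a short Python snippet, using the figures above (the variable names are mine):

```python
# Arithmetic behind the cost-of-testing example above
# (figures from the example; variable names are illustrative).
deal_value = 1_012        # net present value of a closed deal, in dollars
close_rate = 0.32         # lead-to-close rate
value_per_lead = deal_value * close_rate              # ~= $323.84

conversions_without_test = 1_343   # all $61,053 behind the original ad
conversions_with_test = 884        # what the test actually produced
lost_conversions = conversions_without_test - conversions_with_test  # 459

cost_of_test = lost_conversions * value_per_lead
print(f"Value per lead:   ${value_per_lead:,.2f}")    # $323.84
print(f"Cost of the test: ${cost_of_test:,.2f}")      # $148,642.56
```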

This is a concrete illustration of the principle from Principle 1: the higher up the performance mountain you’ve climbed, the more likely it is that the next step takes you lower, not higher. 

The above is broadly applicable to anything you’re testing – whether that’s an ad, a lander, an audience, a keyword, or an email/SMS flow, the finding holds true. 

A note: I’ve held CPM constant for illustrative purposes, and deliberately disregarded the indirect costs associated with testing (ad creative, project management time, analysis time/fees, consulting fees, etc.), as well as the opportunity costs (more on that below). I’ve also made an assumption about continuity of performance (ad performance does degrade over time and with exposure), though in this case (a ~1 month period), that degradation is negligible assuming a broad audience.  

Principle 3: Maximizing expected return

So, what should you do? My answer: maximize your expected return on the test. This requires a bit more effort than simply running more tests, but it results in much better outcomes for your organization. 

Start with benchmarking your full-funnel performance: 

  1. Ad Creative
  2. Audience
  3. Lander

Meta makes this easy with the Ads Benchmarking Tools; if you don’t have access, there are a number of resources online that can give you an idea of where your performance falls. If you can’t find exactly what you’re looking for, pull data from your ad account. Here’s a quick-and-dirty walkthrough for using Excel to identify outliers. 
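If you’d rather run the same check in code than in Excel, here’s a rough sketch of the idea: pull ad-level cost-per-conversion from your account export and flag anything more than a standard deviation or so from the mean. The sample numbers and the ~1σ threshold are illustrative assumptions.

```python
from statistics import mean, stdev

# Quick-and-dirty outlier check on ad-level cost-per-conversion.
# The numbers below are made up for illustration; in practice, load
# them from your ad account export.
cost_per_conversion = {
    "ad_a": 84.10, "ad_b": 79.50, "ad_c": 91.20,
    "ad_d": 46.30, "ad_e": 88.70, "ad_f": 143.90,
}

mu = mean(cost_per_conversion.values())
sigma = stdev(cost_per_conversion.values())

for ad, cpa in cost_per_conversion.items():
    z = (cpa - mu) / sigma
    if abs(z) > 1.0:  # flag anything more than ~1 standard deviation out
        label = "standout (cheap)" if z < 0 else "underperformer (expensive)"
        print(f"{ad}: CPA ${cpa:.2f}, z = {z:+.2f} -> {label}")
```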

Then, forecast your actual test results. I do this in Excel, and try to keep it pretty basic (there’s an opportunity cost to forecasting, too!). 

First, in the case of a relatively average ad (~2.05% CTR) and lander (~2.00% CVR), just about any reasonably well-designed and sufficiently large test will produce a meaningful improvement: 

In this case, TEST both in whatever order you prefer (I would go creative first, then lander, but that’s me). 

However, in the case where you have an existing high performer (the ad with a 3.85% CTR), you can see how the lander test adds value (+109 conversions) to the business, while the creative test does not (-459 conversions). In this example, the choice is simple: run the lander test, and leave the creative alone. 
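Whatever tool you use, the forecast itself is just spend → impressions → clicks → conversions under each scenario, compared against the all-in-on-the-incumbent baseline. Here’s a minimal sketch of that structure in Python; the CPM, the 50/50 spend split, and the challenger rates are my illustrative assumptions, not the exact figures behind the numbers above.

```python
# Minimal test-forecast sketch: spend -> impressions -> clicks -> conversions.
# CPM, spend split, and challenger rates below are illustrative assumptions.
CPM = 35.0          # assumed cost per 1,000 impressions, held constant
SPEND = 61_053      # total budget from the example above

def conversions(spend, ctr, cvr, cpm=CPM):
    """Expected conversions from a given spend, CTR, and landing-page CVR."""
    impressions = spend / cpm * 1_000
    return impressions * ctr * cvr

# Baseline: everything behind the existing 3.85% CTR ad and current lander.
baseline = conversions(SPEND, ctr=0.0385, cvr=0.02)

# Creative test: split spend 50/50 with a roughly average (2.05% CTR) challenger.
creative_test = (conversions(SPEND * 0.5, ctr=0.0385, cvr=0.02)
                 + conversions(SPEND * 0.5, ctr=0.0205, cvr=0.02))

# Lander test: keep the winning creative, send half the traffic to a new
# lander with a hypothetically better CVR.
lander_test = (conversions(SPEND * 0.5, ctr=0.0385, cvr=0.02)
               + conversions(SPEND * 0.5, ctr=0.0385, cvr=0.025))

print(f"All-in on original: {baseline:,.0f} conversions")
print(f"Creative test:      {creative_test:,.0f} ({creative_test - baseline:+,.0f})")
print(f"Lander test:        {lander_test:,.0f} ({lander_test - baseline:+,.0f})")
```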

This same process can be applied to just about anything – audience testing, keywords, email/SMS flows, offers, etc. 

Note: I’m not saying you should never test creative if you find a banger ad. In many cases, it makes sense to know whether you’re at a local or a global maximum, so a bigger-swing test could make a ton of sense. But be sure that you’re going into the test with eyes wide open and crystal-clear expectations. 

Testing isn’t free

I’m a huge proponent of testing. Done well, it can be a game-changer. But for that to be the case, it must be done strategically. That starts with doing your homework on the expected results, and prioritizing your tests based on which one(s) are most likely to maximize your (or your client’s) expected return. 

That starts with remembering that testing isn’t free, and testing the wrong things can have staggering costs for you or your client. 
