
Why More Ads Is A Good Thing

by Sam Tomlinson
May 13, 2024

Let’s talk about the number of active ads in your ad account. There’s a particularly prevalent notion among agency + in-house people that having too many active ads in your Meta Account (or Google Account) is a problem. Related to this, I’ve seen (first-hand) more brands pushing to evaluate agencies/freelancers (and even in-house teams) based on the number of creatives generated each month.

This is all fundamentally flawed. 

To understand why, let’s start with three first principles: 

Principle #1: Smart bidding strategies (like all of those on Meta, and the vast majority of those on Google) leverage Machine Learning (ML) to predict, using a probabilistic model, which impressions, ads, assets and audiences are most likely to result in your desired outcome. We can define a “desired outcome” as a specific result at or below a particular cost or cost ratio. 

Principle #2: Machine Learning (ML) is – essentially – pattern recognition at scale. ML algorithms ingest vast quantities of data (for example, all the ads live on Meta or Google) to understand patterns across them, and use those patterns to improve the predictions made in Principle #1.  

Principle #3: Humans are dynamic: our preferences, likes, desires, fears, challenges, tastes, communication styles all evolve as we learn and experience more (which the internet enables + accelerates). Social media networks – and by extension, ad platforms – simply respond to these changes. 

So, what do all of these principles have to do with whether or not you should be turning on or off ads in your ad account? It turns out, everything: 

We’re playing their game. 

Principle #1 is a restatement of what I (and many others) have said for over a decade: ad platforms determine the expected value of an impression to your business based on four core factors: (1) your business’ budget + economics; (2) the individual who will be served the ad; (3) the creative available to the platform; and (4) the behavioral patterns available to the ad network (more on this one later).

At a basic level, ad platforms use those core factors to determine the Net Expected Value (NEV) of a given impression via a real-time auction. NEV can be defined as the Expected Value (EV) – Cost, and Expected Value can be defined as: 

EV = [Probability of Conversion * Probability of Click] * [Value of Conversion] * [Ad Quality]

In any ad account, cost is constrained by the smallest of three factors: (1) the overall account budget + spending limits (daily, lifetime, monthly), (2) the target (if you’re using a cost cap, bid cap, tCPA or tROAS) and (3) any set bid limit (i.e., a “Max CPC” in Portfolio bidding).
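As a quick sketch of the arithmetic, here is the EV/NEV framing above in Python. The function names and example numbers are mine, purely illustrative; they are not any platform’s actual model or API:

```python
# Illustrative sketch of the EV / NEV framing above; names and numbers are
# assumptions for the example, not any ad platform's actual model.
def expected_value(p_click, p_conversion, conversion_value, ad_quality=1.0):
    """EV = [P(conversion) * P(click)] * [value of conversion] * [ad quality]."""
    return p_click * p_conversion * conversion_value * ad_quality

def net_expected_value(p_click, p_conversion, conversion_value, cost, ad_quality=1.0):
    """NEV = EV minus the cost of the impression."""
    return expected_value(p_click, p_conversion, conversion_value, ad_quality) - cost

# A $50 conversion value, 2% click probability, 5% conversion probability,
# at a $0.04 impression cost -> a slightly positive NEV (+$0.01):
nev = net_expected_value(0.02, 0.05, 50.00, 0.04)
```

The entire argument that follows is about keeping that number above zero on every impression you pay for.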

When you consolidate this, you end up with compelling arguments for two claims: 

  1. Why you should always have (at a minimum) targets in your ad account 
  2. Why you should never turn off properly-configured* ads

Let’s take those in order: 

As an advertiser or business, your primary objective is that every impression has a positive Net Expected Value. That doesn’t mean that you’re going to get a sale on every impression (that’s not how probability works), but it does mean that the expected return on each impression should be positive in order for your advertising to produce a positive contribution margin. 

If it doesn’t, your ad platforms will end up siphoning off contribution dollars + slowing your business’s growth and/or profitability. Put another way, in a situation where your advertising does NOT produce a positive NEV, the ad platform becomes a casino. 

To illustrate what that means from a financial standpoint, let’s use a game we’re all familiar with: Roulette. 

In an American Roulette game, there are 38 total spaces (18 red, 18 black, 2 green), and a successful bet pays out 35:1. So, the expected value of $1 bet on any given number (we’re not going to get into the more complex bets) is: 

EV = [(1/38)*($35+$1)] = $0.9474

And the NEV = $0.9474 – $1 (Cost of playing) = -$0.0526

Basically, you should expect to lose 5.26% of your bet every single time the wheel is spun. And while you could place $100 on 13 and hit (thus making $3,600) on a single spin, if you repeated that $100 bet 1,000 times, you’d expect your total losses to approach ($5,260.00).
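That expected-loss arithmetic is easy to verify (a quick sketch using the numbers above; note the exact figure is about $5,263, which the rounded 5.26% figure approximates as $5,260):

```python
# American roulette: 38 pockets, a straight-up bet pays 35:1
# (so a $1 win returns $35 in winnings plus the $1 stake = $36).
P_HIT = 1 / 38
PAYOUT = 35 + 1

ev_per_dollar = P_HIT * PAYOUT       # ~$0.9474 returned per $1 bet
nev_per_dollar = ev_per_dollar - 1   # ~-$0.0526 lost per spin

# Repeating a $100 bet 1,000 times:
expected_total_loss = nev_per_dollar * 100 * 1_000  # ~-$5,263
```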

Returning to the business example: if your NEV is negative, then some portion of each dollar deployed, whether it’s 1%, 5.26% (like the above example), or 10%, is effectively lost: it goes to Meta or Google or Pinterest. To be perfectly blunt, it is in the ad platform’s best interest for your advertising NEV to be slightly negative, just as it is in a casino’s best interest for your NEV on a game to be slightly negative (there’s a reason why casinos tend to have house edges of ~2% to ~7% in most games): it’s low enough to keep you playing. 

The same is true for ad platforms: it is in their best interest to create an ever-so-slight negative expected value, such that you continue spending money on the platform. 

Now, way back in the day (when I was just a wee high school student), I wrote a series of equations that predicted the range of where a roulette ball was most likely to land based on the angular velocity of the ball + its relative position at two moments in time. The math was complicated, but it worked well enough to (effectively) eliminate ~75% of the board 75% of the time. Obviously, casinos don’t allow such technology to be used on their games, because look at what happens to the NEV when you do so: 

EV = [(1/10)*($35+$1)*(75%)] + [(1/38)*($35+$1)*(100%-75%)] = $2.9368 

NEV = $2.9368 – $1 (cost of playing) = $1.9368

Everything flips – instead of losing $0.0526 on every spin, I make $1.9368. Roulette goes from a game where the house makes money to a game where I (the player) print money. 
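Plugging the prediction into the same arithmetic confirms the flip (a sketch; the 10-pocket window and 75% hit rate are the figures from the story above):

```python
# With the prediction: 75% of spins, the bet is effectively a 1-in-10 guess;
# the other 25% of spins, it's the normal 1-in-38. Same $1 bet, 36x total payout.
PAYOUT = 35 + 1

ev = 0.75 * (1 / 10) * PAYOUT + 0.25 * (1 / 38) * PAYOUT  # ~$2.9368
nev = ev - 1                                              # ~+$1.9368 per spin
```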

I flipped the expected results by modifying the expected value expression, which is one possible option. The other option – and why targets + caps are critical – is to modify the cost while holding EV constant. That’s exactly what targets do. 

For any digital advertising platform, a target (whether soft, like a tCPA or tROAS or Cost Cap, or hard, like a bid cap or Max CPC) is a constraint on acceptable impressions. It restricts the “games” where your ad will play to the ones where NEV is positive relative to your target. 

Advertising platforms even tell you this in their bidding strategy disclosures. Take Meta’s explanation of how Highest Volume / Highest Value (the default strategy) works. 

If you want the tl;dr: we’re going to spend your entire budget regardless of performance. Any impression available with a cost less than your set daily or lifetime budget is an acceptable impression. 

But if you add another constraint – a bid cap, a cost cap, a tCPA, a tROAS – all of a sudden, that’s no longer the case BECAUSE Meta/Google/whatever now needs to determine the expected value of each impression and compare it against the target you’ve set: 

Let’s assume you’ve told Meta you want to acquire customers at a cost per acquisition of $100. Functionally, this means the following: 

Expected Cost per Acquisition = (Cost of Impression) / (eCTR * eCVR) <= $100 

We can rearrange this to the following: 

Cost of Impression <= $100.00 * (eCTR * eCVR)

This changes EVERYTHING about how ad platforms determine which impressions to serve. 
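The serve/don’t-serve decision implied by that inequality can be sketched in a few lines. The function names and example values are mine, for illustration; real platforms run this probabilistically, per auction, at enormous scale:

```python
# Sketch of the target-as-constraint check implied above; illustrative only.
def max_acceptable_cost(target_cpa, ectr, ecvr):
    """Rearranged constraint: Cost of Impression <= target * (eCTR * eCVR)."""
    return target_cpa * ectr * ecvr

def should_serve(impression_cost, target_cpa, ectr, ecvr):
    """The platform only buys the impression if it clears the constraint."""
    return impression_cost <= max_acceptable_cost(target_cpa, ectr, ecvr)

# $100 tCPA, 2% eCTR, 5% eCVR -> a max acceptable impression cost of $0.10:
serve_cheap = should_serve(0.08, 100.00, 0.02, 0.05)      # True: clears the cap
serve_expensive = should_serve(0.15, 100.00, 0.02, 0.05)  # False: too costly
```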

If you were to categorize every available impression into three buckets (and color-code them), you might do so as follows: 

  • winners (green) = impressions with an NEV well above your target
  • on-target (yellow) = impressions with an NEV at or around your target
  • losers (red) = impressions with an NEV well below your target 

What you’d notice is that the volume of red (the losers) is significantly larger than the yellow or green (on-target + winners). Logically, this makes sense: only a relatively small percentage of users are ready to buy any given thing at any given time. And while individual users may move between categories (e.g., I’m much more likely to be in the green for Titleist and much less likely to be in the green for AG1), and the relative size of each section may vary by brand (e.g., Amazon probably has a larger green bubble than your start-up), the overall principle remains consistent: there are more potential “loser” impressions than “winner” impressions.  

Now, since none of us have an unlimited budget (except, perhaps, TEMU), we’re only ever going to be able to capture a minuscule fraction of the total universe of available, targeted impressions. But what is interesting is how using (or not using) a target changes the distribution: 

Same universe of potential buyers. Same budget. Vastly different distributions. This is the math above in action. When the ad platform is given more stringent constraints, it is forced to alter the distribution of impressions provided to the advertiser in order to satisfy the constraint. Further, since the overall population of winners (green) is smaller than losers (red), ad platforms are going to deliver a greater share of “winners” to accounts with constraints (since those accounts will value those impressions higher) and a greater share of “losers” to accounts without constraints (since those accounts don’t care). 

It’s an order-of-fill problem: every ad platform’s objective is to maximize total spend on-platform while satisfying each advertiser’s constraints. The platform knows that the only way to capture as much of a constrained advertiser’s budget as possible is to give a greater share of high-value impressions; that isn’t the case with the highest volume advertiser – the platform can give them lower quality inventory and still have satisfied the constraint, while providing an acceptable return (remember from above – if the house edge is too high, people stop playing). 
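A toy sketch of that order-of-fill logic (entirely my own illustration of the mechanics described above, not anything resembling platform code):

```python
# Toy order-of-fill: the platform can only capture the constrained account's
# budget with impressions above that account's NEV floor, so those route there
# first; the unconstrained account's budget absorbs whatever is left over.
def fill(impression_nevs, constrained_slots, nev_floor):
    constrained, unconstrained = [], []
    for nev in impression_nevs:
        if nev >= nev_floor and len(constrained) < constrained_slots:
            constrained.append(nev)    # high-value inventory satisfies the constraint
        else:
            unconstrained.append(nev)  # the budget-only account takes the rest
    return constrained, unconstrained

# NEVs of the available impressions, best first:
nevs = [5.0, 3.0, 1.0, 0.5, 0.2, 0.1, 0.05]
constrained, unconstrained = fill(nevs, constrained_slots=3, nev_floor=0.5)
# constrained -> [5.0, 3.0, 1.0]; unconstrained -> everything that remains
```

The constrained account eats the best inventory; the unconstrained account still spends its full budget, just on worse impressions.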

In some cases, there are no impressions that satisfy the constraint, and so no ads are served. This is a GOOD thing. It means the constraints are working as intended. 

OK, so how does this tie into not pausing ads? 

If you’ve made it this far, you’re probably wondering how everything from above ties into pausing or un-pausing ads. 

For that, we need to add in Principle #2: Machine Learning is – fundamentally – pattern recognition at scale. Every ad platform has access to more data – more ads, more performance data, more user data, more historical data, more website data – than any individual advertiser. Meta – for instance – can predict the expected performance of my creatives far more accurately than I can, because Meta has access to the performance data of millions of creatives across billions (or trillions) of impressions. Meta (or Google’s) machine learning models can identify and match patterns in my ads to similar patterns across their entire library of ads, then use all of that data to determine the expected performance – the expected value – of that particular creative being served in that particular impression slot to that particular user. 

Now – here’s where things get interesting. Based on what we already know, an ad’s expected performance is determined based on four factors: 

  • Your business’ budget + economics
  • The individual who will be served the ad
  • The creative available to the platform
  • The behavioral patterns available to the ad network

If we have already placed constraints on the platform (i.e. cost caps, bid caps, tCPA, tROAS), then we know that the platform will only serve our ad if the following is true: 

Cost of Impression <= Target * (eCTR * eCVR)

It is now in the platform’s best interest to maximize eCTR + eCVR, because any increase to either of those terms, by definition, increases the maximum acceptable bid you can pay for an impression (and thus increases their revenue). 

You can see where this is going: the constraint has already protected the advertiser’s downside – the platform can’t serve impressions with a negative NEV relative to the advertiser’s targets. The only way for the platform to make MORE money from that advertiser is to find + serve the best ad possible, so it can maximize the cost the advertiser pays for that impression.

If an ad in the account is not expected to perform well, it won’t serve. If an ad is expected to perform well, but another ad is predicted to perform even better, it is in the platform’s best interest to serve the even better ad, because that’s what maximizes the cost of the impression. 
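That selection step can be sketched as a simple argmax over eCTR × eCVR (the ad names and numbers are invented for illustration; real serving models are far richer than a single product):

```python
# Among ads that clear the constraint, the platform prefers the one that
# maximizes eCTR * eCVR, since that raises the acceptable impression cost.
ads = {
    "ad_a": {"ectr": 0.020, "ecvr": 0.050},  # product: 0.0010
    "ad_b": {"ectr": 0.030, "ecvr": 0.040},  # product: 0.0012 -> best
    "ad_c": {"ectr": 0.015, "ecvr": 0.060},  # product: 0.0009
}

def best_ad(candidates):
    return max(candidates, key=lambda a: candidates[a]["ectr"] * candidates[a]["ecvr"])

winner = best_ad(ads)  # "ad_b" -- pause it, and the weaker ad_a serves instead
```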

To put it bluntly: having more ads in a constrained account asymmetrically increases the advertiser’s upside with little-to-no downside (the only downside being that there’s more to manage in the account).

What about Principle #3? 

There’s a phenomenon in many of our accounts where some high-performing ads go “cold” (and stop spending), while others that may have not been served for weeks, months, sometimes even a year – suddenly spike. I often hear from clients when we discuss creative concepts, “Well, we had an ad like that a year and a half ago, but it didn’t work so we turned it off.”

People change. The way people interact with ads changes. The types of ads people respond to change. 

That ad that did not perform or serve 18 months ago could well have been early – people weren’t used to the concept, it wasn’t a style that the ML had associated with a pattern of performance. This is critical – so critical it is Principle #4.

Principle #4: Risk has a cost. 

All things being equal, platform algorithms will serve the ad that matches a pattern of stronger or more stable historical performance (across their entire data set) over one that does not match a pattern, or that matches a weaker or more volatile ecosystem-wide pattern of performance.

The impact of Principle #4 is magnified in accounts with constraints, because the platform is being held to a tighter constraint than its internal, acceptable return target (which, like a casino, is likely negative for the advertiser). For what it’s worth, this is why I recommend allocating a 10% – 20% testing budget, because you can then test assets without a constraint to provide the ML with sufficient data on the test asset’s performance. 

Returning to Principle #3 and turning off ads prematurely: by turning off that ad, the advertiser limited their upside, because as the ad platform learned more and observed similar ads performing, it did not have the option to deploy that creative in this advertiser’s account. We know that if NEV[ad, impression, placement, user] is negative, the ad will not serve. The only variable altered by not pausing ads is the number of combinations on which the platform can run an NEV calculation. 

More ads = more opportunities to maximize [eCTR * eCVR] = more opportunities to serve

The best part of this is that when you embrace this way of thinking about ad creative, the “the algorithm hates my ad” bullshit goes away – because an ad not serving in a constrained account tells you one of three things is true: 

  1. The constraint(s) are too restrictive
  2. The audience is unlikely to interact with (eCTR) or convert from (eCVR) the ad
  3. There’s another ad in the account with higher risk-adjusted expected performance

The algorithm doesn’t hate your ad – at best, it doesn’t understand it and your audience doesn’t love it enough to overcome that lack of understanding. But, more than likely, your audience just doesn’t like (or remember) your ad. Don’t blame the ad platform. 

This isn’t a crazy conspiracy theory about how platforms work; it’s cold, hard, unfeeling math. 

What this doesn’t address is the kinds of ads you should make, or how you should organize them. Every ad you (or your partner/agency/freelancer) make(s) has a cost: time, money, energy. 

Instead of obsessing about volume, focus on making ads that last, that adhere to the creative principles below AND are sufficiently distinct such that machine learning can detect differences in the underlying creative pattern and adjust accordingly. Making a bunch of near-identical ads has a very real cost and very little upside, which is why we have a 10% or 10x testing philosophy, along with an emphasis on taking bigger swings. 

Aside from that, your ads should follow basic creative + structural principles: 

  • Every group of ads (ad set, ad group) must have a clear offer, angle + theme. If you’re just duplicating ads around your account, hither and yon, that’s stupid and you should stop. 
  • Focus on experimentation around specific components of ads (hook, proof point, statistics, etc.) vs. changing a background color. 
  • Where possible, emphasize clarity over cleverness. If someone unfamiliar with your brand/offering can’t watch your video for 5s and have a good idea what it’s about, that’s bad.  
  • Ads must align with how your target audience communicates – not how you want them to communicate. Too many brands don’t do customer insights research or spend the time reviewing the content your target audience is ALREADY consuming. 
  • Align your ads with what you can reasonably know about your audience – for instance, if you have a “remarketing” ad set, then add urgency or social proof or content that acknowledges your audience may have some familiarity with your brand. The opposite is true for prospecting: if they’ve never seen your brand before, don’t make the assumption that they know who you are (or care). 

This is a basic list – and one that I’ll expand on in a coming newsletter. This one has already been quite a bit longer than I originally intended, but I hope you find it valuable. 

Until next time,

Sam
