My Framework For Google & Meta Account Audits

by Sam Tomlinson
February 5, 2024

Over the past few weeks, I’ve gotten nearly a dozen questions about paid media audits – why they’re important, how to conduct them well, and (most importantly) how to know whether the person/firm you’ve hired is the right one. 

Let me start by saying this: an audit can be one of the most valuable things you do for your paid media program. The unfortunate reality is that 90%+ of the audits actually done are not – they’re low-quality, thinly-veiled sales pitches intended to win your business rather than improve your performance. 

That’s not the proper role of an audit. Every audit should start from the core principle of adding genuine value for the client. That’s it.

Why Audits Shouldn’t Be Free:

If you assume that the proper role of an audit is to create substantial value for the brand commissioning it, then it stands to reason that it shouldn’t be free. My thinking here has evolved considerably over the years – but I’ve arrived at this position for three reasons: 

  1. I want to ensure that the incentives (real or perceived) for every engagement point to the client’s best interests. Full stop. There should never be a flicker of doubt in the client’s mind that the reason I’m making a recommendation is because I believe that it is in their best interests. This is how I approach referral fees (don’t take them), and maintaining that consistency is paramount. 
  2. Real, high-quality audits are a ton of work. As an agency, we’d often spend 75+ hours putting one together – we’re talking 50, 70, 100+ page decks that are meticulously detailed, comprehensive and actionable. That’s a lot of resources to expend if you’re not getting paid, regardless of the outcome (client won or lost). Going a step further, it genuinely sucks when the brand takes that deck and hands it over to their current partner. It’s demoralizing. It’s wrong. And it’s avoidable.
  3. A well-done audit pays for itself many, many times over. It’s an investment in the future of your brand. And as such, it’s critical that everyone involved (agency, brand) has skin in the game. When the brand is paying for the work product, they’re invested in hearing and acting on the findings. When the agency/freelancer is being paid for their work, they’re invested in delivering something worth listening to. 

That third bullet is usually the one that brands balk at, so let me illustrate with a concrete example: if you’re spending $50k/month across Google & Meta, and a $5,000 – $10,000 audit improves account efficiency or reduces waste by just 10%, it’s paid for itself in a max of two months. By the end of the year, you’ve made 5x to 10x that initial investment. 
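
To make that arithmetic explicit, here’s a back-of-envelope sketch using the hypothetical figures above (illustrative numbers, not benchmarks):

```python
# Back-of-envelope payback math from the example above.
monthly_spend = 50_000    # $/month across Google & Meta
audit_cost = 10_000       # high end of the quoted range
efficiency_gain = 0.10    # 10% reduction in wasted spend

monthly_savings = monthly_spend * efficiency_gain                # $5,000/month
payback_months = audit_cost / monthly_savings                    # 2.0 months
net_year_one = (monthly_savings * 12 - audit_cost) / audit_cost  # 5.0x

print(f"Payback in {payback_months:.0f} months; "
      f"net year-one return of {net_year_one:.0f}x the audit cost")
# At the $5,000 end of the range, the same math works out to ~11x.
```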

I’ll also tell you that I’ve done audits where I’ve found 50%+ wasted spend, reduced CPAs by 80%+ and/or improved POAS (profit on ad spend) by a factor of 3 or 4. There’s arguably no better place to put your dollars. If it turns out that your accounts are excellently managed, I’ll be the first to tell you. Yes, you’ll pay for it – peace of mind has a price, after all. But if your account has material issues impacting its performance, you’re going to pay that $5,000 or $10,000 either way. The only differences are: (1) who you’re paying (someone to do the audit, or the ad platforms) and (2) how many times you’re paying it (once for the audit, or every month as you light your ad budget on fire). The math is simple and compelling. 

One More Thing Before We Dive In: 

Let me also state my other core belief: there are certain setups, principles and approaches that I believe are objectively more likely to produce a desired result than others. You will see evidence of that reflected throughout the framework below. That does not mean that an account using other processes is destined to fail (it is not). Nor does it mean that there aren’t brand-specific considerations that warrant deviating from those principles to achieve a specific end. There are. It just means that, all else being equal, I think certain strategies and tactics are more likely to produce positive outcomes than others. 

Objectives:

Every audit should start with a detailed conversation about your organization & objectives. It is impossible to conduct a proper audit if you don’t understand the nuances, priorities and economics of the underlying organization. In contrast, any audit that starts with a variant of, “Just give us access and let us go,” is likely to result in a suboptimal product. 

Ideally, you want a partner who is asking pointed questions, such as: 

  • What are the primary and secondary objectives of this account?
  • What is your ideal customer profile (ICP) / primary target audience? 
  • What are your baseline qualification factors? 
  • What is your current tech + data stack? How often is it updated? 
  • How do most of your customers hear about you (if applicable)? 
  • Who are your primary competitors and alternatives? 
  • How is your organization different? What are your UVPs/USPs?
  • What are acceptable efficiency targets for each service/SKU/product line?
  • Who are your most + least valuable customers? Why? 
  • Can you share your customer LTV curve by product/service?  
  • How does your sales/new customer intake process work (if applicable)?
  • Do you have any existing concerns about the account? 
  • Is there an existing ad + landing page repository? 
  • What are your current budgets? Are these fixed or flexible? If so, how flexible? 

This is obviously not an exhaustive list, but it gives the individual conducting the audit an initial frame of reference on which to ground the audit so it is maximally relevant to the organization.

Research & Insights:

I genuinely don’t know how someone (or some company) can conduct an ad account audit without doing a significant amount of industry/sector and geographic research. Ad markets are complex, they’re hyper-local and they’re wildly diverse. If you don’t have a clear perspective on the nuances of an audit client’s underlying business and broader market, you’re going to miss something or you’re going to write something that sounds smart, but is spectacularly stupid.

A few research ideas to get you started: 

  • Business Data – One of the easiest places to start is the audit client’s financial + accounting teams – most have detailed records and can give you data not just on sales, but on contribution dollars per transaction, profitability based on the customer’s state, zip code and/or region,  lead qualification rates, lead-to-close rates, pipeline value, etc. Those data points are invaluable when it comes to analyzing the ad accounts themselves, as you’ll be able to assess whether or not this information is being used to structure + manage the campaigns. 
  • Keyword + Audience Data – There’s nothing more frustrating than reading an audit that has clearly been written by someone who has not taken the time to understand the space or conduct their own KW research. It’s infuriating. As a specific example, I reviewed an audit for a law firm which stated that $120 CPCs on injury + legal terms (e.g. “Car Accident Injury Lawyer”) were too high, and that the firm should instead focus on generic terms with lower CPCs (e.g. “Car Accident”), which ran in the $10 – $20 range. Here’s the rub: anyone who has done their homework on the legal space knows that most people searching generic terms aren’t looking for a lawyer; they’re looking for information, news and/or images. They’d also know that $120 per click for injury + legal terms in one of the US’s 10 largest metros is a bargain at twice the price – especially when those clicks convert to retained cases at an 8.5% rate AND carry an average expected fee of $25,000 (based on the business data above). That works out to roughly $1,412 per retained case ($120 ÷ 8.5%) against $25,000+ in fees – and the recommendation was to stop doing it because the CPC looked high (a worked version of this math follows this list). The same is true of audiences – if you haven’t done your audience research, you’re likely to be confused by non-intuitive interest-stack targeting, whether that’s targeting premium credit card holders for luxury resorts (there’s a shockingly high correlation between people who spend beaucoup bucks on travel and people who hold American Express Platinum / Chase Sapphire Reserve cards), personal injury firms advertising to people in-market for BOTH a new/used vehicle AND a car seat, or impulse-driven eCommerce brands directing early-year advertising to people interested in Dave Ramsey, Suze Orman, Financial Samurai and the like. On the surface, it seems mad – but if you actually do your homework, you can see the logic and why it makes all the sense in the world.
  • Customer / Client Feedback – I’m continually shocked at how often customer feedback, reviews, ratings and other third-party validation/credibility is overlooked in ad account audits. This often manifests as actual customer testimonials being ignored in ad and landing page copy. If your customers are telling you, “We went with your company for X and Y reasons,” or, “Your brand was INCREDIBLE at Z,” that’s GOLD. From an audit perspective, understanding how the audience perceives the underlying brand is an invaluable data point when assessing creative (more on that later). The same is true for credibility/trust factors: if you’re a challenger or upstart brand, every customer/prospect subconsciously risk-adjusts your offering. Why? Because buying your widget or going with your company poses a risk relative to going with the known, established and trusted brand. Reviews, ratings and third-party validation (awards, press, etc.) reduce that perceived risk. So if one finding from your audit is that conversion rates are lower than expected, and trust factors aren’t prominently featured in ads and/or landers, you can make an informed observation. 
  • Competitor Landscape – No brand advertises in a vacuum. Before I ever dive into an account, I spend a significant amount of time reviewing the competitive landscape – who the competitors are, how they message, where they advertise. If you’re curious about that process, I wrote about it in the “Keeping Up with the Joneses” issue a few months back. 
  • Seasonality, Sales & Product Drops – There’s no easier way to manipulate an audit than to use incorrect or invalid comparisons – for instance, an eCommerce audit that compares Q1 to Q4, or a lawncare business that compares Q3 to Q2. While these are fairly obvious, there are also more subtle ones: if a brand runs a twice-yearly sale, comparing the sale month to a non-sale month is likely to make the non-sale month look artificially poor. Asking these questions up-front, and understanding what happened in the organization’s broader marketing calendar, is imperative for conducting a proper, fair and useful audit. 
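
And here’s the worked version of the legal-intake math promised in the keyword bullet above (again, the figures are that example’s hypotheticals, not benchmarks):

```python
# Worked version of the legal-intake example from the keyword bullet.
cpc = 120.00                 # CPC on injury + legal terms
retained_case_rate = 0.085   # share of clicks that become retained cases
avg_fee_return = 25_000      # average expected fee per retained case

cost_per_case = cpc / retained_case_rate          # ~= $1,412
return_multiple = avg_fee_return / cost_per_case  # ~= 17.7x

print(f"Cost per retained case: ${cost_per_case:,.0f} "
      f"(a {return_multiple:.1f}x expected return)")
```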

Data Flows:

The simple reality of Meta & Google advertising today is that data is the primary optimization lever. If an account is not configured to get the right data into the ad platform at the right time, you’re unlikely to get a consistently excellent result.

Things to check for here include: 

  • Validate Conversion Tracking & Tag Management – this is a baseline task, included here because of the staggering number of times I see it ignored. At a bare minimum, you want to ensure that: (1) duplicate + irrelevant conversions aren’t included; (2) all forms/carts work as intended and are properly counted; (3) there are no blockers that prevent conversion tracking; and (4) if values are passed, the values passed are correct. The single biggest issue I see is duplicate or irrelevant conversions – for instance, if you’re optimizing for qualified leads, “YouTube Subscribers” should not be counted as a “primary” conversion in the account. 
  • Are Enhanced Conversions (EC) configured (Google)? – this is a mechanism that securely passes incremental data about converters (such as name, email address and phone) to the ad platform, enabling better matching of conversions to ad interactions. The end result is higher-fidelity data being shared with the platform, which (in turn) yields higher performance from smart bidding. (A minimal sketch of the normalize-then-hash pattern that both EC and the Conversions API rely on follows this list.) 
  • Are Offline Conversions (Google) / Conversions API (Meta) in Place? – Google Offline Conversions (GOC) + the Conversions API (CAPI) operate in a fundamentally different way from Enhanced Conversions: where Enhanced Conversions supplement on-site tags to ensure higher match rates, offline conversions pass incremental, post-conversion data to the platform based on a customer/lead’s subsequent interactions with your business. For instance, Enhanced Conversions would be used to ensure that all leads generated are properly captured; Offline Conversions would be used to ensure that only qualified leads are counted as “conversions” in Google Ads. This is a particularly massive opportunity for most accounts: according to Optmyzr, fewer than 13% of accounts linked to their platform are using Offline Conversions or CAPI. 
  • Feeds & Other Data Sources – While conversion data is insanely valuable, it’s hardly the only source of data being used by the account. It’s also important to ensure that your primary + secondary feeds are in excellent shape (something that’s often neglected), your other services (YouTube, Google Search Console, GA4) are properly linked, and that any other third-party data source deployed in an ad account (such as a reporting dashboard, CRM, etc.) is properly connected. 
  • Data Upkeep & Management – The three bullets above focus on the data infrastructure that powers your account – but that’s only half the equation. All the infrastructure in the world won’t do you much good (and may, in fact, do a lot of damage) if it’s distributing contaminated data. This is where CRM/data management comes into play (something often ignored in audits): how often is the data updated in the CRM? Is it changed after the fact (I’ve seen sales reps mark every lead as “qualified”, only to go back and change them after trying to make contact, so their numbers look better)? And if so, is there a rule/process to ensure that the updated information is passed back to the platforms (usually not)? Reviewing CRM audit logs, speaking directly to Business Development Representatives (BDRs), and cross-referencing in-platform conversions against fulfilled orders are all viable strategies to ensure that the data you’re passing to your ad platforms is accurate (a minimal reconciliation sketch closes out this section). In cases where you spot issues (such as BDRs who don’t update data in a timely manner), make sure to include this in your recommendations. 
  • Auditing Either Meta or Google in a Silo – The final, major mistake I see in audits is reviewing data flows in isolation, vs. holistically. Especially when it comes to data passback, seemingly minor discrepancies make a great deal of difference; auditing both Meta + Google together maximizes your chances of catching any deviation – whether it’s an incorrectly-formatted value being sent back to one, or an odd delay in a passback timer, or the wrong address being sent.  
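
Both Enhanced Conversions and the Conversions API rest on the same normalize-then-hash identity pattern referenced above. Here’s a minimal sketch of that pattern, ending in a CAPI-style event body – the top-level field names follow Meta’s documented schema, but the values, normalization rules and event details are simplified placeholders:

```python
import hashlib
import re

def sha256_hex(value: str) -> str:
    """SHA-256 hash a normalized PII field, the general pattern both
    Enhanced Conversions and the Conversions API expect for identifiers."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def norm_email(email: str) -> str:
    return email.strip().lower()

def norm_phone(phone: str) -> str:
    return re.sub(r"\D", "", phone)  # digits only, incl. country code (simplified)

# A minimal CAPI-style event body; values are placeholders.
event = {
    "event_name": "Lead",
    "event_time": 1712000000,       # unix timestamp of the conversion
    "action_source": "website",
    "user_data": {
        "em": [sha256_hex(norm_email("Jane.Doe@Example.com"))],
        "ph": [sha256_hex(norm_phone("+1 555-010-0199"))],
    },
}
```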

Ultimately, the data infrastructure is going to play a disproportionate role in the overall success of the account. If these pieces aren’t in place, getting them in place should claim the lion’s share of your focus. The simple, oft-unshared reality is that exceptional data solves ~50% of the issues in most accounts over a reasonable period of time. 
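
As promised above, here’s a minimal reconciliation sketch – it assumes two hypothetical CSV exports keyed on order_id (one from the ad platform, one from the CRM/order system), so the column names are assumptions:

```python
import pandas as pd

# Hypothetical exports -- one from the ad platform, one from the CRM/OMS.
platform = pd.read_csv("platform_conversions.csv")  # order_id, value
crm = pd.read_csv("fulfilled_orders.csv")           # order_id, revenue

# 1) Duplicates: the same order counted as a conversion more than once.
dupes = platform[platform.duplicated("order_id", keep=False)]

# 2) Phantoms: counted in-platform but never actually fulfilled.
merged = platform.merge(crm, on="order_id", how="left", indicator=True)
phantoms = merged[merged["_merge"] == "left_only"]

# 3) Value mismatches: conversion value disagrees with booked revenue.
both = merged[merged["_merge"] == "both"]
mismatches = both[(both["value"] - both["revenue"]).abs() > 0.01]

print(f"{len(dupes)} duplicates | {len(phantoms)} phantoms | "
      f"{len(mismatches)} value mismatches")
```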

Structure:

Account structure is how marketers communicate their priorities to ad platforms; if the structure isn’t aligned to the organization’s goals + priorities, the account is likely to perform suboptimally (even if everything else is done quite well). At a bare minimum, I want to see the following: 

  • High Floor / High Ceiling Setup – Every account – Google, Meta, Microsoft, whatever – should be set up using a “High Floor / High Ceiling” philosophy. Essentially, this means putting reasonable, business-relevant controls and automations in place to mitigate blow-ups while making minimal cuts to the “upside” potential of the account. I want to see a structure that is scalable (upside) with downside protection (cost caps, bid caps, tCPA/tROAS, strong negatives, use of rules or conditional logic/scripts, stop-loss rules, etc.) to ensure that account performance doesn’t tank when something breaks on the platform side (a minimal stop-loss sketch appears a few paragraphs below). While this may only happen once a year, the losses from that single day can be 10%, 15%, 20% or more of your entire profit for the year. Downside protection is a must. 
  • Cross-Platform Consistency – Whenever you’re auditing a paid media channel, it’s essential to view it through the lens of both the platform itself AND the other platforms it supports. If, for instance, your Meta campaign is set up using a SKU + Audience matrix, but your Google Search campaign is configured using a “Use Case” setup, properly assessing each will be difficult. Continuity + consistency are chronically underrated in audits, but play an outsized role in performance. 
  • Segmentation between brand + non-brand in Google Search – the incrementality of brand vs. non-brand is often materially different, as are your new customer acquisition costs. For most organizations, it makes sense to bifurcate these two search segments into separate campaigns, then exclude all branded terms from the non-branded campaign – not only does this ensure that brand doesn’t cannibalize non-brand, it also gives you a truer read on incrementality + lift. 
  • Proper Use of Smart Bidding Strategies – For Google, this (typically) means Max Conversion Value w/ Target ROAS (tROAS) or Max Conversion Volume with a Target CPA (tCPA) bidding strategy – either as a standalone strategy OR as a portfolio strategy (context-dependent). If a Portfolio Strategy is used, are correct/defensible Max CPCs set? If not, why not? On Meta, I look for use of Cost Caps and/or Bid Caps where feasible; this maximizes the probability that budgets will be deployed only if the expected return (as calculated by Google/Meta) is sufficiently high.  My overriding concern when it comes to bidding strategies is this: bid for what you want. If you want sales at or below a cost of $X per sale, great, there’s a strategy for that. If you want to sell as much as possible at or above a specific target return, great, there’s a strategy for that. Accounts go off-the-rails when marketers try to get clever – whether that’s by bidding for clicks or engagements in conversion-focused campaigns, or trying to out-trade machines using manual bidding. 
  • Consolidated Non-Branded/Shopping Structure – Generally speaking, the more you split data across different campaigns, the longer it will take Google & Meta to learn. I am a strong advocate for a consolidated, matrix-style structure to the extent possible. 

The objective of this structure is to ensure that budget is funneled into the brand’s highest-priority ad groups (Google) / ad sets (Meta) when all things are equal, but still permitted to go to lower-priority ad groups / ad sets if there’s a sufficiently high expected return. 

There are many ways to achieve this end, and I will not knock another provider for achieving the same outcome with a different setup. What I will (and do) criticize is a structure that is either (a) completely flat (this tells Google/Meta that nothing is important); (b) misaligned with the ultimate objectives of the organization; and/or (c) inconsistent in a way that results in material negative outcomes for the organization.
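
Circling back to the downside-protection point from the first bullet: here’s a minimal stop-loss sketch. The stats structure, thresholds and trigger logic are all hypothetical – in practice, this lives in a scheduled script or automated rule wired to a pause + alert action:

```python
from dataclasses import dataclass

@dataclass
class CampaignStats:
    spend_today: float
    avg_daily_spend: float     # trailing average, e.g. last 30 days
    conversions_today: int

# Thresholds are illustrative; tune them to the account's economics.
SPEND_MULTIPLE = 3.0   # today's spend vs. the trailing daily average
MIN_CONVERSIONS = 1

def should_stop_loss(stats: CampaignStats) -> bool:
    """Flag a campaign whose spend is racing ahead with nothing to show
    for it -- the 'high floor' half of the philosophy above."""
    overspending = stats.spend_today > SPEND_MULTIPLE * stats.avg_daily_spend
    no_signal = stats.conversions_today < MIN_CONVERSIONS
    return overspending and no_signal

print(should_stop_loss(CampaignStats(1_800.0, 400.0, 0)))  # True -> pause & alert
```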

  • Segment When The Marginal Return Justifies It – There’s a pervasive misunderstanding among paid media buyers that segmentation is free. Nothing could be further from the truth. Whether you’re on Meta or Google, if you are segmenting (by audience, by offer, by creative, etc.), there needs to be a compelling reason to do so – either (a) a sufficiently higher conversion rate // lower CPA // higher ROAS; (b) creative having a material impact on performance; and/or (c) a strong internal reason (such as different business units, radically different customer values, etc.). There’s nothing more frustrating than seeing multiple ad groups / ad sets targeting the same people with the same creative, simply because someone decided they wanted to “test the audience” or “segment it out.” Bottom line: if you’re not going to do something fundamentally different (offer, creative, product, value) with your segments, don’t segment. This is true in Google, it’s true in Meta, and it’s true in Microsoft/LinkedIn. 
  • Integration of Audiences (Meta + Google) – Audiences are a wonderfully powerful tool in both Meta & Google. There’s no denying that. There are multiple ways to leverage those audiences – whether it’s exclusions (for instance, current customers, bad leads, etc.), re-prioritization (possible in Google Ads), or outright targeting (available in both Google Ads & Meta Ads). As I’m reviewing an account, I want to understand if and how audiences are being used, along with which ones are being used. 

I’m personally partial to Lookalike Audiences – especially tightly focused, homogeneous LALs. I still believe they are one of the best ways to point Google (for Demand Gen) and Meta (for any campaign) in the right direction based on your business/organization. That doesn’t mean interest stacks can’t work – they can, especially if you use Combined Segments in Google and/or well-researched interest stacks on Meta.

On Meta specifically, I’m not averse to broad targeting (especially if you have well-configured, tight data passback). Ultimately, it comes down to this: do you have a cohesive strategy for integrating audience-based targeting into the account in a way that adds value to the client/organization? If not, that’s a problem. If you do, then it becomes a question of whether that strategy is optimal. 

  • Proper Constraints on PMAX (Google) – This can (and will) be an entire newsletter issue later this year (spoiler alert!), but from a top level, here’s what I’m looking for:
    • Thematic Asset Relevance – I want to see multiple assets that are all thematically related // offer related; throwing 10 different offers into a single asset group is likely to result in bad things. 
    • Use of Audience Signals – Audience Signals are different from targeting; they’re closer to “guidelines” to steer the machine toward your desired audience. 
    • Proper Brand + Competitor Exclusions – there’s nothing more annoying than watching your branded or competitor traffic be cannibalized by PMAX. Brand + competitor exclusion lists are essential. 
    • Placements – I tend to find that parked domains, adult content sites, sensitive topics sites, low-quality apps and live streams have exceedingly low value for my clients, even if Google disagrees. As a result, I want to see some exclusions used (or a theory on why not). 
    • Tracking Template Use – I want to see some form of tracking template or UTM parameters (preferred: tracking template) – for instance, appending utm_source=google&utm_medium=cpc alongside ValueTrack parameters such as {campaignid} – which allows for easier on-site analysis of PMAX traffic. 
    • Actual Traffic Quality (GA4) – One of the best ways to see how your PMAX campaign is performing is to (gasp) review the actual on-site traffic activity in GA4 via the acquisition, behavior, events and conversion paths reports. Proper tracking templates allow for deeper dives into that traffic, which (in turn) yields actual insights on what’s working + what’s not. 
    • Attribution Model – in general, data-driven attribution is better than last click, and view-based attribution is suboptimal for many brands using PMAX. All that said, your mileage may vary – I’m not going to criticize someone for disagreeing with this if they have a good reason for doing so. 
    • New vs. Returning Customers – This is one of the biggest issues I see with PMAX: no one is quite sure what they’re using it for. If there is no clarity around the goal, there won’t be clarity around the performance. If the goal is to acquire new customers, then existing customers should be excluded; if the goal is to get both new + returning customers, then you’ll need a structure that maximizes both (likely multiple asset groups or campaigns), vs. a sad middle ground. 
  • For Shopping: Automated SKU Promotion/Demotion – Finally, there’s nothing more frustrating than reviewing a shopping campaign, only to find that products with different stock levels, COGS, priority and/or seasonality are all serving together in the same product set. Most of this can be addressed using a GRoup of Individual Products (GRIP) structure, or a set of automated rules that adjust product groupings / targets based on specific factors (a minimal sketch follows below). 

Especially for brands with larger catalogs, it’s exceedingly easy to set incorrect targets for specific SKUs, which results in targets that are too high for some SKUs (thereby losing the brand $) and too low for other SKUs (thereby selling products at a break-even or negative contribution margin). 

H/T: I love this visualization, which was first shared with me by Inderpaul Rai a few years ago in London. I’ve used it since. It’s amazing. 
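
As promised above, here’s a minimal sketch of rule-based SKU promotion/demotion. The feed fields, thresholds and labels are hypothetical; in practice, the label would typically flow out through a supplemental feed attribute (such as custom_label_0) that campaigns and targets are built against:

```python
# Rule-based SKU grouping on a hypothetical product feed.
def priority_label(stock: int, margin_pct: float) -> str:
    if stock == 0:
        return "exclude"    # don't pay for clicks you can't fulfill
    if margin_pct >= 0.40 and stock > 20:
        return "high"       # healthy margin + depth: push hardest
    if margin_pct < 0.10:
        return "low"        # near break-even: demote, don't chase volume
    return "standard"

feed = [
    {"sku": "A-100", "stock": 45, "margin_pct": 0.52},
    {"sku": "B-220", "stock": 0,  "margin_pct": 0.35},
    {"sku": "C-310", "stock": 12, "margin_pct": 0.07},
]
for item in feed:
    item["custom_label_0"] = priority_label(item["stock"], item["margin_pct"])
    print(item["sku"], "->", item["custom_label_0"])
```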

Targeting:

  • Ongoing Targeting Evolution (Keyword, Interest, Audience) – Ad accounts are living, breathing things. They must evolve or they die. That evolution comes in many forms – creative, placements, strategy, targets, budgets and targeting. One of the hallmarks of the digital revolution is that behavior change happens at TikTok speed; if you don’t have a mechanism in place to adapt how your account identifies and connects with your audience at a similar speed, you’ll find you’re leaving opportunities on the table. Even for larger, established and legacy brands, how your audience searches, what concerns + pain points they have, what questions they ask, and what alternatives/competitors they consider all change dramatically every 3-6 months. 

In presentations, I refer to this as fighting against gravity. 

It’s brutal. But the number of accounts I see where no keyword (Google) or audience targeting (Meta) changes have been made in months – or years – is gobsmacking. One of the core functions of a professional media buyer is the ability to defy this gravity via a structured, iterative process. 

That’s what I’m looking for when I review an account: a clear indication (preferably one that’s well-noted) of how this individual/agency is not just maintaining the status quo, but actively defying the gravity that comes for every account. It can (and will) look different in Meta than it does in Google. But the overarching principle must be demonstrated. 

  • Proper Keyword Management (Google) – As with a few other topics from this newsletter, keyword management could be its own issue (and it might be in the future!). For now, I’ve included the overarching principles and checks I start with for each account below:
    • Using Exact Match (EM) and Broad Match (BM) – The data here is fairly clear: Phrase Match under-performs both EM and BM in most accounts. This was validated by a 2023 Optmyzr Study, which found that EM tended to out-perform both BM and PM; it also found that BM with proper data passback out-performed PM. This is by no means definitive, but the findings do align with what I’ve observed in hundreds of ad accounts: Phrase Match has become substantially broader than it once was, and does not benefit from the behavioral data that broad match does. The end result is a match type that is too broad and too dumb to use as the centerpiece of a campaign strategy. 
    • Single Topic Ad Groups (STAGs) – Each ad group should be focused on a specific topic, with a suitable number of KWs included (anywhere from ~5 to ~50 tends to work). Single Keyword Ad Groups (SKAGs) – once a staple of SEM structure – are no longer viable given changes to match types + negative KWs. All KWs in an ad group should be at (approximately) the same level of intent/value – don’t commingle low-intent, low-value terms (like basic questions or “checklists”) with high-value, high-intent terms.
    • Conflicting Negatives – negative KWs are one of the most effective targeting mechanisms in Google Ads, but (like anything) they can be used improperly. If you have situations where a negative KW is blocking a targeted KW from serving, that’s an issue that should be remediated ASAP – either by removing the targeted KW OR by modifying the negative to remove the conflict (see the sketch after this list). 
    • Proper Filtering + Funneling – a telltale sign that a search account/campaign has not been structured properly is the same query triggering multiple ad groups or campaigns; this should be minimized. Not only does it result in suboptimal messaging for the end user (since very different ads can serve against the same intent), it makes the manager’s life exponentially more difficult, because learnings + data are now fractured across multiple ad groups or campaigns.
    • No KW Serving Errors – if all (or most) of your KWs in the account are limited by volume, or not serving due to a policy violation or other error, that’s a red flag that there could be a broader management issue at play. 
    • DSAs are Kept In Check – DSAs can be incredibly valuable for surfacing new terms and defying gravity within your account; however, that only works IF (1) the DSA is given strict parameters to ignore targeted KWs and (2) there’s a process in place to promote, observe or exclude terms surfaced by the DSA. Without this, DSAs run haywire, siphoning valuable searches away from where they should go, polluting your campaign data and obfuscating what’s actually happening in the account. 
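
As referenced in the conflicting-negatives bullet, here’s a minimal conflict check over hypothetical keyword lists. It approximates broad-style token matching only – real exact/phrase negative semantics are stricter – but it’s enough to surface obvious collisions:

```python
# Flag negative keywords that would block a targeted keyword (simplified).
targeted = ["car accident injury lawyer", "truck accident attorney"]
negatives = ["free", "attorney", "jobs"]

def conflicts(targeted_kws, negative_kws):
    found = []
    for neg in negative_kws:
        neg_tokens = set(neg.split())
        for kw in targeted_kws:
            if neg_tokens <= set(kw.split()):   # all negative tokens present
                found.append((neg, kw))
    return found

for neg, kw in conflicts(targeted, negatives):
    print(f"Negative '{neg}' blocks targeted keyword '{kw}'")
```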

Creative:

We’ve all heard that creative can make or break an account’s performance – and for good reason: creative is one of the easiest ways to distinguish your brand from the slew of competitors and alternatives vying for your target audience’s attention. But for all its power, creative isn’t the only lever, and it certainly shouldn’t be relied on as the only targeting mechanism. As I review an account, here’s what I look for: 

  • Exceptional Ad Creative – creative is (arguably) one of the most important targeting levers in an account, but it’s also one of the most overlooked. Here’s what I want to see when auditing an account:
    • Relevant, Focused Creative – the creative should be immediately relevant to the targeting + lander. 
    • Bold, Distinctive Copy – I’m a sucker for pithy, punchy copy, because I’ve seen it work time and time again. There is so much sameness in Google + Meta ads; I want to see ad copy that stands out. I want to see something remarkable and distinctive – or evidence that it was tried and wasn’t effective. 
    • Different Creative Types/Content – for Google, this means creating unique ads (or, at the very least, unique ad structures). There’s no point in running two copies of the same ad in an ad group – all you’re doing is depriving yourself of the insights + learnings that come from running legitimate tests. For Meta, that means creative diversity and different ad formats and structures – such as the ones I discussed here. 
    • Use of Labels – I’m a huge proponent of labeling ads that don’t work as “low performers” and those that do as “high performers” – not only does it make the account easier to manage, it also provides an easy-to-access historical record for when the question inevitably comes up. 
    • Strategic Use of Copy Variations – especially on Meta, I tend to be averse to including multiple, radically different variations of the Primary Text or Headline within a single ad, simply because the reported data on those variations is limited. If your objective is to learn which primary text resonates, you’d (arguably) be better off creating three separate ads.  
  • Use of Ad Extensions – I LOVE ad extensions – not just because they increase the real estate I get in the SERP, but also because they provide alternative capture + interest points I can use in the future. For instance, if many people searching for a service end up clicking my “Work For Us / We’re Hiring” sitelink, that’s a clear indicator that either (a) I have a KW targeting problem or (b) something else is amiss because I’m getting job seekers instead of potential leads. In addition to a minimum of six (6) sitelinks, I also want to see Callouts & Structured Snippets used (if possible), along with locations and images.
  • Reasonable Constraints on Auto-Generated Creative – Both Google & Meta are pushing LLM-generated content to ever-greater levels; this isn’t necessarily bad, but it’s certainly not always a good thing. As I’m reviewing an account, I spot-check creative and see which (if any) auto-generated assets are used. While there’s nothing inherently wrong with auto-generated assets, my overwhelming experience has been that they are mediocre, often-generic and sometimes quite problematic – so I tend to recommend brands turn them off. For Meta, I’ve found that many of their suggestions are emoji-heavy (can be problematic for certain audiences) and sometimes misleading (for instance, making claims that aren’t substantiated or are exaggerated). Again, none of this means that these are inherently bad, but an account that is over-reliant on LLM-generated content is likely underperforming. 
  • Use of Ad Customizers – Ad Customizers are one of the most powerful and most under-utilized tools available to Google advertisers. They let you turn anything – the number of products in stock, the number of 5-star reviews, the number of years you’ve been in business – into a variable that can be updated from a single screen, without editing the ad. For larger accounts, these can be game-changers for management – maximizing relevance while minimizing manual edits. 
  • Advertiser Verification & Proper Multi-Advertiser Ads – At this point, this should go without saying, but an account that hasn’t completed Advertiser Verification is missing out on a Logo + Brand name in SERPs (for Google). On Meta, if you’re using Multi-Advertiser Ads and/or Partnership Ads (whitelisting can be incredible), you just want to ensure that it’s done properly. 
  • Ad Quality Indicators – I’m asked about this one quite frequently, so I wanted to include it here: I look for patterns in ad quality indicators on both Google + Meta. If I see that every ad in an ad group is “below average” relevance + “below average” expected CTR, that’s a clear indicator to me that the copy might be generic or flat-out wrong and should be investigated further. The same is true for landing page experience – it could just be that I have a bad lander (more on that below). 

Finally – and perhaps most importantly – I do like to look at Quality Score, but not in the way most people do. My first inclination is to see whether there is a multi-modal distribution of QS within an ad group (for instance, are there 10 terms with a QS of 3-4 and 20 terms with a QS of 7-8 in the same ad group? If so, that’s a prime candidate to fracture into two ad groups). Essentially, this is Google telling me that some of these terms just aren’t as relevant as others. 

The same thing holds true in Meta – if you see consistently low relevance scores, that’s a good indicator that your ad/offer/angle might not be relevant to the targeted audience; if the expected conversion ranking is “below average”, you may have a lander or creative/lander alignment issue. In each case, I’m not solving for the ad quality factor itself; I’m using it to understand how the platform is evaluating the account/creative/lander, then combining that with my own assessment to conduct a root cause analysis. 
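
Here’s a minimal sketch of that multi-modal Quality Score check, run over a hypothetical (ad group, keyword, QS) export; the bimodality heuristic is deliberately crude:

```python
from collections import defaultdict

keyword_qs = [  # (ad_group, keyword, quality_score) -- hypothetical export
    ("ag_injury", "car accident lawyer", 8),
    ("ag_injury", "injury attorney near me", 7),
    ("ag_injury", "accident checklist", 3),
    ("ag_injury", "what to do after a crash", 4),
]

groups = defaultdict(list)
for ad_group, kw, qs in keyword_qs:
    groups[ad_group].append(qs)

for ad_group, scores in groups.items():
    low = [s for s in scores if s <= 4]
    high = [s for s in scores if s >= 7]
    # Crude bimodality flag: a meaningful cluster at both ends.
    if len(low) >= 2 and len(high) >= 2:
        print(f"{ad_group}: {len(low)} low-QS and {len(high)} high-QS terms "
              f"-- consider splitting into two ad groups")
```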

Experience:

From a big-picture perspective, any audit worth the paper or pixels on which it is printed will look beyond the ad account. There’s only so much you can optimize inside the account, and most audits fall short because the auditor is unwilling (for whatever reason) to look past it. Here are three things I always include, because they’re just that important: 

  • SERP Evaluation – any Google Search or Shopping audit that doesn’t include a SERP evaluation is woefully deficient. Put simply, there’s only so much that in-platform testing and analysis will tell you; to get a comprehensive understanding of why your account is performing (or not), you need to put yourself in the shoes of your audience. That means checking SERPs yourself. That means seeing exactly how ads appear, what other ads are present, what SERP features may be distracting or consuming click share, etc. If you’re not looking at the entire SERP – paid results, features, organic results, local results – you’re going to miss something. 
  • Ad/Offer/Lander Alignment – Your ad is just a conduit – something that connects a person (on Meta or Google) to your brand (represented by your website) for a particular purpose. The number of times I’ve seen otherwise-excellent accounts underperform due to a fundamental mismatch between what’s promised in the ad (“get a demo” or “shop now”) and what’s actually possible on the website (“submit a lead form!”) is shocking. If there is no continuity and consistency between the ad content, the offer, the lander and the brand, the probability of things going sideways increases exponentially. My preferred way to identify this is simple: screenshot + click on an ad, then actually read the lander as if you’re a potential customer. If something doesn’t match (for instance, the ad says 20% off but the landing page says “Buy One Get One Free”), flag it. Be thorough, diligent and fair – but remember that customers today are savvy and persnickety; small discrepancies can have significant performance impacts. 
  • Lander Assessment – In addition to consistency, I also want to assess lander diversity and formats. If you’re only using one type of lander (for instance, a PDP or Service Page), that’s an issue wholly unconnected to the ad account, but one which could have material impacts on the ad account. Likewise, if the landers are slow-loading, poorly-formatted, ugly or just bad, that will have an impact on account performance.

Management:

As a result of automation, Google & Meta management is both more complex and less time-intensive today than it was five years ago – and the unfortunate reality is that most paid media managers have not adapted to this changing world order. 

  • Batched Changes + Learning Phase Minimization – there’s no easier way to tank your account’s performance than to try to out-trade a machine or “tweak” campaigns every single day – doing so is counterproductive. It essentially forces Google/Meta to chase a new target every day, which the systems simply are not equipped to do. My philosophy – and what I want to see – is the following:
    • Batched Changes – most changes can be made 1x or 2x per week, with exceptions such as adding negatives against irrelevant queries (more frequently to start, less frequently once campaigns are serving predictably). 
    • Evidence of Eyes-On, Hands-Off Management – batching is great for most things, but there are situations that demand immediate attention (broken links, wrong targets, etc.) – for these cases, I want to see immediate changes. 
    • Limited Ad Changes – Most ad changes can (and should) be handled via customizers to avoid resetting learning. 
  • Change Log + Notation System – Google Ads has a robust notation system that is massively under-utilized. There’s nothing better than being able to review a detailed notation history to understand how, when and why various changes were made. 
  • Active Search Terms & Placement Report Management – This is a pet peeve of mine, but I want to see evidence of search term management. There’s nothing more frustrating than watching a known low-quality term rack up impressions because Google thinks it found something, when the manager (or anyone with 6 functioning brain cells) knows it’s on the wrong track. The same is true for the placement report – if you’re allowing tiny ads to show up on low-quality apps, something is likely wrong (a minimal triage sketch follows this list). 
  • Updated Exclusions/Inclusions – Exclusions are critical. There’s no two ways about it. For Google, that means negative geos, it means core negative KWs, it means placement exclusions; for Meta, that can be audience exclusions, certain placement exclusions, etc. When I see a campaign with 5-10 negative KWs in total (yes, I’ve seen them), that is a strong signal to me that the account has not been thoughtfully managed, and there’s likely wasted spend + missed opportunity. 
  • Use of Proper Short-Term Adjusters – While this isn’t mandatory, I do like to see brands use short-term adjusters (such as seasonality adjustments) correctly. There are some brands where it is never necessary, and that’s fine. But when I see a brand that runs short-term, major sales 2x to 3x per year with no seasonality adjustment, that’s a red flag – because what usually follows is a situation where the period immediately following the sale is massively inefficient. 
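
As referenced in the search-terms bullet above, here’s a minimal triage pass over a hypothetical search terms report export. The column names, spend floor and low-quality token list are all assumptions to tailor per account:

```python
import pandas as pd

LOW_QUALITY_TOKENS = {"free", "jobs", "salary", "diy", "course"}
SPEND_FLOOR = 50.0   # ignore noise below this spend

terms = pd.read_csv("search_terms_report.csv")  # query, cost, conversions

def is_low_quality(query: str) -> bool:
    return bool(LOW_QUALITY_TOKENS & set(query.lower().split()))

# Queries burning meaningful spend with zero conversions AND a bad token.
flagged = terms[
    (terms["cost"] >= SPEND_FLOOR)
    & (terms["conversions"] == 0)
    & terms["query"].map(is_low_quality)
]
print(flagged.sort_values("cost", ascending=False).head(20))
```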

Evolution & Experimentation:

Finally, there’s experimentation. The data here is clear: fewer than 5% of the accounts I’ve audited have conducted more than 3 experiments in the 12 months preceding the audit. That’s insane to me. On Meta, the numbers are a little better, but not by much – and Meta’s A/B testing functionality is arguably better than Google’s in terms of its simplicity and intuitiveness. 

Again, there’s no magic number of experiments. There’s no magic number of creative tests. But there is magic in doing them regularly. There is magic in having a management process that facilitates iterative, structured evolution.

As with everything in the digital space, the devil is in the details – which is why I spend an inordinate amount of time understanding the client’s business, goals and priorities before diving into the account. While there are objectively better ways to do things when all other things are equal, the reality is that most of the time, all things aren’t equal. The reason you hire a paid media manager is to meld best practice and your organizational priorities into a cohesive, high-performing ad account. 

Finally, my parting thoughts on agencies + audits: 

A well-done audit should be something any agency welcomes. An audit is a second set of eyes and a fresh perspective that can help your client get the most out of their paid media dollars. That’s objectively a good thing. Unfortunately, a lot of scammy, shoddy, harebrained agencies have corrupted what should be a valuable exercise – one that helps both your team and your client’s ad account – and turned it into a transparent scheme to steal clients: bashing other agencies, lying about what’s actually happening, or simply copy-pasting generic, inflammatory findings instead of taking the time to understand the underlying business and tailor findings to the brand. 

The end result is an adversarial relationship between agencies + audits – with everyone losing. It doesn’t have to be this way. It shouldn’t be this way. I hope this framework helps as you think about how to audit your account, and gives you questions to ask of your partner/agency. 

After all, the only people afraid of tough questions are the ones who don’t have good answers. 

Until next time,

Sam
