The What, Why & How of Self-Scouting
It’s difficult to believe, but more than 25% of 2024 is already behind us – the books on Q1 are closed and we’re all careening toward summer at a fast and furious pace (well, at least those of us in the northern hemisphere).
Every year around this time, I spend a majority of my time self-scouting. It’s a term most often (but not exclusively) associated with high-level athletics / professional sports, where players and coaches alike spend substantial time and resources breaking down their approach to the game, their strategies, their tendencies in specific situations, their strengths and weaknesses in each area.
The rationale for making that investment is simple: your competition is already making significant investments to get those answers via their competitive scouting (a framework for which I shared in Keeping Up with the Joneses), so you might as well identify the issues/challenges/problems first.
While self-scouting has its roots in sport, it has a place in every aspect of our lives – and especially in our marketing.
For today’s issue, I want to share the principles of self-scouting, alongside tactical examples of how I apply each one to specific clients, strategies and even ad accounts. Let’s dive in:
7 Principles for Successful Self-Scouting
1. The Ego Check:
Let’s face it: we all have egos. No one enjoys uncovering, examining and admitting their faults or failings – but that’s where growth occurs. There’s a famous story (originally published by NPR back in 2012) about how Intel executives Andy Grove and Gordon Moore, when mired in a losing price+performance war with Japanese memory chip manufacturers, turned to self-scouting. According to Grove, the conversation went something like this:
“…[Gordon Moore] and I were in his cubicle, sitting around, looking out the window, very sad. Then I asked Gordon a question: ‘What would happen if somebody took us over, got rid of us – what would the new guy do?’ He responded, ‘Get out of the memory business.’
And I stared at him, numb, then said something to the effect of, ‘Why shouldn’t you and I walk out that door, turn around, and do it ourselves?’”
Following that conversation, Moore & Grove went on to lay off more than 30% of the company’s workforce (~7,000 people at the time), close multiple plants and shutter what had been Intel’s primary business – all in favor of transforming what had been a no-competition side business (microprocessors) into the company’s primary offering. Despite offers to license their microprocessor technology, Grove & Moore refused, vehement in their belief that Intel should be the only manufacturer. The rest, as they say, is history: Intel remains one of the world’s leading manufacturers of microprocessors, and the company is worth north of $170B today.
And all of it was made possible by two people being willing to check their ego at the door, critically assess what they were doing, identify a flaw, and implement the change necessary to address that problem.
There is no higher expression of love for yourself and your team than being candid, honest and unbiased in your assessment of your mistakes and shortcomings. After all, it is only through clear, specific, and sincere feedback (whether that’s from yourself or from your team) that you can grow and evolve as a person and a professional.
2. Kick Rationalization To The Curb:
Self-scouting is only effective if you’re willing to be brutally and unfailingly honest with yourself and your colleagues. Perhaps my greatest failing when first adopting this practice was rationalizing – I’d make excuses or justify why something was the way it was in an effort to make myself or my team feel better.
Rationalizing is like a siren song: it’s tempting. It’s alluring. It feels good. But, like Odysseus, you must resist it by any means necessary. Why? Rationalizing is cheating. Sure, it might feel good in the moment – but over the long term, the only thing it accomplishes is depriving yourself of the opportunity to improve.
The simple truth is this: what is done, is done. You can’t rewrite the past – you can only learn from it. Anything that inhibits that process – whether it’s rationalizing, sugar-coating, sweeping issues under the rug, whatever – must go.
3. Keep Your Eye On The Prize
Warren Buffett once said, “If you put a police car on anyone’s tail for 500 miles, they’re going to get a ticket.” The same is true when self-scouting (or auditing): flaws, misses and imperfections are everywhere; anyone can find a flaw with anything if they look long and hard enough.
But finding flaws for the sake of finding flaws isn’t what self-scouting is about; the objective is to find flaws that hinder your brand’s (or your client’s) ability to achieve their desired outcome.
When I conduct self-scouting, I analyze everything in the context of, “Does this impact or impede our ability to achieve or exceed the ultimate objective – and if so, to what extent?” Where the rubber meets the road here is with our prioritization structure:
- Red Light: Critical / Immediate Priority. Anything that has a material, adverse impact on our ability to achieve our ultimate goal is given this designation. In ad accounts, this might be significant wasted spend, poor targeting, low/no delivery, incorrect or broken conversion tracking, misconfigured or inaccurate shopping feed, broken lead forms or checkout, dominated offer, etc.
- Yellow Light: Warning / Moderate Priority. This categorization is assigned to any issue that is or has the reasonable probability of impacting performance, but does not rise to the level of a red light. Examples in ad accounts might be low-performing ad copy, missed creative angle, improper negative KW mapping, over-fragmented ad sets, and/or incomplete shopping feeds. These are all issues that need to be addressed, but only after the critical priorities have been resolved.
- Green Light: Opportunities + Minor Issues / Low Priority. Any findings that are deemed important, but below the above are categorized as “green” – these are areas of opportunity that are not negatively impacting current performance, but should be addressed at some future point. In terms of ad accounts, this might be opportunities for ad copy testing, new ad group buildouts, slight revisions to current audiences, new experiments (PMAX, A/B testing, Demand Gen), or excluding terms with impressions but no clicks from the search terms report.
- No Light: Anything that doesn’t fall into the above buckets is given a “no-light” designation. Things on this list are monitored for ongoing observations – after all, what might not be an issue or priority today could become one tomorrow. Having a running list of other things found saves time in the future and maximizes the value of each self-scout. Think of “no-light” like a parking lot for things you found, but aren’t sure what to do with.
Using this structure ensures two things: (1) that our scouting focuses on things of import, not of opinion or preference and (2) that our go-forward plan is guided by our north star (the “ultimate objective” from above).
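For teams that track findings in a spreadsheet or script, the four-tier structure above maps cleanly to code. Here’s a minimal sketch in Python – the `Finding` fields and the example findings are hypothetical, purely to illustrate the triage logic:

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    RED = 1       # Critical: material, adverse impact on the ultimate objective
    YELLOW = 2    # Warning: probable performance impact, below critical
    GREEN = 3     # Opportunity / minor issue: address after the above
    NO_LIGHT = 4  # Parking lot: monitor, revisit on the next self-scout

@dataclass
class Finding:
    description: str
    impacts_objective: bool  # does it impede the ultimate objective?
    material: bool           # is the impact material right now?
    opportunity: bool        # improvement idea rather than a defect?

def triage(f: Finding) -> Priority:
    """Assign a light to a finding, mirroring the prioritization logic above."""
    if f.impacts_objective and f.material:
        return Priority.RED
    if f.impacts_objective:
        return Priority.YELLOW
    if f.opportunity:
        return Priority.GREEN
    return Priority.NO_LIGHT

findings = [
    Finding("Broken conversion tracking", True, True, False),
    Finding("Low-performing ad copy", True, False, False),
    Finding("New ad group buildout", False, False, True),
    Finding("Competitor changed landing page", False, False, False),
]

# The go-forward plan is simply the findings ordered by light.
worklist = sorted(findings, key=lambda f: triage(f).value)
```

The value of encoding it this way is that the sort order *is* the action plan – red lights bubble to the top automatically, and nothing labeled “no-light” can crowd out a critical item.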
4. Action Is The Imperative:
Reflection that does not precipitate change is daydreaming by another name.
That may sound harsh, but it’s true. At the end of the day, none of us are in the “daydreaming” business – quite the opposite, in fact. We’re in the business of getting (ever-better) results. We’re in the business of consistent, continuous improvement. That’s what our clients, bosses, shareholders and partners expect – after all, if you’re not getting better, you’re getting worse. The bar for remarkable always rises – so either we continue to clear it, or it knocks us out.
It’s surprisingly easy to self-scout, to be reflective, maybe even to make a list – only to disregard or deprioritize it in favor of dealing with the never-ending series of fires, emergencies (real or imagined) and other shiny objects life throws at us. Even the prioritization system I shared above is only useful if it is acted upon; labeling a bunch of things “red lights” and then doing nothing is a profound waste of time.
To avoid this trap, start your self-scouting where you stopped last time: look at the list of red light / yellow light / green light items – how many continue to be an issue? More importantly, what action(s) were taken to address each one? If none, why? If there were actions taken, but the issue persists, why?
5. Embrace The “5-Why” Mentality:
One of my favorite techniques for understanding what’s going on – and what’s going wrong – is the 5 Whys. This was originally developed by Sakichi Toyoda (founder of Toyota Industries, the forerunner of the Toyota Group), and was used to help workers understand the root causes of problems so they could avoid making the same mistake multiple times.
It works something like this:
- Start with a Problem Statement: The first step in solving the problem is understanding it well enough to state it simply and clearly (or, to quote Charles Kettering, “A problem clearly stated is a problem half-solved.”). To use a concrete example, I recently did this with an eCommerce brand that had sales declining substantially in Q1 vs. the prior year, despite consistent traffic + marketing investment.
- Ask “Why” The Problem Occurs: What is the reason the problem above is occurring? To continue the above example, the declining sales were directly attributable to a 27% [not actual number for client confidentiality] decline in conversion rate.
- Repeat “Why” for the Previous Answer: Pretty simple – ask what directly precedes the issue/problem in the causal chain. Q: “Why has our conversion rate declined 27% vs. the same period last year?” A: “Our completion rate between add-to-cart and successful checkout has fallen 50%.”
- Rinse & Repeat this Process at least 3 More Times: Continue asking “why” the preceding answer occurs, until you’ve reached the root cause. In the above example, the immediate cause was that the checkout process only worked on certain device/browser combinations following an update – and the root cause was that the development team never QA’d their work, and no one else audited checkout. Oops.
What I adore about this methodology is that it works comprehensively and holistically – both for surfacing technical issues, as well as identifying the process breakdowns that enabled those technical issues to manifest in ways that negatively impacted the organization (i.e. lost sales).
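One practical habit is to record each chain as data, so the answers (and the root cause they terminate in) survive past the meeting. A minimal sketch using the eCommerce example above – the wording of each answer is paraphrased from the narrative, not taken verbatim from any client doc:

```python
# Problem statement + the chain of "why" answers from the example above.
problem = "Q1 sales declined substantially YoY despite consistent traffic + spend"

whys = [
    "Conversion rate declined ~27% vs. the same period last year",
    "Checkout completion (add-to-cart -> purchase) fell ~50%",
    "Checkout failed on certain device/browser combinations after an update",
    "The development team never QA'd the update",
    "No one owned auditing the checkout flow after releases",
]

def root_cause(problem: str, whys: list) -> str:
    """The last answer in the chain is the root cause; everything
    before it is a symptom or an intermediate cause."""
    assert len(whys) >= 5, "keep asking why until you reach a process failure"
    return whys[-1]
```

Note that the final “why” lands on a process failure (no ownership of QA), not a technical one – which is exactly the pattern the methodology is designed to surface.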
6. Build Your Case On Data-Informed Insights:
American statistician W. Edwards Deming famously wrote, “In God we trust; all others bring data.” – and he was right.
One of the easiest ways to fail in self-scouting is to default to opinion, preference or taste. When doing this exercise with various people, I’ve heard everything from, “I don’t like the creative, so that must be the weak link in the funnel,” to “the lander is ugly – that’s why people aren’t converting,” to “this account structure is so convoluted – it was bound to collapse.”
All of those are plausible stories derived from a desire to attribute a known problem (lower sales, fewer leads, etc.) to an initial set of observations (ad account structure, creative quality, landing page experience) – but all of them are, at best, hypotheses.
Self-scouting, when done well, is anchored in quantitative AND qualitative data.
That means digging into your analytics, ad accounts, user behavior, CRM, CDP, etc. to surface the relevant data points, then validate or reject the hypothesis above.
The same thing holds true for bigger business questions. As an example: a professional services provider saw a dramatic decline in pipeline during Q1. As part of a quarterly wrap-up / debrief, the leadership team held a self-scouting session – and surfaced a number of theories as to causes of this situation:
- Inbound lead volume was substantially lower
- Paid acquisition was not driving qualified leads at the same rate as before
- The BD lead was not doing enough direct outreach
- Macro conditions were negatively impacting buying decisions
Each of these was plausible – and if the team had stopped here, or assumed that the truth was one of these four and acted to address all of them, it would have made a colossal mistake. It was only after pulling the relevant quantitative and qualitative data that the actual root causes emerged.
- Inbound lead volume was up vs. the previous year (from 32 to 50 leads).
- The same was true for SQLs (13 of 50, or ~26% – below the target of ~35%, but that could be noise – after all, the gap between 26% and 35% at this volume is only ~4-5 SQLs), so neither of those was definitively the issue.
- Paid acquisition was responsible for the lion’s share of those leads – organic was stable, referral was up slightly, paid was up substantially.
- BD direct, cold/lukewarm outreach was below target, but only by ~30% – while an issue, mathematically it wasn’t sufficient to account for the lead drop-off.
- Macro conditions were choppy, but in digging through closed/won, closed/lost and open leads in the CRM, there was no indication in any notes that recession fears, macroeconomic conditions, budget freezes, project postponements, limited CapEx or anything of the sort was more or less frequent than in the prior year or any comparable period (i.e. not comparing it to Q1 2020 or Q1 2021).
As it turns out, the primary issue was a simple one: no follow-up to a substantial portion of inbound leads from one new channel that was brought online late in Q4. There was an email workflow built, leads were appended and qualified in the CRM, but the follow-on outreach was never done. That’s it. ~40% of the total inbound leads (~20) were never contacted directly by a member of the BD team.
When that data point is added, the picture becomes clear:
- Inbound leads were far more qualified than initially believed: 13 SQLs / 30 Contacted Leads = 43.3% qualification rate, vs. 26% initially believed.
- Lower Direct Outreach = true, but insignificant relative to other issues
- Macro Conditions = completely false, as more contacted prospects were moving through qualification stages than before
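The arithmetic behind that reversal is worth making explicit. A quick sanity check of the qualification rates, using only the numbers from the example (50 inbound leads, 13 SQLs, ~20 never contacted):

```python
# Recomputing the pipeline numbers from the example above.
total_inbound = 50
sqls = 13
never_contacted = 20
contacted = total_inbound - never_contacted  # 30 leads actually worked

# What the team initially believed: SQLs over ALL inbound leads.
naive_qual_rate = sqls / total_inbound   # 13 / 50 = 26%

# The true picture: SQLs over leads that were actually contacted.
true_qual_rate = sqls / contacted        # 13 / 30 = ~43.3%
```

Same numerator, different denominator – which is why “leads are less qualified” looked plausible until the uncontacted leads were stripped out of the base.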
This exercise was only effective because the self-scouting was grounded in actual data, not assumptions or narratives. The same is true in ad accounts, campaigns, etc. – it’s easy to tell a story around some observations and facts. It’s exponentially more difficult – but exponentially more valuable – to compile the relevant data and determine what’s actually causing the issue.
7. Think Holistically + Comprehensively:
As the above example illustrates, it is in our nature to seek proximate causes for observed phenomena – pipeline is down because leads are down or BD outreach is down (the closest proximate causes).
The more I do self-scouting, the more I’ve come to realize that proximate causes are rarely root causes; you have to look deeper and broader than you initially believe in order to find the drivers. The larger and more complex the organization, the longer, harder and deeper you need to dig.
My suggestion for addressing this is to put yourself into the mind of the customer/client/prospect – not just behind the keyboard.
This is the Stanislavsky System, but for marketing/business. The gist/history (you know, so you can be more fun at trivia night): Konstantin Stanislavsky was the cofounder of the Moscow Art Theater, during which time he developed a system for how actors could more convincingly embody the characters they were portraying, using the “Magic If”: what would the actor do if s/he were in the circumstances of the character they were playing? Lee Strasberg, who trained under Stanislavsky’s students, built on this core concept, eventually developing the “Method”, which has been used by many of the world’s most acclaimed actors (Al Pacino, James Dean, Marilyn Monroe, Paul Newman, Dustin Hoffman, Angelina Jolie & Scarlett Johansson, to name a few).
What these approaches have in common – and what you should take from them – is the notion of doing what your subject – your target audience – would do, not what you think they should (or will) do.
A concrete example of this – and something I’ve shared multiple times throughout this newsletter – is our approach to SERP research. I demand that our team actually query the keywords we are bidding on in Google and/or Microsoft Ads. I expect them to examine the SERPs. To scrutinize the ads on the page. To review the organic listings. To check out the SERP features. Why? Because all of that is what our audience sees – and if we don’t know what they’re seeing, we can’t act in a manner that maximizes our client’s chances of standing out.
And once they’re done reviewing the SERP, I expect them to go through the customer journey on both our site AND our competitor’s site. Ask critical questions. Spot gaps or raise questions that someone in our audience’s position (mindset, information level, challenges, goals, etc.) would have. For many clients, we go even further: we book calls with the sales team AND attend them. We purchase products (and pay!), just to experience the post-conversion flow and test the product ourselves. We even moved banks so we could understand what the migration process was like, and what it was like to use their systems / work with their team.
Is that overkill? Maybe. But anything worth doing is worth doing brilliantly well. I firmly believe that this type of self-scouting makes us better marketers and better partners to each of our clients – and that alone justifies the cost involved.
For a concrete example:
We had a local client that sold licensed wall/fan art (think something like Displate), primarily to gamers. It was a very cool, differentiated business. This client came to us because they were struggling to get any traction on Meta/Google, despite having a competitive price point, exceptional creative assets and a clean, straightforward user experience. It didn’t matter how large the discount – they tried $10 off $50, 20% off, even 50% off (which would have made their art nearly 60% cheaper than their competitors) – none of it was working. People weren’t buying.
As we started to review everything, we noticed some anomalies in the data, specifically between page 2 (confirm order and address) and page 3 (enter CC and complete transaction) of their checkout flow: the dropoff was 90%. Only 1 in 10 people who added their address went on to purchase. That’s staggeringly low. We thought it had to be a technical issue, so we tested the daylights out of it. Everything was fine. Tax, shipping, etc. all calculated correctly, no issues on any device types, no rendering problems, no page speed problems, nothing. We compared offerings, pricing, etc. to major competitors – no difference. We went through the checkout process. No difference. We even went so far as to order pieces from both their store and their competitors – and our client’s checkout, packaging, unboxing and post-conversion flows were all significantly better.
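For reference, that drop-off figure is just the complement of stage-to-stage conversion. A tiny sketch with hypothetical page counts matching the 1-in-10 ratio described above:

```python
# Hypothetical visitor counts illustrating the 90% drop-off described above.
funnel = {
    "confirm_order_and_address": 1000,  # page 2 of checkout
    "complete_transaction": 100,        # page 3: only 1 in 10 purchased
}

stages = list(funnel.values())
conversion = stages[1] / stages[0]  # 0.10 -> 1 in 10
dropoff = 1 - conversion            # 0.90 -> the anomaly we spotted
```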
We were missing something. It was absolutely maddening.
Here’s where the method above comes in: we pulled the customer analytics for where their abandoned carts were located – it was everywhere EXCEPT local (within ~30 miles). Interesting.
So, what were people seeing everywhere else that we were not? We fired up the VPN, and viewed the page from LA. Nothing. No difference. Then we realized that if we’re in LA, we’re certainly not shipping this to Baltimore (the address we were using) – we’re shipping it to where we live – in LA. So we added an LA address.
Boom.
The issue hit us like a Mack truck: the shipping cost for a ~$50 piece of art was being calculated at $94. It turns out that the shipping API was misconfigured, leading to non-local products being quoted at international shipping rates. Once that was fixed, checkout completion rates returned to normal.
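To make the class of bug concrete, here’s a hedged sketch of what a misconfiguration like that can look like: a rate lookup whose fallback zone is “international” rather than “domestic”. The ZIP codes and rates below are invented for illustration; only the ~$94 international quote comes from the story above:

```python
# Assumed local (store-area) ZIP codes and rate table -- purely illustrative.
LOCAL_ZIPS = {"21201", "21202"}
RATES = {"local": 8.00, "domestic": 14.00, "international": 94.00}

def quote_shipping_buggy(zip_code: str) -> float:
    # Bug: any non-local address falls through to the international rate,
    # so a ~$50 order ships for $94 everywhere outside the local zone.
    zone = "local" if zip_code in LOCAL_ZIPS else "international"
    return RATES[zone]

def quote_shipping_fixed(zip_code: str, country: str = "US") -> float:
    # Fix: only genuinely foreign addresses get the international rate.
    if country != "US":
        return RATES["international"]
    return RATES["local" if zip_code in LOCAL_ZIPS else "domestic"]
```

The bug is invisible to anyone testing from a local address – which is exactly why it only surfaced once we checked out with an LA address, as the audience would.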
The only reason we caught that issue was because we examined the problem from the perspective of the people who were experiencing it. And while we didn’t cause the issue, we were able to resolve it only once we did the self-scouting for our client.
As marketers or business owners, it’s natural to fall in love with your work. It’s easy to see the solution from a helicopter above the maze (and easier still if you’re the one who designed the maze in the first place). It’s considerably more difficult to see the solution when you’re standing in front of the maze, wondering which way to go. By putting ourselves in the same position as our audience, we can experience the breakpoints for ourselves, then act to fix them.
This gets to the heart of self-scouting: it’s about looking at what we’ve done from another perspective, with the goal of uncovering ways to make it better in the future.
There’s a reason most high-performing teams invest inordinate amounts of time and energy in self-scouting: it gives them an edge AND it blunts the impact of competitor counters. If you fix a problem before a competitor can capitalize, you’ve won twice: once because you thwarted their effort, and again because the time/money/energy they spent devising that effort is lost and can’t be redirected toward a different approach/strategy.
We all know that both our audience and our competition are evolving at accelerating rates; we know that the bar for remarkable always rises, and that every day is a fight against gravity. Self-Scouting is one tactic you can (and should) employ to gain an edge.
I firmly believe that self-scouting is akin to going to the gym: it’s miserable at first. It flat-out sucks. But as you integrate it into your routine (I recommend doing it at least quarterly, if not more frequently), you’ll grow to love it. You’ll find it sharpens your strategic thinking, it broadens your perspective, it makes you a better marketer and a better leader. Think of it like “Marketer Optimization” – just as CRO is a structured process for website/app improvement, self-scouting is a structured process for improving yourself (and your clients).
It’s not always fun. It certainly isn’t always pleasant. But problems exist independent of your awareness of them; you can choose to bury your head in the sand, or to proactively seek them out and address them.
Happy scouting!
-Sam