10% + 10x Philosophy
Let’s talk about testing – controversial, I know. But most brands are testing the wrong things in the wrong ways, and killing their growth in the process.
The traditional approach:
Most brands’ testing strategies focus exclusively on incrementalism: making stepwise improvements over a sustained period of time through A/B and/or multivariate testing. These tests tend to isolate a single variable (or variable set) – anything from imagery or messaging to font color or CTA – then attempt to find a statistically significant improvement. In practice, this typically yields performance gains of +0% to +25% (a significant number of the tests we see produce no improvement, or are inconclusive).
There’s nothing inherently wrong with this approach – but depending on your brand’s stage, it’s often a waste of time, money & energy, especially for emerging/small brands (sub-$25M/yr).
To understand why, let me illustrate with an ecommerce example (and note that the reasoning holds true for B2B / Lead Gen businesses as well):
Assume a pre-scale brand doing $5M/yr in online revenue converts at 2% (a global average conversion rate) with an AOV of $250. 30% of sales are from existing customers, while the remaining 70% are new-to-brand (again, pretty standard). In this scenario, the brand is adding an average of 1,167 new customers per month on ~58,334 visits per month through a single funnel (applying the 2% conversion rate to those new-customer orders).
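For reference, these figures can be reproduced with a few lines of arithmetic (a quick Python sketch; note that it applies the 2% conversion rate to the new-customer slice of orders, which is how the ~58,334-visit figure falls out):

```python
# Back-of-envelope funnel math for the hypothetical $5M/yr brand.
annual_revenue = 5_000_000   # online revenue, $/yr
aov = 250                    # average order value, $
cvr = 0.02                   # conversion rate
new_customer_share = 0.70    # 70% of sales are new-to-brand

orders_per_month = annual_revenue / aov / 12                     # ≈ 1,667
new_customers_per_month = orders_per_month * new_customer_share  # ≈ 1,167
visits_per_month = new_customers_per_month / cvr                 # ≈ 58,334

print(round(new_customers_per_month), round(visits_per_month))
```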
The brand decides to run an A/B test to improve conversion rate, and succeeds – improving by 10% (from 2.0% to 2.2%) over a ~30 day period (including set-up and implementation). That adds ~116 orders per month on the same traffic (a HUGE win!); so they double down, run another test, and succeed again (+10% more – going from 2.2% to 2.42%) – another +128 orders/month. This leads to another green-lit test – though this one comes back inconclusive or with no lift. In total, the brand has spent ~3 months increasing their total orders by ~244 orders per month.
This approach is repeated the following quarter, with some success and some failure (some experimental variants even perform worse than the control). All in all, the brand ends up improving conversion rate to 2.50% over the period in question. While this seems all well and good, the reality is that the opportunity cost of these experiments is staggering – and likely outweighs the benefits gained if the brand is seeking to scale.
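The quarter-one arithmetic can be reproduced directly (a Python sketch; the visit count, AOV and sequence of lifts come from the example above):

```python
visits = 58_334
aov = 250
cvr = 0.02

# Quarter 1: two +10% wins, then one inconclusive test (+0%).
for lift in (0.10, 0.10, 0.0):
    cvr *= 1 + lift

orders_gained = visits * (cvr - 0.02)   # ≈ +245 orders/month (the ~244 in
                                        # the text comes from rounding)
# Quarter 2 ends at 2.50% per the example:
final_gain = visits * (0.025 - 0.02)    # ≈ +292 orders/month
print(round(orders_gained), round(final_gain), round(final_gain * aov))
```

The six-month endpoint (2.50% CVR) times the $250 AOV is where the ~$73k/month revenue figure cited below comes from.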
To illustrate, here are the three major downsides of this approach – none of them immediately obvious, but each capable of crippling the brand’s long-term growth:
- Local vs. Global Maxima – The first (and most significant) issue is that the brand never tested whether it was climbing toward a global maximum or merely a local one – and as a result, spent six months (and likely $150k+ in ad spend) improving a suboptimal lander. The payoff – while initially significant (~+$73k in new revenue/month) – is relatively small compared to what this brand needs to scale to the next level (~8 figures), and candidly, doesn’t offer a viable path to getting there any time soon. Put another way: incremental improvements take you to the top of the mountain you’re currently on. If that’s Mt. Everest, awesome. If that’s Cadillac Mountain (elevation of only ~1,500 feet) – less awesome. The only way to find out whether you’re on the right mountain is to take some MAJOR detours.
- Mean Regression – In mathematical terms, mean regression (regression to the mean) is the tendency of a random variable that measures outside the “norm” the first time to land closer to the norm on a subsequent measurement. This happens for a variety of reasons, though the primary ones are luck, the novelty effect and sampling error. In testing terms: a variant that “wins” partly through luck will usually give back some of that lift once it’s rolled out.
- Summit vs. Slope – Finally (and keeping with the mountain analogy), there’s what I’ve termed the “summit vs. slope” issue: the closer you get to a maximum (the “summit”), the more likely it is that a subsequent step produces an *inferior* outcome. Or, in more concrete terms: at some point, you’ve made the best ad you can make for a given offer + angle.
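A quick simulation makes the mean-regression point concrete (a Python sketch with invented numbers; every “variant” here has the same true 2% conversion rate, so any observed “winner” is pure sampling luck):

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

TRUE_CVR = 0.02   # every variant truly converts at 2%
VISITORS = 5_000  # sample size per test cell

def measure(cvr, n):
    """Observed conversion rate from n simulated visitors."""
    return sum(random.random() < cvr for _ in range(n)) / n

# Run 20 "A/B tests" of identical variants and crown the luckiest one.
observed = [measure(TRUE_CVR, VISITORS) for _ in range(20)]
winner = max(observed)

# Measure the "winner" again: it tends to land back near 2%.
retest = measure(TRUE_CVR, VISITORS)
print(f"winner first pass: {winner:.2%}  |  retest: {retest:.2%}")
```

The winner’s first measurement overstates its true rate almost by construction (it was selected for being high), so the follow-up reading regresses toward 2%.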
What’s the alternative?
Imagine that the same brand spent those 6+ months testing radically different offers, angles or experiences – two of which fell flat (+0% improvement) and one of which resulted in a conversion rate of 7% (or higher) – a true “leap forward” for the brand (and one which provides a viable path to the $10M revenue threshold).
These are the “big swings” that most brands should be prioritizing – even though they: (a) feel reckless/uncertain; (b) violate the scientific method (which isolates a single variable); and (c) produce results that tend to be difficult to explain (i.e. we don’t know what caused the change, or how to replicate it in subsequent efforts). Despite all that, big swings are the ones that drive big outcomes – and for most brands, those are what propel them forward. Rather than continually trying to take steps up the mountain, think of it as calling the helicopter and getting dropped right by the summit. Sure, there’s some work to do after, but it’s a LOT easier and faster than climbing the whole mountain.
Again, this doesn’t mean that 10% incremental improvements are bad (they aren’t!); the reality is that they must be tempered with 10x bets. This is no different from how modern optimization algorithms function (many are designed to force massive “detours” in an effort to identify other maxima that aren’t readily apparent).
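That “forced detour” behavior is easy to see in a toy random-restart hill climb (a Python sketch; the two-peak “landscape” function is invented purely for illustration):

```python
import random

random.seed(0)  # fixed seed for a reproducible illustration

def landscape(x):
    # Invented "two mountain" curve: a small local peak (height 1.5)
    # near x = 2 and a much taller global peak (height 7) near x = 8.
    return 1.5 * max(0.0, 1 - abs(x - 2)) + 7.0 * max(0.0, 1 - abs(x - 8) / 2)

def hill_climb(x, step=0.1, iters=500):
    """Pure incrementalism: accept only small uphill moves."""
    for _ in range(iters):
        candidate = x + random.choice((-step, step))
        if landscape(candidate) > landscape(x):
            x = candidate
    return x

# Starting near the small peak, incremental steps top out at ~1.5...
local = hill_climb(1.0)

# ...while "big swings" (random restarts, each refined incrementally)
# find the much taller peak near x = 8.
restarts = [hill_climb(random.uniform(0, 10)) for _ in range(15)]
best = max(restarts, key=landscape)

print(round(landscape(local), 2), round(landscape(best), 2))
```

The restarts are the crude analogue of a big swing: most land on nothing, but one of them finds the taller mountain that pure step-wise climbing can never reach.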
Exponentialism works for one simple reason: incrementalism takes time – and time is the enemy of every brand trying to scale. Every day you aren’t jumping forward is a day your competition has an opportunity to overtake you and/or reinforce their relationship with your target audience.
This is why I advocate that every brand we work with adopt the 10% or 10x Philosophy for testing: every test you run – every bet you make – should be focused on one of two objectives:
The 10%: Enabling a component of your strategy – whether that’s email, SMS, paid social, paid search, retail media, page content, whatever – to perform 10% better on a go-forward basis. This is an extension of the aggregation of marginal gains: stacking small, incremental gains results in compounding over time. This is exactly what was illustrated in the above example.
The 10x: Fundamentally revolutionizing an aspect of your business such that it performs 10x better than what you have today. That might be a new product, offer or angle. It might be unlocking a new platform, creative type or channel (like SMS or retail media). Whatever it is, the goal is simple: find something that improves an aspect of your business by 10x vs. what is currently in place.
Think of the 10% or 10x philosophy as tempering “finding a faster horse” with “inventing the automobile.”
Implementing the 10% or 10x Philosophy:
To start, every test you implement should be categorized as either a 10% bet or a 10x bet, using the definitions above. The balance of those bets (10% vs. 10x) should vary based on the stage + scale of your company/business unit/organization relative to your addressable market.
At the ends of the spectrum:
Early Stage: For companies still trying to find and validate product-market fit (PMF), almost everything should be a 10x bet. If a brand is sub-$5M in revenue, just about every test should be a big swing. Don’t bother with 10% tests – the revenue numbers aren’t big enough for them to matter even when they hit. You’d have to compound 10% growth for a LONG time to reach meaningful scale, and that’s time most brands don’t have. This is where the brand illustrated above should have lived: taking big swings on high-leverage items.
Late Stage: For established companies with massive customer bases, 80%–90% of tests should focus on optimizing what you have; the remaining 10%–20% are the moonshots that propel your organization into the next S-curve. This is exemplified by many of today’s market leaders (Apple, Amazon, Alphabet) – each has a heavy focus on incremental optimization, because their core products have already found their global maxima, and improvements at the margins provide staggering value creation (even a 1% improvement on a $10B business component is a LOT of money).
And for where most brands fall – in the middle – there should be an even distribution of tests, which means at least 50% (yes, half!) of the tests you run should be 10x swings. Only once a 10x test hits should you pivot to 10%/incremental experimentation. Unfortunately, almost every brand I see is nowhere near that level of big-swing experimentation – and that’s a core cause of stagnating growth.
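To put a number on the early-stage argument: matching the example’s 2% → 7% jump purely with +10% incremental wins takes a surprisingly long streak (a Python sketch; inputs from the example above):

```python
import math

start_cvr = 0.02
target_cvr = 0.07  # the "big swing" outcome from the example
lift = 1.10        # one successful +10% incremental test

# Consecutive +10% wins needed so that 0.02 * 1.1^n >= 0.07:
tests_needed = math.ceil(math.log(target_cvr / start_cvr) / math.log(lift))
print(tests_needed)  # → 14
```

At roughly one test per month (and ignoring the inconclusive ones), that’s over a year of uninterrupted wins just to match a single successful big swing.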
Returning to the example above:
Let’s imagine the brand follows this philosophy, taking five 10x bets, followed by a single 10% bet once a 10x hits. Assume the first four 10x bets fail (+0%) and the fifth hits (conversion rate jumps to 7%, as noted above). That single change, holding everything else constant, adds nearly twice as many net-new orders in a single month (+2,916) as the incremental program generated cumulatively over its full six-month run:
| Month | Improvement | CVR | Cumulative New Orders |
| --- | --- | --- | --- |
| 1 | +0% (10x miss) | 2.00% | 0 |
| 2 | +0% (10x miss) | 2.00% | 0 |
| 3 | +0% (10x miss) | 2.00% | 0 |
| 4 | +0% (10x miss) | 2.00% | 0 |
| 5 | 10x hit | 7.00% | +2,916 |
Taking this one step further (and yes, there are PLENTY of brands that convert at 5%+) – imagine the brand uses the 6th month to test a 10% incremental improvement and achieves only a 5% lift (going from 7.00% to 7.35%) – that’s ANOTHER ~204 orders per month. That’s the kind of leap-forward progress that puts this brand on pace to reach 8 figures in revenue (a huge milestone).
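Side by side, the two paths look like this (a Python sketch; all inputs taken from the example):

```python
visits = 58_334
base_cvr = 0.02

# Incremental path: six months of testing ends at a 2.50% CVR.
incremental = visits * (0.025 - base_cvr)   # ≈ +292 orders/month

# 10x path: four failed swings, a fifth that hits 7.00%,
# then one +5% incremental tweak on top (7.35%).
big_swing = visits * (0.07 - base_cvr)      # ≈ +2,917 orders/month
with_tweak = visits * (0.0735 - base_cvr)   # ≈ +3,121 orders/month

print(round(incremental), round(big_swing), round(with_tweak))
```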
This invariably leads to, “Well, what are those big swings? How can I take them?” – so, here are some examples of high-leverage bets:
- Offer – this is the single most impactful lever for most brands (and the one our hypothetical brand above should have been testing). This isn’t just what you’re selling (the product itself) or how you’re selling it (discount, gift with purchase, free shipping, bundle, etc.), but how you’re supporting it.
- Audience / Angle – your angle + audience goes hand-in-hand with your offer; it’s how you position the offer + connect it with your target audience.
- Platform / Channel – we’ve seen brands unlock staggering levels of growth by successfully either (a) opening a new platform or (b) funneling everything into a single, high-performing channel.
- Experience – this runs the gamut: from how you help potential customers find the right product, to how you retain those who have converted (loyalty + retention), to how you re-engage those who have not. In an increasingly commoditized ecommerce ecosystem, experience is an underrated way for brands to stand out + drive outsized value.
Here’s the simple reality: most brands should be making a LOT more high-leverage bets than they are today. If you don’t believe me, look at the last ~10 tests you’ve run. I’m willing to bet that most (if not all) fall into the 10% bucket. This is your imperative to change that: balance your small bets with big swings (and take more big swings, especially if you’re a smaller, sub-$100M brand).
Once you’ve adopted this framework, testing & optimization becomes exponentially easier and more productive — because you’ve now set the parameters for every test + established the conditions for success. Want to do something that doesn’t fit into one of these buckets? Don’t – until you can refine or expand it to meet one of the two criteria (10x or 10%). This forces you to get crystal clear on what you’re trying to do (find a local maximum or a global one), and avoids the pitfalls of A/B-only testing.