
The RAMP Method For Website Optimization

by Sam Tomlinson
March 5, 2024

This issue was inspired by a number of website RFPs that we’ve participated in. Each RFP requested a “best-in-class” site – but then proceeded to describe (in detail) how the previous site was launched 3-5 years ago and maintained by the team ever since. 

What I’ve just described is a “stair” approach to website / landing page development and management: 

To be blunt: this is a phenomenal way to light money on fire, for three reasons: 

  1. The Bar Is Always Rising: your target audience’s expectations aren’t set by your brand or your competitive set; they’re set by the largest and most sophisticated brands on the planet – and those brands are hell-bent on continuous, relentless, seize-every-possible-opportunity optimization. The end result: expectations are always rising; what was remarkable in 2021 is table stakes in 2024. 
  2. Diminishing Returns Are A Silent Killer: let’s assume that when you launched your best-in-class website, it actually was (spoiler alert: it probably wasn’t) – and it performed as such, with exceptionally high CVRs, solid organic rankings and so on. But, over time, the features that made your site remarkable were co-opted by your competitors, causing your site’s performance to slowly (but surely) deteriorate to the point where you felt compelled to invest in a new site. Here’s the thing: there are years of suboptimal performance that are glossed over, but which could easily add up to millions in lost revenue. 

Consider this example, adapted from a real brand in a lead generation industry: 

                 2020       2021       2022       2023       Net Change
Ad Spend         $1.2M      $1.4M      $1.75M     $1.85M     +$650k
Total Visits     496,062    530,948    594,551    578,947    +82,885
Total Value      $32.3M     $34.2M     $31.7M     $25.3M     -$7M

For the first two years of the “new” website, everything looked groovy: conversion rates were well above normal, traffic and conversion volume were rising, and revenue was growing. Then: competitors caught up and re-launched their sites (2022), and by 2023, this brand was just above average in terms of conversion rate. 

All the while, the brand was forced to compensate for declining overall performance with increased ad spend (+$650k/yr) – and even that wasn’t enough to save them from a $7M decline in total value (in nominal dollars, before even accounting for inflation). 

In fact, had the brand maintained its 2020 conversion rate in 2023 – changing nothing else – the site would have generated approx. $43.7M in value. The failure to evolve cost this brand a total of over $30M from 2020 through 2023. Yikes. 

  3. There’s a Boatload of Risk: the stair method is one of the riskiest ways to manage your online presence, because all of your eggs are in a single basket: the new website. If that flops – for whatever reason – you’re up the proverbial creek without a paddle. 

This risk is compounded by the very nature of this approach: since you’re not doing anything to evolve your site in the intervening period, the data and knowledge base on which you can base new site design, UI/UX and experience decisions is (at best) antiquated and (at worst) flat-out wrong. 

The number of organizations – including agencies! –  taking this approach to their website is gobsmacking; I’d go so far as to say this is the predominant form of website management. 

The alternative to this is what I’ve termed the RAMP² Method: 

R: Research

A: Apply

M: Measure

P: Pivot + Persist

Here’s how it works, and how you can apply it to your organization and/or your client’s websites: 

R: Research

The extent of most website research is installing GA4 (or similar) on the site – something which, if not supplemented, does more harm than good because it creates the illusion of accountability and data-driven decision-making. I’ve been in more meetings than I can count where the response to a website data question is, “Of course we analyze website performance! We have Google Analytics installed!” 

This isn’t me poo-pooing GA4 (you should definitely have it installed + properly configured); to the contrary, this is me saying that GA4 alone is woefully inadequate if your goal is to maintain a remarkable, best-in-class website. 

Here’s what else you need: 

  1. Heatmaps – I absolutely love heatmaps and screen recordings for anything related to website optimization, for four reasons:
    1. Visualize User Behavior – there’s a material difference between seeing numbers on an analytics screen and an actual, color-coded heatmap of your site. A great heatmap shows how users move through your site – from what they look at, to what they hover over, to where they click. This data can be used to identify and prioritize areas for optimization. 
    2. Reveal Hidden Bottlenecks – Most websites are littered with hidden bottlenecks, dead-ends or confusing user paths, all of which are near-invisible in standard web analytics platforms (unless you really want to get into behavior flow reports). 
    3. Prioritize High-Impact Changes – most brands spend way too much time and effort tinkering with components of a page that are rarely, if ever, seen by their target audience. A heatmap enables you to visualize where your audience spends their time – what they look at, where they click, how they engage and what they ignore. This data enables you to prioritize your changes based on what is likely to have the greatest impact – whether it’s optimizing a prominent header section, or addressing an oft-ignored form. 
    4. Validate Test Results – Finally, heatmaps are critical for validating test results and experimental changes across your site. A heatmap enables you to see the modifications in user behavior (not just performance metrics) that resulted from the changes made. 

Heatmaps are even more effective when combined with session recordings, which let you review exactly what a specific user did during a specific session. This is especially powerful when paired with feedback surveys (more on that later) – it’s one of the most effective ways to find issues long before they become problems. 

My two favorite Heatmap providers (no affiliation, just love) are: 

  1. – if you’re an ecommerce business, their revenue-based heatmaps are absolutely phenomenal. The ability to visualize not just click activity, but revenue generated, on an element-by-element basis is a game-changer. I’ve yet to find anything else on the market that provides this level of granular, revenue-centric analysis. 
  2. Microsoft Clarity – If you are not an ecommerce business, then you probably can get by just fine with Microsoft Clarity – it’s among the best heatmap solutions around, and it’s 100% free. 

2. User Surveys – I’ll never understand why so few brands run on-site user surveys. They are, bar-none, one of the most effective (from both a cost and results perspective) ways to learn more about your audience and how they interact with your brand. There are many different ways to run these, from small corner slide-ins to pop-ups to full-screen questionnaires (maybe don’t do those) – all of them help provide additional qualitative insights that can be used to validate hypotheses, identify issues and ultimately, deliver a superior experience. 

While I love open-ended questions, I believe that website experience surveys are best kept simple, with 2-, 3-, 5- and 10-point scales:

2 Point Scale Example: Thumbs Up / Thumbs Down 

3 Point Scale Example: Happy Face – Neutral Face – Sad Face 

5 Point Scale Example: Rate Your Experience on the site, from 1* to 5* 

10 Point Scale Example: How likely are you to recommend this site to a friend (1 = not at all likely; 10 = extremely likely) – this is a standard NPS methodology
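The 10-point question above maps directly to the standard NPS calculation – a minimal sketch (the respondent scores are hypothetical):

```python
# Standard NPS bucketing: 9-10 = promoters, 7-8 = passives,
# everything below = detractors. Score = %promoters - %detractors.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 9, 8, 7, 6, 10, 3, 9, 8, 10]  # hypothetical survey results
print(nps(responses))  # 5 promoters, 2 detractors out of 10 -> 30
```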

What takes a user survey to the next level is an integration with your Google Analytics and/or Heatmap data – something that’s quite easy to do, but rarely done. This allows you to (for instance) compare heatmaps of people who gave it a Thumbs Down vs. a Thumbs Up, or review session recordings of people who clicked the sad face to see what caused the bad experience. You can also pair this insight with (as an example) negative event data in GA4 (I wrote about that here), to get even more insight on points of friction. 
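Here’s a minimal sketch of that pairing, assuming your survey tool and your session-recording tool both expose a shared session ID (most do – though the field names and data shapes below are hypothetical):

```python
# Join survey responses to session recordings on a shared session ID,
# then surface the recordings worth reviewing first: the unhappy users.
# All identifiers and data shapes here are illustrative.

surveys = [
    {"session_id": "a1", "rating": "thumbs_down"},
    {"session_id": "b2", "rating": "thumbs_up"},
]
sessions = {
    "a1": {"pages": 7, "rage_clicks": 3},
    "b2": {"pages": 3, "rage_clicks": 0},
}

to_review = [
    sessions[s["session_id"]]
    for s in surveys
    if s["rating"] == "thumbs_down" and s["session_id"] in sessions
]
print(to_review)  # [{'pages': 7, 'rage_clicks': 3}]
```

The same join works in reverse: segment your heatmaps by rating to see how the behavior of happy and unhappy visitors differs.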

3. Competitive Analysis – Finally, and perhaps most critically, is ongoing competitive analysis. Done properly, this should go well beyond simply visiting your competitors’ sites, and include detailed information on: 

  • User Flow – how does the target audience move through their site? What features and functionalities have they put into place to support this flow? 
  • Page Structure – how are their pages structured? What content types are where (i.e. prominent statements, proof points, testimonials, statistics, feature call-outs, etc.)? How do these compare with your site? 
  • Capture Points – how are their forms structured? What questions are asked? How (if at all) is the captured data re-deployed into a post-conversion experience? (This will require you to complete some forms!) 
  • Friction Points – no site is perfect; where is this site falling short? What are the issues? 
  • Offers & Angles – I wrote about this extensively in Keeping Up with the Joneses, but it bears repeating here: how are your competitors positioning their offers to their audience? 

Organize all of your findings into a Google Sheet – make a tab for recommended tests, a tab for “neat-but-not-for-us”, and another one for “issues/friction points”. Finally, I’d highly recommend creating one for “best-in-class” experiences from any industry, and encouraging everyone on your team to share examples, screenshots and links, along with the specific features/functionality that they found exceptional. 

A: Apply

All of the research in the world is useless unless it can be applied. In the case of websites, this means continuous testing – from the radical (a brand-new homepage concept) to the incremental (a font size/color change to a prominent but under-viewed component). 

In an ideal world, you should have three kinds of changes being made to your website on a regular cadence: 

  1. Regulatory: Ongoing site accessibility testing is a good life choice – it’s low-cost, low-effort and can keep you out of a lot of legal hot water. We use AccessibleWeb, but there are plenty of quality options out there. 
  2. Error Fixes: No matter how hard you try, there will always be random issues with your site, whether it’s broken links or weird glitches or dead-end pages or something else. Negative Events can help you identify those, but you still need to fix them on a regular cadence. 
  3. Iterative Improvements: (1) and (2) are focused on keeping your site functioning at maximum capacity; this one is focused on increasing that capacity. I recommend following the 10% or 10X approach: either focus on making a clear, measurable, incremental improvement, or take a big swing and try something new. For most brands, you should have website tests (either site-wide, on a specific page, on a group of pages) running regularly (monthly is ideal, if possible). 

While Google Optimize is no longer around (RIP), there are plenty of high-quality A/B and Multivariate testing platforms out there – from VWO to Optimizely to Omniconvert to AB Tasty. 

They’re all pretty comparable – so just find one you’re comfortable with and actually use it.

M: Measure

This one is pretty self-explanatory: you have to measure the impacts of your tests – both from a quantitative and qualitative standpoint.
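On the quantitative side, a two-proportion z-test is enough to sanity-check whether a conversion-rate difference is real or just noise. A stdlib-only sketch with hypothetical numbers (the testing platforms mentioned above do this for you, but it helps to see the math):

```python
# Two-proportion z-test for a conversion-rate A/B test.
# conv = conversions, n = visitors; all numbers below are hypothetical.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return z, p_value

# 5.0% control vs. 6.5% variant, 2,400 visitors per arm
z, p = z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z={z:.2f}, p={p:.3f}")
```

A p-value below your chosen threshold (0.05 is conventional) suggests the lift is real; anything above it means keep the test running or log it as inconclusive.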

My recommended solution is to maintain a Google Sheet (or something more formal, if you’re fancy) with each test you’ve run, the quantitative and qualitative results, and the step(s) taken following the test (was it implemented across the site? Inconclusive? Re-run? Failed?). 

Finally, don’t be afraid to re-test things – I often see organizations that ran a test years ago, saw it fail or prove inconclusive, and then never tested it again. Here’s the thing: people change. Technology changes. What failed yesterday might just work today – so re-test. Validate your learnings over and over again. 

P²: Pivot + Persist

The final component of this plan is both the simplest & the most difficult: pivoting & persisting. 

Pivot – continuous improvement is rarely (if ever) linear and predictable; it’s volatile. You’ll have to pivot your approach based on the competitive landscape, emerging technologies and shifting consumer/user preferences.

Persist – most continuous improvement programs fail not because they were unsuccessful, but because they were inconsistent: after a few failed tests, the plug gets pulled. This is the exact opposite of what should happen. Failed tests are a good thing: they validate that what’s currently on the site is better than the best thing your team could come up with – that’s great news! 

Finally, and most importantly, remember that as your site’s performance improves, the probability that a subsequent test will provide incremental performance lift declines. There are only so many possible permutations of a site, experience, structure, etc. – and once you’re close to optimal, the 10% improvements are difficult to come by (which is why you should pivot to doing more 10x tests!). 

If you follow this method, you’ll end up with website performance that looks more like a ramp: gradual improvements over time, along with a treasure trove of customer + audience insights that can be leveraged across your organization + strategic priorities. 

That’s all for this issue!

