If close rate is incomplete — and it is — then the question is what comes next.
Not as a concept. As a calculation. Something that runs on data the operation already has, produces a number that can be tracked monthly, and changes what managers actually see when they look at rep performance.
That number is Retention-Adjusted Close Rate. And almost every home improvement operation has the raw inputs to build it. The problem is not the data. It is that the data has never been in the same place at the same time.
The data exists. The connection between the data does not. That is the only thing standing between close rate and the number that finishes it.
The Inputs
Three numbers go into Retention-Adjusted Close Rate: demonstrations run, jobs closed, and jobs cancelled. Every home improvement operation tracks all three. None of them live in the same report.
The reason most operations never build this metric is not that any of these numbers is hard to find. It is that finding all three and connecting them to the same rep, from the same period, in the same view — that work has never been done.
The Calculation
The math is straightforward once the inputs are connected. Take 100 demonstrations that produced 40 closed jobs — a 40 percent close rate by the standard measure. Twelve of those jobs later cancelled, leaving 28 that the business actually kept:

28 retained jobs ÷ 100 demonstrations run = 28% retention-adjusted close rate

The 12-point gap between 40 percent and 28 percent is not an abstraction. It is 12 jobs that consumed lead cost, sales cost, manager attention, financing time, and operational resources before they disappeared. Twelve jobs the standard close rate counted as wins.
The business ran those jobs. The business paid for those jobs. The business got nothing back from those jobs. And the metric that runs the sales operation never recorded any of it.
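The arithmetic can be sketched in a few lines. The figures below are the hypothetical example from this section — 100 demonstrations, 40 closes, 12 cancellations — not data from any real operation:

```python
def retention_adjusted_close_rate(demos: int, closed: int, cancelled: int) -> float:
    """Retained jobs (closed minus cancelled) divided by demonstrations run."""
    retained = closed - cancelled
    return retained / demos

# Hypothetical month: 100 demos, 40 closes (a 40% standard close rate), 12 cancellations.
rate = retention_adjusted_close_rate(demos=100, closed=40, cancelled=12)
print(f"{rate:.0%}")  # → 28%
```

The only non-obvious choice is the denominator: demonstrations run, not jobs closed, so the metric stays comparable to the standard close rate it replaces.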
What It Looks Like by Rep
The number becomes useful when it is broken out by rep. That is where the management implications become impossible to ignore.
A rep closing at 42 percent with a 31 percent cancel rate retains approximately 29 percent. A rep closing at 28 percent with a 6 percent cancel rate retains approximately 26 percent. On close rate alone, the first rep looks substantially stronger. On Retention-Adjusted Close Rate, the gap nearly disappears — and the cost structure of the first rep's cancelled jobs makes the comparison even less favorable.
That is not a coaching insight. That is a lead allocation decision. A compensation decision. A territory decision. The rep who looks like the obvious choice for the best leads may not be the right choice once cancellations enter the picture.
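The two-rep comparison above can be reproduced directly, assuming — as the text's approximate figures imply — that cancel rate is measured as a share of each rep's closed jobs, so the retained rate is close rate × (1 − cancel rate). The rep names are placeholders:

```python
# Hypothetical reps from the comparison above; cancel_rate is the share
# of that rep's closed jobs that later cancelled.
reps = [
    {"name": "Rep A", "close_rate": 0.42, "cancel_rate": 0.31},
    {"name": "Rep B", "close_rate": 0.28, "cancel_rate": 0.06},
]

for rep in reps:
    # Retention-adjusted rate: closes that survive cancellation, per demonstration.
    rep["retained_rate"] = rep["close_rate"] * (1 - rep["cancel_rate"])

for rep in sorted(reps, key=lambda r: r["retained_rate"], reverse=True):
    print(f'{rep["name"]}: {rep["retained_rate"]:.1%}')
# → Rep A: 29.0%
# → Rep B: 26.3%
```

A 14-point gap on the leaderboard collapses to roughly 3 points once cancellations are counted — before accounting for what Rep A's cancelled jobs cost to run.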
Most operations will never know this. Because the calculation has never been run.
Where It Breaks Down in Practice
The obstacle is specific. It is almost always the same obstacle.
Cancellations are not flagged at the rep level. They are tracked as a total count — how many jobs cancelled this month — without being connected back to the rep who closed them, the lead source that produced them, or the demonstration that started the chain. The number exists. The attribution does not.
Without attribution, the calculation produces a business-level metric but not a rep-level one. And a business-level metric does not change management decisions. A rep-level metric does.
This is why the infrastructure question matters more than the math question. The formula takes thirty seconds to understand. Building the attribution layer — connecting cancellations to reps, to lead sources, to product categories, consistently, every month — is what most operations have never done and what makes the metric permanently usable rather than a one-time exercise.
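A minimal sketch of that attribution layer, in plain Python. The record shapes and field names here are hypothetical — every CRM stores this differently — but the join is the same: link each cancellation back to the demonstration that produced it, then roll up by rep:

```python
from collections import defaultdict

# Hypothetical demo records; real field names will vary by CRM.
demos = [
    {"demo_id": 1, "rep": "A", "source": "web",   "closed": True},
    {"demo_id": 2, "rep": "A", "source": "web",   "closed": True},
    {"demo_id": 3, "rep": "A", "source": "radio", "closed": True},
    {"demo_id": 4, "rep": "B", "source": "radio", "closed": True},
    {"demo_id": 5, "rep": "B", "source": "web",   "closed": False},
]
# Cancellations as most operations track them: a bare list of jobs.
# The attribution layer is the link from each one back to its demo.
cancellations = [{"demo_id": 2}]
cancelled_ids = {c["demo_id"] for c in cancellations}

stats = defaultdict(lambda: {"demos": 0, "retained": 0})
for d in demos:
    s = stats[d["rep"]]
    s["demos"] += 1
    if d["closed"] and d["demo_id"] not in cancelled_ids:
        s["retained"] += 1

for rep, s in sorted(stats.items()):
    print(rep, f'{s["retained"] / s["demos"]:.0%}')
```

Swap `"rep"` for `"source"` in the grouping key and the same loop produces the lead-source view; the hard part is keeping `demo_id` on the cancellation record every month, not the code.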
The Moment of Recognition
There is a specific moment that happens when an operation sees Retention-Adjusted Close Rate by rep for the first time.
The ranked list changes. The rep who has been at the top of the board for two years is no longer at the top. The rep who has been quietly producing middle-of-the-road close rate numbers moves up. The conversation in the room changes. Someone says something like: "We have been giving the best leads to the wrong person."
That moment is not comfortable. It is also not a failure. It is the first time the operation has seen an accurate picture of what its sales team is actually producing.
Most systems will show the close rate. Very few will show what happened after.
What Comes Next
Retention-Adjusted Close Rate is not the end of the analysis. It is the beginning of it.
Once it is running by rep, run it by lead source. Some sources that look efficient on cost-per-lead produce high cancel rates that destroy their economics downstream. Some sources that look expensive on cost-per-lead produce almost no cancellations — making their true cost-per-retained-revenue significantly lower than it appears.
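The downstream economics can be made concrete with invented figures. These dollar amounts and source names are purely illustrative, but the shape of the result is the point: the source that looks cheap per lead can be the expensive one per retained job:

```python
# Hypothetical lead sources: what each lead costs, how many were bought,
# and how many jobs from that source survived cancellation.
sources = {
    "aggregator": {"cost_per_lead": 40.0,  "leads": 100, "retained_jobs": 5},
    "referral":   {"cost_per_lead": 120.0, "leads": 100, "retained_jobs": 20},
}

for name, s in sources.items():
    spend = s["cost_per_lead"] * s["leads"]
    # Cost per *retained* job is what cost-per-lead hides.
    print(f'{name}: ${spend / s["retained_jobs"]:,.0f} per retained job')
# → aggregator: $800 per retained job
# → referral: $600 per retained job
```

At three times the cost per lead, the hypothetical referral source still delivers retained jobs 25 percent cheaper than the aggregator.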
Then run it by product category. Walk-in tub cancellations do not look like one-day transformation cancellations. Full remodel cancellations do not look like tub-to-shower conversion cancellations. The cancel rate pattern by product tells a different story than the blended number — and the blended number is what almost every operation is using to make product mix decisions.
Each breakdown reveals a dimension of performance that the standard metrics cannot show. That is not because the standard metrics are broken. It is because they were never designed to follow the job long enough to tell the whole story.
The calculation is not the hard part. The hard part is doing it consistently — by rep, by source, by product, every month — until it changes how the business makes decisions.
That consistency is what turns a metric into a management tool. And a management tool is what changes outcomes.
The data is already there. It has been there the whole time.
The only question is whether it ever gets connected.
Revenue Intelligence · Verisyn HQ
See your Retention-Adjusted Close Rate by rep, by source, and by product — without building the infrastructure yourself.