Why we need to stop measuring movement.
For twenty years we’ve been optimizing for clicks. We know the game: attract users, build funnels, fill pages, hold attention. Clicks were a proxy – a signal of movement, often mistaken for value.
That movement is collapsing in volume. The ability to create real impact remains. Organic Search isn’t dying.
The journey is shifting – the economics of search are shifting.
Users no longer pay with their time.
They pay with context.
They save effort – and outsource it to the model.
We’re entering the era of cognitive offloading – and with it, a search economy in which the answer matters, not the path.
TL;DR – Key takeaways
- The Efficiency Economy: From “Easy Query, Hard Work” to “Hard Query, Easy Answer.” Fewer clicks, more synthesis inside the model.
- Dumb Traffic disappears: Cleansing, not loss. Only questions with real purchase intent remain visible.
- Librarian vs. Lecturer: Trust is document-based; confidence is pattern-based. Primary meets secondary source.
- Anchor Broad, Win Narrow: Broad semantic anchoring – and sharp distinction through clear attributes.
- Parsability over Skimmability: Make facts extractable, not just scannable.
The Efficiency Economy
The drop in organic traffic has triggered the same reflex for months: more content, more comparisons, more skyscraper variants. Alternatively: blame Google – cue “click theft.”
But the real cause runs deeper.
Traffic isn’t disappearing.
The click paths to the touchpoint are shrinking.
Search used to follow Easy Query, Hard Work:
“CRM software,” ten tabs, reading, scanning, comparing. The user did the work.
Today it’s Hard Query, Easy Answer:
“CRM for a trades business with ten employees, DATEV integration, price-sensitive.” – the answer appears directly.
Cognitive effort shifts into the question. The AI does the reading, compressing, sorting.
The conclusion is sober:
A user only visits your site when they want to act – or when they need a primary source. Everything else is offloaded to the machine.
Librarian & Lecturer: The New Mental Model
AI Search runs on a duo working silently in the background:
1. The Librarian (Search) – provides primary sources
He knows nothing, but he knows where everything is.
He manages documents, prioritizes credibility, checks backlinks, context, consistency.
His currency: trust from document-level authority.
He points to primary sources – neutral, verifiable, accountable.
2. The Lecturer (LLM) – produces the secondary source
He’s read every book.
He recognizes patterns, connects concepts, speaks fluently.
His currency: confidence, fed by semantic clarity.
He’s eloquent – but prone to hallucination. And, true to the Dunning–Kruger effect, he delivers it with full conviction and zero accountability.
The two interact in a RAG (retrieval-augmented generation) loop:
Uncertainty in the lecturer → query to the librarian → clarity through primary documents.
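A minimal toy sketch of that loop in Python (the in-memory index, the keyword matching, and the function names are illustrative assumptions, not how any particular engine implements it):

```python
# Toy illustration of the librarian/lecturer loop described above.
# The in-memory "index" and the keyword matching are stand-ins; real systems
# use a search backend, embeddings, and a full LLM.

INDEX = {
    "crm trades offline": "Primary source: [Name] CRM, offline mode, built for trades businesses.",
    "crm datev integration": "Primary source: [Name] CRM integrates with DATEV.",
}

def librarian(query: str) -> list[str]:
    """Retrieve documents whose keywords overlap the query (trust: document level)."""
    terms = set(query.lower().split())
    return [doc for keys, doc in INDEX.items() if terms & set(keys.split())]

def lecturer(query: str, context: list[str]) -> str:
    """Synthesize an answer; without grounding context it can only sound plausible."""
    if not context:
        return f"Fluent but unverified guess about: {query}"
    return "Grounded answer, citing: " + " | ".join(context)

def rag_answer(query: str) -> str:
    documents = librarian(query)        # uncertainty in the lecturer: ask the librarian
    return lecturer(query, documents)   # clarity through primary documents

print(rag_answer("Which CRM works offline for a trades business?"))
```

The point of the sketch is the division of labor: the lecturer only becomes reliable once the librarian hands it something it can cite.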
This yields a simple but unforgiving equation:
Visibility = Trust × Confidence
High confidence, low trust:
The lecturer would talk about you – but the librarian won’t put you on the shelf. The model hallucinates or stays silent.
High trust, low confidence:
You’re documented, but semantically irrelevant. The lecturer can’t place you.
Only those who deliver both exist in AI Search.
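Read the multiplication literally: if either factor is near zero, visibility collapses. A quick numerical illustration (the 0 to 1 scores are invented; only the product matters):

```python
# Invented 0-1 scores, purely to show why the relationship is a product, not a sum.
scenarios = {
    "high confidence, low trust": {"trust": 0.1, "confidence": 0.9},
    "high trust, low confidence": {"trust": 0.9, "confidence": 0.1},
    "both":                       {"trust": 0.8, "confidence": 0.8},
}

for name, s in scenarios.items():
    visibility = s["trust"] * s["confidence"]   # Visibility = Trust x Confidence
    print(f"{name}: {visibility:.2f}")          # 0.09, 0.09, 0.64
```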
Strategy: Attribute Mapping over Keyword Volume
The old question “How do we gain visibility through generic keywords?” is obsolete.
Generic knowledge is answered by the lecturer instantly.
What he needs are clear, specific, differentiating attributes – information gain.
Anchor Broad (Confidence)
We need to teach the model: we are a CRM.
Not through textbook definitions, but through semantic proximity: co-occurrences, digital PR, clear on-page signals.
Goal: When the model hallucinates, it should hallucinate us.
Win Narrow (Trust & Information Gain)
Competition no longer happens through keywords, but through distinction.
Not “best CRM,” but:
“Which CRM fits trades businesses with >10 employees and poor internet connectivity?”
We lose theoretical search volume – and gain relevance in hyper-personalized answers.
These granular attributes are exactly what the lecturer uses to choose us as a building block in his synthesis.
Execution: Parsability over Skimmability
For years we made content scannable for humans – rightly so.
Now we need to make it extractable for machines.
This doesn’t start with the knowledge graph.
It starts with syntactic clarity: facts, triples, structure.
❌ Bad (Marketing Prose)
“Our CRM is great for tradespeople, even when the internet drops out.”
Too much noise. Too few information units.
✅ Good (Parsable Triple)
“Software: [Name] | Target group: Trades businesses | Feature: Offline mode | Benefit: Suitable for job sites.”
The librarian can index it.
The lecturer can understand it.
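One way to serve those triples to both of them is structured data on the page. A sketch in Python that emits schema.org JSON-LD for the example above (the choice of type and properties is an assumption for illustration; validate against schema.org and your own entity model before shipping):

```python
# Sketch: the parsable triple above, emitted as schema.org JSON-LD.
# Type and property choices (SoftwareApplication, featureList, audience) are
# assumptions for illustration, not a vetted markup recommendation.
import json

structured_facts = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "[Name]",
    "applicationCategory": "CRM",
    "featureList": "Offline mode",
    "audience": {"@type": "Audience", "audienceType": "Trades businesses"},
    "description": "Suitable for job sites with unreliable internet connectivity.",
}

print(f'<script type="application/ld+json">{json.dumps(structured_facts, indent=2)}</script>')
```

The same facts stay visible in the prose; the markup just removes the guesswork for the parser.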
A precise TL;DR at the top of a piece serves two purposes:
It orients humans – and gives the machine a perfect chunk of context.
The End of the Attribution Illusion
As more informational work shifts into the model and the movement sensors go blind, the question remains:
What are we actually measuring?
The click was never the goal.
It was only a proxy for user movement.
If that movement no longer happens because the answer appears directly, we must retire clicks as a key metric. And measure impact instead.
New Metrics for a Recommendation-Based Economy
- Content Parsability – “Is our content extractable?” (see the sketch after this list)
- Entity Salience – “Does the AI know who we are?”
- AI Citations – “Do models use us as a source?”
- Share of Recommendations – “Are we mentioned in the right contexts?”
- Brand Search – “Is the recommendation working?”
- AI-Assisted Conversions – “Is the traffic converting?”
These are KPIs of a search world in which traffic is not the goal but the side effect.
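None of these has a standardized measurement yet. As an illustration of the first one, Content Parsability, here is a crude heuristic that counts how many non-empty lines state an explicit key-value fact versus free prose (the regex and the interpretation of the score are assumptions, not an industry metric):

```python
# Crude heuristic sketch for "Content Parsability": share of non-empty lines that
# state an explicit, extractable fact ("Key: Value") versus free prose.
# The pattern and the meaning of the score are assumptions, not a standard metric.
import re

FACT_PATTERN = re.compile(r"^\s*[\w\s]{2,40}:\s+\S+")  # e.g. "Feature: Offline mode"

def parsability_score(text: str) -> float:
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    fact_lines = sum(1 for line in lines if FACT_PATTERN.match(line))
    return fact_lines / len(lines)

prose = "Our CRM is great for tradespeople, even when the internet drops out."
triples = "Software: [Name]\nTarget group: Trades businesses\nFeature: Offline mode"

print(parsability_score(prose))    # 0.0 - nothing extractable
print(parsability_score(triples))  # 1.0 - every line is a fact
```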
Conclusion: Optimize for Clarity
When volume drops, it’s not a warning sign – it’s a signal:
The system is becoming more efficient.
The new currency is recommendability.
Your website takes on a new role:
It’s no longer the marketplace for traffic – it’s the insurance against hallucination.
The place where the librarian finds facts and the lecturer recognizes reliable patterns.
The task is simple:
Become the single source of truth.
With no traffic panic.
With impact.