
AI Visibility is like cutting an onion.

    Done wrong, it makes you cry.

    AI Visibility is one of those topics that feels familiar at first glance – and completely slips away on closer inspection. Anyone who tries to approach it with a classic SEO mindset – rankings and search volume – will quickly get frustrated. The numbers don’t line up. Volatility feels arbitrary. And in the end, you’re left staring at a KPI that explains nothing.

    This is not an execution problem.
    It’s a thinking problem.

    AI Visibility does not follow the logic of classical search. It is not a linear metric, but a qualitative phenomenon. Ignore that, and you’re cutting onions without proper technique – and wondering why your eyes are burning.


    TL;DR – The key points

    • Quality over quantity
      AI Visibility is not a traffic metric. It measures perception. Applying linear SEO KPIs to it will inevitably fail (the “crying factor”).
    • The AI KPI funnel
      Visibility emerges in layers: Entity Salience (does the model know the facts?), Mention (are we being mentioned?), and Recommendation (are we being recommended – and why?).
    • Market research, not SEO
      LLMs are treated like test subjects. Methods from advertising effectiveness research (aided / unaided questioning) replace pure keyword tracking.
    • The trust shift
      Trust is no longer created by position (rank #1), but by narrative (reasoning). Brand search often serves to validate these narratives.
    • Embracing limitations
      Personalization and fragmentation make complete tracking impossible. Results are qualitative samples, not a full census.
    • An addition, not a replacement
      This model does not replace classical performance tracking. It complements it with the qualitative dimension of AI perception.

    The falling tree in the dark forest

    In classical search, we at least had an idea of how many people were standing in the forest. Search volume was our anchor. The path of demand – the keyword – was visible.

    In AI Search (LLMs, Google AI Overviews, chatbots), that forest is dark and fragmented. There are no longer collective keyword streams. Every query is highly personalized. Every answer depends on context, conversation history, and framing.

    And this is where the old philosophical question becomes relevant:
    If a tree falls in a forest – and no one is there to hear it – has it really fallen?

    In AI Search, the answer is: no.

    No prompt, no event.
    No inference, no visibility.
    No interaction, no perception.

    Search volume as a stable reference simply no longer exists.

    Visibility does not exist “by default” on a list. It exists only in the moment of interaction. Anyone trying to force quantitative rankings here is measuring ghosts.

    That is the first reason for the tears:
    the loss of quantitative control.


    The onion method: the AI KPI funnel

    If visibility only exists situationally, we have to stop treating it like a traffic flow. AI Visibility is not a road. It is perception.

    And perception has layers.

    Layer 1: Entity Salience – the foundation

    Before we talk about rankings or mentions, we have to ask a more basic question:
    Does the system even know us?

    Entity Salience describes whether – and how correctly – a brand exists in the AI’s internal world model. Does the model associate our brand with the right products, capabilities, and positioning?

    The obvious failure case is hallucination.
    The more subtle – and still dangerous – one is incompleteness.

    If a model only knows part of our portfolio or reduces us to an outdated role, the picture becomes distorted. Entity Salience is not binary. It is gradual. And so far, not cleanly measurable.

    That leaves only one option: qualitative judgment.
    Uncomfortable, but necessary.
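    Grading this judgment can still be structured. The sketch below compares a model's free-form answer against a known fact sheet, per aspect, so that salience is reported as gradual rather than binary. Everything here is hypothetical: `ask_llm` is a stand-in for any chat-completion call (stubbed so the example runs offline), and substring matching is only a crude first pass before human review.

```python
# Sketch: grading Entity Salience against a known brand fact sheet.
# All names and facts are invented for illustration.

FACT_SHEET = {
    "products": "offers an analytics suite and a consulting arm",
    "audience": "targets mid-market B2B companies",
    "positioning": "positioned as a premium specialist",
}

def ask_llm(prompt: str) -> str:
    # Hypothetical stub; replace with a real chat-completion API call.
    return ("Brand X offers an analytics suite "
            "and targets mid-market B2B companies.")

def salience_report(brand: str) -> dict:
    """Per-aspect verdicts ('present' / 'missing') for one model answer.
    Crude substring matching only; real grading needs human review."""
    answer = ask_llm(f"What do you know about {brand}?").lower()
    report = {}
    for aspect, fact in FACT_SHEET.items():
        # First clause of the fact serves as the key phrase to look for.
        key_phrase = fact.split(" and ")[0].lower()
        report[aspect] = "present" if key_phrase in answer else "missing"
    return report

print(salience_report("Brand X"))
# → {'products': 'present', 'audience': 'present', 'positioning': 'missing'}
```

    The "missing" verdict on positioning is exactly the subtle failure case described above: no hallucination, just an incomplete picture.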


    Layer 2: Mention – presence

    Are we mentioned at all in relevant scenarios?
    Do we show up in the consideration set?

    At this stage, mere presence is all that counts – comparable to classic share of voice. Necessary, but not sufficient.



    Layer 3: Recommendation – influence

    This is the real currency of AI Search.

    The difference between
    “Brand X also exists”
    and
    “I recommend Brand X because …”

    Recommendation is a qualitative leap – and always competitive. If an AI lists ten solutions, a single mention is barely valuable. What matters is:

    • Are we recommended as the solution or as one of many?
    • Is the recommendation justified?
    • Is there a citation (link)?

    Only then does a recommendation turn into potentially measurable impact.
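    The three criteria above can be operationalized as a rough classifier over a single AI answer. This is a minimal sketch, not a finished taxonomy: the "recommend / best / top choice" phrasing and the link check are illustrative heuristics, and real answers need more robust parsing.

```python
import re

def funnel_layer(answer: str, brand: str) -> str:
    """Classify one AI answer for one brand into the funnel layers.
    Heuristic patterns only, for illustration."""
    if brand.lower() not in answer.lower():
        return "absent"
    # Is the brand framed as a justified recommendation, not just listed?
    recommended = re.search(
        rf"(recommend|best|top choice)[^.]*{re.escape(brand)}",
        answer, re.IGNORECASE)
    # Is there a citation (link) backing the statement?
    cited = re.search(r"https?://\S+", answer) is not None
    if recommended and cited:
        return "recommendation+citation"
    if recommended:
        return "recommendation"
    return "mention"

print(funnel_layer("I recommend Brand X because it scales well "
                   "(see https://example.com).", "Brand X"))
# → recommendation+citation
print(funnel_layer("Brand X also exists.", "Brand X"))
# → mention
```

    The point of the sketch is the ordering, not the regexes: "mention" and "recommendation" are different layers, and only the latter – ideally with a citation – approaches measurable impact.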


    The quality dimension: Stability & Context

    The three layers do not stand on their own. Their relevance only becomes clear through stability and context. These dimensions are not another funnel stage, but the litmus test for all layers.

    Stability – the market research approach

    We treat the LLM like a test subject.

    Unaided questioning:
    “Name the best tools for X.” – Do we appear spontaneously?

    Aided questioning:
    “What do you think of Brand Y compared to Z?” – How stable is the evaluation once we’re mentioned?

    A single hit is not a signal. It’s noise. Only recurring patterns across scenarios matter.
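    Turning "recurring patterns, not single hits" into practice means sampling: run the same unaided prompt many times and report a mention rate, not a yes/no. The sketch below assumes a hypothetical sampler; in practice each call would hit an LLM at temperature > 0, here responses are drawn from a fixed pool so the example runs offline and reproducibly.

```python
import random

# Invented response pool standing in for repeated LLM answers
# to the same unaided prompt ("Name the best tools for X.").
RESPONSE_POOL = [
    "Top tools: Alpha, Brand X, Gamma.",
    "Consider Alpha and Gamma.",
    "Brand X and Alpha are solid picks.",
]

def unaided_probe(brand: str, runs: int = 30, seed: int = 0) -> float:
    """Mention rate of `brand` across repeated runs of one unaided prompt.
    A single hit is noise; only the rate over many runs is a signal."""
    rng = random.Random(seed)  # seeded for reproducible sampling
    hits = sum(brand.lower() in rng.choice(RESPONSE_POOL).lower()
               for _ in range(runs))
    return hits / runs

print(f"mention rate: {unaided_probe('Brand X'):.0%}")
```

    The same loop works for aided questioning – swap the prompt for "What do you think of Brand Y compared to Z?" and score the stability of the evaluation instead of bare mentions.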


    Context – frame analysis

    In which semantic frame do we appear?

    Are we the affordable option for beginners?
    The premium solution for professionals?
    The specialist or the generalist?

    This analysis is purely qualitative. But it determines whether a mention is an asset or a liability.


    From positional to narrative trust

    In classical search, trust was conveyed positionally. Rank #1 implied authority.

    In AI Search, trust shifts into the narrative. The machine explains its recommendation. Trust is spelled out.

    That feels more reasoned – but often less transparent to users. Hallucinations included. This is why users frequently validate via brand search, cross-checking the machine’s judgment against human experience.

    Rising brand search volumes – especially combined with terms like “reviews,” “pricing,” or “experience” – can therefore indicate successful AI Visibility. Causality is never provable in isolation. But the pattern is observable.
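    One way to watch for this pattern is to track what share of brand queries carry a validation term. A minimal sketch, assuming query strings exported from Search Console or similar reporting; the sample data is invented.

```python
# Validation-style terms that suggest users are cross-checking an AI answer.
VALIDATION_TERMS = ("reviews", "pricing", "experience")

def validation_share(queries: list[str], brand: str) -> float:
    """Share of brand queries that pair the brand with a validation term."""
    brand_queries = [q for q in queries if brand.lower() in q.lower()]
    if not brand_queries:
        return 0.0
    validating = [q for q in brand_queries
                  if any(t in q.lower() for t in VALIDATION_TERMS)]
    return len(validating) / len(brand_queries)

sample = ["brand x reviews", "brand x pricing", "brand x login", "other tool"]
print(round(validation_share(sample, "brand x"), 2))  # → 0.67
```

    A rising share over time is exactly the observable-but-not-provable signal described above: correlation worth watching, not causality.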


    The Propanethial S-oxide of measurability

    When cutting onions, we cry because of a gas we can’t see.

    In AI Visibility, these are the blind spots:

    • missing clickstream data
    • invisible zero-click sessions
    • hardly any consistent referrers from chat interfaces

    This invisibility – often called “dark AI traffic” – creates uncertainty. Many teams respond with a reflex for control. More tracking. More models. More hope that the gaps can somehow be closed.

    But this problem is inherent.

    The strategic response is not to close every gap – that is often technically impossible – but to build a robust indicator system that accepts uncertainty instead of fighting it.


    The limits of the model

    We also need to be honest about one thing:
    This model does not solve the problem of direct business impact measurement.

    There is no formula that translates a mention into revenue.
    And that’s okay.

    AI Visibility measures perception.
    Revenue measures transaction.

    The layers of the onion – Entity Salience, Mention, Recommendation – are leading indicators. They are qualitative. They show how the machine thinks and speaks about a brand.

    Traffic, leads, and revenue are lagging indicators. They are quantitative. They show what actually materializes in the business.

    AI Visibility metrics are therefore not a classical steering instrument. They are early warning systems. They surface shifts in the engine room long before they show up in traffic or sales.

    Anyone trying to translate these signals directly into revenue numbers will fail. Causality is never provable in isolation. Correlation, however, is systematically observable – and usable.

    Not as a replacement for performance measurement, but as an additional input for attribution models that think beyond last click.


    Conclusion: Measure more consciously. Understand better.

    AI Visibility is limited as a quantitative control metric.
    Its value lies in qualitative insight.

    Anyone chasing perfection here will end up crying.
    Those who use it as a sentiment seismograph – an early warning system for shifts in perception – gain orientation.

    Visibility is no longer a place on a list.
    It is a place in the machine’s narrative.

    The decisive question is no longer:
    “What rank are we on?”

    But:
    “How does the machine talk about us – and why?”

