For most of my 25 years in PR, many brands and folks in the biz have assumed visibility equals influence. If a brand appeared frequently enough, in the right outlets, influence would follow.
That logic was always spurious, but it breaks down badly in the AI-search era.
As search, research, and comparison increasingly happen through AI systems and summary-driven interfaces, who structures and influences those outputs matters more than ever. This is most obviously true in new categories, but it holds for established ones too.
There is an important distinction between being included in an answer and being foundational to the answer’s construction.
We have been thinking a lot about this lately after watching a client, Fraimic, who operate in a brand-new category, get a ton of love at CES last week. Their category is so new that the language used to explain it is still being built, especially by media and in LLMs. All of which has major consequences for Generative Engine Optimization work.
We know this foundational level of influence is valuable because we have seen other clients benefit from having driven the structure of category dialogue. Keyboard maestros Keychron, who helped bring mechanical keyboards to the masses, are the most obvious example from our client roster.
I’d heard a few people talking about a concept called Share of Explanation as a way of capturing this kind of impact and influence. Let me try to unpack our version of the idea.
What does Share of Explanation measure?
Share of Explanation is a fundamentally qualitative metric. It measures how often a brand’s framing, logic, and problem definition are used when a category is explained, regardless of whether the brand itself is mentioned (that latter part is important).
A brand has high Share of Explanation when:
- Journalists explain the category using its framing
- Analysts evaluate the market using its criteria
- Competitors are positioned relative to its definitions
- (Perhaps most importantly) AI systems summarize the category using its conceptual structure
Think of this as influence at the level of meaning, not exposure. At its core it means you define the playing field, and not simply that you play on the field or (ideally) win the game.
The academic foundation: why explanation matters
This line of thinking is grounded in established media and cognition research. Let’s nerd out for a moment and bridge a few areas of research.
Framing theory
Robert Entman’s work defines framing as the process by which communicators select aspects of reality and make them more salient to promote a particular interpretation of a problem.
Framing shapes understanding before persuasion occurs. Share of Explanation is a way to observe framing power at the brand level.
Agenda-setting and second-level agenda-setting
Work done in the 1970s by researchers Maxwell McCombs and Donald Shaw on agenda-setting theory demonstrated that media influences what people think about. Later work on second-level agenda-setting showed that media also shapes how issues are characterized.
Brands with high Share of Explanation influence second-level agenda-setting. They define the attributes and logic through which the category is understood.
How AI systems amplify dominant explanations
We know that AI platforms are pattern-matching monkeys working at crazy scale. Indeed, research from Stanford’s Center for Research on Foundation Models shows that large language models rely heavily on frequency, cross-source agreement, and repeated explanatory patterns when generating confident outputs.
LLMs do not privilege novelty (to their great detriment, in some ways). They privilege stability. Explanations that appear repeatedly across authoritative sources are more likely to be reused, summarized, and treated as canonical.
This trifecta of research findings is why Share of Explanation compounds.
Share of Explanation vs Share of Answer (important distinction)
You’ve probably heard about Share of Answer. Do not confuse it with Share of Explanation. These two concepts are related but not interchangeable.
Share of Answer (SoA)
- Measures whether a brand appears in AI-generated responses
- Output-focused
- Binary or percentage-based
- Sensitive to prompt phrasing and recency
So in other words, SoA answers the question, “Was the brand included in an AI answer?”
Share of Explanation (SoE)
- Measures whether a brand’s framing shapes the response
- Structural and upstream
- Independent of brand mention
- Persists even when competitors are named
SoE addresses, “Whose logic is the AI answer built on?”
A brand could, conceivably, have low Share of Answer and still have high Share of Explanation if its worldview defines how the category is described, though it is harder to imagine. Certainly the inverse is true – many brands appear in answers without really shaping them.
How to assess Share of Explanation step by step
A bit of humility is in order here. The team and I are still working this out and playing around with it. We are doing this manually, which is far from rigorous.
Nonetheless, here are some steps to take if you want to assess SoE:
Step 1: Define the brand’s explanatory logic
Document, in plain language:
- Your definition of the category
- The primary problem hierarchy
- The evaluation criteria the brand emphasizes
We’re trying to get at reasoning here, not just marketing messaging, so avoid bullshit.
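One lightweight way to keep Step 1 honest is to write the logic down as structured data you can test coverage against later. A rough sketch below; the field names and keyboard-flavored example values are our own invention, not a standard schema:

```python
# A minimal, hypothetical structure for documenting a brand's explanatory
# logic. Field names and example values are illustrative only.
brand_logic = {
    "category_definition": "keyboards as personal, customizable tools",
    "problem_hierarchy": ["typing comfort", "durability", "personal expression"],
    "evaluation_criteria": ["switch type", "hot-swappability", "build quality"],
    "signature_phrases": ["hot-swappable", "endgame board"],
}

# Everything here should read as plain-language reasoning, not ad copy.
for field, value in brand_logic.items():
    print(f"{field}: {value}")
```

Writing it as data rather than a slide forces the team to commit to specific phrases and criteria, which is exactly what the later steps look for in the wild.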
Step 2: Analyze category coverage without the brand
Review recent category press coverage, trend pieces, and comparisons that do not mention your brand.
Look for:
- Problem framing
- What is treated as important
- What trade-offs are emphasized
Even when you are not present, is the dialogue framed the way you would frame it?
Step 3: Track framing reuse, not attribution
This is about how much your lexicon spreads.
You want to identify:
- Repeated phrases or metaphors introduced by the brand
- Familiar problem definitions
- Evaluation frameworks that mirror the brand’s worldview
Explanation adoption is durable and sets the rules of the game.
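To make Step 3 less hand-wavy, here is a toy Python helper (our own sketch, not a tool we ship) that measures how often brand-introduced phrases appear in coverage that may never name the brand:

```python
import re

def framing_reuse(texts, phrases):
    """For each phrase, return the fraction of documents that reuse it.

    texts: iterable of article/coverage strings (the brand may be absent).
    phrases: signature phrases, metaphors, or criteria the brand introduced.
    """
    texts = [t.lower() for t in texts]
    counts = {}
    for phrase in phrases:
        pattern = re.compile(r"\b" + re.escape(phrase.lower()) + r"\b")
        hits = sum(1 for t in texts if pattern.search(t))
        counts[phrase] = hits / len(texts) if texts else 0.0
    return counts

coverage = [
    "Reviewers judge boards on switch feel and hot-swappable sockets.",
    "The market now treats hot-swappable designs as table stakes.",
    "A roundup of budget boards, focused mainly on price.",
]
print(framing_reuse(coverage, ["hot-swappable", "switch feel"]))
```

Literal phrase matching is crude; in practice we also read for paraphrased problem definitions, which no regex will catch. But even this blunt instrument shows whether your lexicon is spreading.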
Step 4: Compare AI-generated category summaries
Prompt AI systems with neutral category questions over time, using key prompts selected by AI search volume (most good LLM monitoring tools should have this feature).
Assess:
- Consistency of framing
- Problem prioritization
- Alignment with the brand’s explanatory structure
As I said earlier, LLMs reflect dominant explanations, not marketing intent.
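One crude way to quantify the “consistency of framing” check, assuming you have saved the raw summary texts from repeated prompting (the helper names are ours), is to measure word overlap between summaries collected over time:

```python
def jaccard(a, b):
    """Word-set overlap between two texts (0 = disjoint, 1 = identical sets)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def framing_consistency(summaries):
    """Average overlap between consecutive AI summaries of the category.

    A rough proxy for framing stability: repeated, settled explanations
    score near 1.0, while churning framings score lower.
    """
    scores = [jaccard(a, b) for a, b in zip(summaries, summaries[1:])]
    return sum(scores) / len(scores) if scores else 1.0

monthly_summaries = [
    "mechanical keyboards valued for switch feel and hot-swappable design",
    "mechanical keyboards judged on switch feel and hot-swappable sockets",
]
print(framing_consistency(monthly_summaries))
```

High consistency alone does not prove the framing is yours; pair it with the Step 3 phrase-reuse check to see whose logic the stable summary is built on.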
Step 5: Score explanation dominance qualitatively
- Low SoE: category explanations are fragmented or competitor-led
- Moderate SoE: brand logic appears inconsistently
- High SoE: brand framing feels like the default explanation
In doing this, you are getting at narrative gravity, not just presence and visibility.
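If you want to make the bands above repeatable across clients, a toy mapping like this can help; the thresholds are illustrative placeholders we would tune per category, not validated cut-offs:

```python
def soe_band(reuse_rate):
    """Map an average framing-reuse rate (0.0-1.0) to a qualitative SoE band.

    Thresholds are illustrative only; real scoring stays a human judgment
    call informed by reading the coverage, not a formula.
    """
    if reuse_rate >= 0.6:
        return "High"    # brand framing feels like the default explanation
    if reuse_rate >= 0.3:
        return "Moderate"  # brand logic appears inconsistently
    return "Low"         # explanations fragmented or competitor-led

print(soe_band(0.7))
```

The point of scoring at all is to force the same conversation at each review: is our framing spreading, stalling, or being displaced?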
Share of Explanation’s durability
If our work on ourselves and our clients is to be believed, mentions on LLMs can vary based on a myriad of factors. Explanations, however, are more durable. Once a brand’s framing becomes the default way a category is understood, every new article, summary, or AI-generated response reinforces it, even when competitors are featured.
That is how brands become inevitable before they become dominant. It is a wonderful heuristic for challenger brands looking to redefine categories, and incumbents seeking to protect their castles from invaders and upstarts.