
You don’t use a hammer for brain surgery. Nor a bazooka to kill a mouse.
But if you work in PR long enough, someone will try to measure your performance by article count (i.e. “We want X articles per month”).
It’s obtuse, reductive, and it prioritizes output over impact.
Look, I understand why people land on this kind of bad KPI. They want simplicity, and they typically have not bothered to include their agency in the KPI design phase. Agencies also bear a big share of the blame here for not establishing intelligent KPIs from the get-go.
Output vs impact in the AI era
The obvious folly in “We want X articles per month” is that media placements are not created equal. I am not referring to differences in audience size, sentiment and tone of the piece, or unique visitors per month (UVM). Rather, that pieces of earned media have varying impacts on bigger outcomes and should be judged on those.
This has always been true, but as we enter an era where AI engines are the window to the internet and determine how and when a brand gets surfaced in AI answers, media placement impact has new meaning and necessitates new measures.
Here are a few modern KPIs we are thinking about:
Scrape impact, aka citation volume
You get written up by Forbes. Great stuff! But how much is the piece actually cited or scraped by AI? That’s what this KPI measures.
It is an important reflection of the value of any placement. And it goes beyond “We got in Forbes” to “We got in the right place in Forbes,” because in the new era not all articles at the same outlet are created equal.
How do you measure this?
Most GEO tools on the market have this data. The one we use is Scrunch.
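If you want to sanity-check the numbers yourself, the arithmetic is trivial once you can export AI answers alongside the sources they cite. Here is a minimal sketch in Python; the export structure and field names are hypothetical, not any vendor’s actual API:

```python
# Minimal sketch: count how often AI answers cite a given placement.
# Assumes a hypothetical export of AI answers with their cited sources;
# field names ("prompt", "cited_urls") are illustrative, not a real API.

def citation_volume(answers: list[dict], placement_url: str) -> int:
    """Number of sampled AI answers that cite the placement URL."""
    return sum(1 for a in answers if placement_url in a.get("cited_urls", []))

# Hypothetical sample: one record per AI answer to a tracked prompt.
answers = [
    {"prompt": "best gaming keyboards",
     "cited_urls": ["https://www.forbes.com/example-piece"]},
    {"prompt": "quietest mechanical keyboard",
     "cited_urls": ["https://www.rtings.com/example-review"]},
]

print(citation_volume(answers, "https://www.forbes.com/example-piece"))  # -> 1
```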
Persistence in AI surfacing
Muck Rack’s recent report on what AI is reading sent a clear message: some queries pull from recent news articles and some from evergreen ones. This metric assesses how long a placement keeps showing up as a source for answers on AI engines.
We have seen several Rtings pieces cited in unbranded, client-related AI answers for three months straight. Other articles in what you would think are big Tier 1 outlets have been cited for a week and then disappeared. This is an important evaluator of value and should be a KPI, because a distinction has to be made between coverage with longer ROI windows and disposable stuff.
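If your tool hands you dated citation logs rather than a persistence figure, the calculation is straightforward. A minimal sketch, with hypothetical dates:

```python
# Minimal sketch: how long a placement keeps surfacing as an AI source.
# Assumes a hypothetical log of dates on which the placement was cited.

from datetime import date

def persistence_days(citation_dates: list[date]) -> int:
    """Days spanned between first and last observed citation."""
    if not citation_dates:
        return 0
    return (max(citation_dates) - min(citation_dates)).days + 1

evergreen_piece = [date(2025, 3, 1), date(2025, 4, 15), date(2025, 5, 30)]
news_piece = [date(2025, 3, 1), date(2025, 3, 6)]

print(persistence_days(evergreen_piece))  # ~90 days of surfacing
print(persistence_days(news_piece))       # cited for a week, then gone
```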
Author authority
This KPI evaluates media placements through the lens of how heavily AI engines weigh specific authors.
Because AI, like humans, understands that some authors are subject-matter experts and some are dilettantes, and it tends to source the experts’ content more in answers.
To use a real-world example, AI understands that Jacob Ridley or Michael Berk know more about keyboards than some rando writing their first blog post on the topic in mommy’s basement.
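There is no standard score for this yet, but a rough, illustrative proxy is to rank bylines by how often their articles show up in your citation exports. A minimal sketch, assuming hypothetical data with an "author" field; this is not any GEO tool’s actual authority score:

```python
# Minimal sketch: rank authors by how often AI answers cite their work.
# Assumes a hypothetical citation export tagged with bylines; this is an
# illustrative proxy, not any GEO tool's actual authority metric.

from collections import Counter

def author_authority(citations: list[dict]) -> Counter:
    """Citation counts per byline across observed AI answers."""
    return Counter(c["author"] for c in citations if c.get("author"))

citations = [
    {"author": "Jacob Ridley", "url": "https://example.com/keyboard-review"},
    {"author": "Jacob Ridley", "url": "https://example.com/switch-guide"},
    {"author": "First-time blogger", "url": "https://example.com/first-post"},
]

print(author_authority(citations).most_common())
# [('Jacob Ridley', 2), ('First-time blogger', 1)]
```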
Honorable mention – share of answer
Share of answer assesses how often you show up in answers across a range of industry-relevant prompts versus competitors.
It is an important, if bigger-picture, metric for modern PR. The issue is that factors beyond earned media affect share of answer. So, while important, it cannot be used in isolation to evaluate a PR-for-GEO program.
Any of the major GEO tools around today will assess this for you. Look at how it changes over time.
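Under the hood, share of answer is simple arithmetic: the fraction of tracked prompts whose answer mentions you. A minimal sketch, with a hypothetical prompt panel and brand names:

```python
# Minimal sketch: share of answer across a fixed panel of tracked prompts.
# Prompt panel, field names, and brands are all hypothetical.

def share_of_answer(answers: list[dict], brand: str) -> float:
    """Fraction of tracked prompts whose AI answer mentions the brand."""
    if not answers:
        return 0.0
    hits = sum(1 for a in answers if brand in a.get("brands_mentioned", []))
    return hits / len(answers)

answers = [
    {"prompt": "best CRM for startups", "brands_mentioned": ["BrandA", "BrandB"]},
    {"prompt": "affordable CRM tools", "brands_mentioned": ["BrandB"]},
    {"prompt": "CRM with the best support", "brands_mentioned": ["BrandA"]},
]

print(f"{share_of_answer(answers, 'BrandA'):.0%}")  # 67%
# The trend over time, versus competitors, matters more than any snapshot.
```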
Why these metrics matter (reprise)
How you evaluate coverage has a sneaky way of determining the coverage you get, and thus the value you derive from it.
The first three of these metrics get at impact and address nuance better than simply asking “how much did we get?” The fourth, share of answer, is an important overlay when assessing overall performance.
As with a lot of sophisticated metrics, these require tools and a bit more time. But that is a price worth paying if you are spending on a PR program (which is never cheap).
Whatever you do, don’t treat your PR program metrics like a lead pipe you use to bludgeon rodents with.