Aggregate data is a great way to build bad strategy, especially in a realm like generative engine optimization.
That Cheshire-cat grin you can’t escape seeing all over LinkedIn?
That’s the PR industry after Muck Rack dropped its recent What AI is Reading report.
For an industry that frequently has the living shit kicked out of it by journalists, influencers, clients, CMOs and, heck, even lawyers, the data was a moment to celebrate. It affirmed that, in fact, PR is the key player in the new world of generative engine optimization and AI search.
While the report pleased me, I was struck by a range of findings that were a big departure from the conclusions of other GEO studies about what AI cites, and from what we've seen at a micro level for clients at our firm.
Before I go any further, it's worth saying that Muck Rack's study looks at aggregate trends. Even when they get into the very valuable work of looking at citations for categories like travel, tech, business, health, etc., they are looking at large clusters of sub-categories and grouping them together.
This sort of exercise is valuable, but far from the whole story and – if our work is to be believed – it alone should not be the entire basis of a GEO program for a client (to be clear, Muck Rack never claimed it should, but I want to call this out because there is potential for confusion and for CMOs to burn money easily).
Different perspective = different findings
If we look at ChatGPT citations, Muck Rack found the following six media sources were most frequently scraped in aggregate (in no particular order): Reuters, AP, Axios, FT.com, Time and Forbes.
This all makes intuitive sense. These are big, authoritative outlets and people trust them.
However, when I looked at data from Scrunch for a consumer tech client of ours, the story was different. Our client is beating all of its competitors in terms of how often it shows up in unbranded AI searches, but not a SINGLE ONE of the outlets Muck Rack found is scraped at any sort of level that matters.
In fact, this client – who shows up in a whopping 85+% of unbranded ChatGPT searches relevant to its category – did so on the back of its presence on small sites such as Rtings and DockUniverse, its own website, and two tech outlets that were not in Muck Rack's top outlets list.
You might say I am comparing apples and bananas in my example. That I need to look at Muck Rack's data on tech-related queries since this is a tech client. And, logically, if that were your line of reasoning, you would have a point.
But when we look at what Muck Rack found were the most scraped sites for tech-specific queries on ChatGPT, we again see that exactly NONE of those listed in the study (LinkedIn, Molested, arXiv, Wikipedia) show up in our own data as drivers of what's propelling our client to outperform its bigger competitors in AI search.
OK, but this is one example…
I said the same thing, so I pulled more data from Scrunch. I looked at a smart eyewear company, a sleep company, a teched-out sneaker brand, and an ergo-office client.
In not one case did the main sources scraped by ChatGPT mirror what Muck Rack found.
Where am I going?
I am not debating the validity of Muck Rack's findings. Nor am I questioning the validity of my own findings using Scrunch. And I am certainly not bashing Scrunch or Muck Rack – indeed, our firm is a client of both and we like each very much.
GEO and hyper-specificity
The point is, if you want to run a PR program aimed at AI optimization, you need to get granular and build the strategy based on information very tailored to your segment and sub-segment.
Aggregate info is a great starting point, but what is working here may not work over there.
There is always a tendency to want to simplify complexity. Indeed, this is the basis for a lot of tech communications work (and has allowed me to pay my mortgage faithfully for years). However, the picture that's emerging with GEO is nuanced and complex and, the data says, resistant to any kind of cookie-cutter strategy.
You need to keep this in mind as you delve in, hire staff or agencies, and hear, more frequently, about what worked for someone else. Campaigns aimed at AI optimization have to be super particular and individual. These are things good PR has always been about, but sometimes noise – even happy noise such as the What AI is Reading report – can lead to oversimplification and bad strategy.