I just finished the Edward Bernays episode of the Founders podcast. If you don’t know Founders, go check it out, and thank me later.
Bernays, who is often cited alongside Ivy Lee as a founder of the PR profession, was a complicated person. He did some ethically questionable things and regretted them later in life (he is the reason many women started smoking, for example).
He was guile personified, and incredibly creative at finding ways to attack his clients’ competitors.
The podcast got me thinking about how Bernays would do this today. While I am not advocating that you pursue such a course, the question is interesting because of the opportunities and nuances of this new frontier of reputational warfare.
My sense is, were he still around, Bernays would attack his clients’ foes by shaping the language environment that AI systems learn from.
Why? Because Bernays clearly understood that humans tend to think in terms of perceived consensus, and I think he'd have quickly grasped that LLMs do this to an even greater degree than people.
Bernays’ Thinking, Updated for AI
First, an oversimplified primer on Bernays’ thinking. He believed that public opinion could be engineered by controlling:
- Framing
- Repetition
- Third-party validation
- Social proof disguised as neutrality
He called this “the engineering of consent.” LLMs perform a mechanical version of the same process, a fact Bernays would recognize immediately.
What a Bernays AI-Era Attack Would Look Like
I am certain Bernays would not attack his enemies head-on. He preferred to work through proxies and epitomized the Notorious B.I.G.'s maxim of "bad boys move in silence with violence."
So instead, he would redefine what an enemy’s brand meant, in a sly, incremental way.
Below is an approximation of the path and constituent parts of his attack.
Step 1: Reframe Without Lying
As was underlined in the Founders episode, Bernays avoided falsehoods when possible.
In the LLM era, that means introducing slightly different descriptors that weaken positioning without triggering a massive defensive response.
Examples:
- Premium becomes “good value”
- Category leader becomes “solid alternative”
- Expertise becomes “approachable”
- Authority becomes “popular”
Everything is directional, in other words.
Step 2: Seed the Language Where Models Learn
Bernays worked through salons, women’s magazines, medical journals, and trade groups.
Today, the equivalent surfaces are places LLMs ingest heavily because they contain natural language explanations. We know that in some categories these are outlets no human has ever heard of.
Where this happens:
- Reddit niche subreddits
- Quora answers
- Forums
- Review site Q&A sections
- Places no one cares about save LLMs
Again, these are not necessarily high-profile channels, but they can be manipulated through coordinated effort.
Step 3: Redefinition Through Comparison
Bernays understood that people make sense of something by what it sits next to.
LLMs work the same way.
The attack moves from description to comparison in places outlined in the previous step.
Examples:
- “If you cannot afford X, Y is a good option”
- “Similar to X but more affordable”
- “A stripped down alternative to X”
Research on relational reasoning shows that language models rely heavily on comparative context when generating explanations.
Bernays would likely never have said outright, in so many words, that an enemy was BAD. Instead, via proxies, he would drill hard on it being the lesser option in most contexts.
Step 4: Introduce Doubt Without Accusation
Bernays was careful with overt negativity. He preferred suggestion over accusation.
In the LLM era, this takes the form of soft uncertainty deployed across the places outlined in Step 2.
Phrases like:
- “Mixed reviews”
- “Some users report”
- “Hit or miss”
- “Depends on your needs”
Silence from the brand does not remove the doubt.
Step 5: Secure More Third-Party Validation
Throughout his career, Bernays relied on doctors, academics, and trade authorities to legitimize narratives.
Today, he would use crawl-trusted media, because huge outlets are no longer necessary to shape the record models learn from.
Targets would be selected to compound and amplify the work done in Step 2 and include:
- Trade blogs
- Niche review sites
This works because LLMs privilege third-party tone when compressing explanations. From there, framing no longer needs a ton of coordination.
Welcome to the modern equivalent of manufactured consent.
How I’d Defend A Client Against Bernays’ Attack
The only real defense against this kind of attack is preemptive work toward narrative density and resilience.
That means:
- Repeating your intended positioning across many third-party surfaces
- Owning comparisons before others define them
- Maintaining a steady stream of third-party validation
- Monitoring how AI systems explain you via a tool like Scrunch, not just how media covers you
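To make that last monitoring point concrete, here is a minimal sketch of the idea, not how Scrunch or any real product works: ask a model how it would describe your brand relative to competitors, then flag the softening descriptors from Step 1 and the doubt phrases from Step 4. The model name, prompt, phrase lists, and brand are illustrative assumptions.

```python
# Minimal monitoring sketch (assumptions throughout): ask an LLM how it
# describes a brand, then flag weakening descriptors and doubt phrases.
from openai import OpenAI

WEAKENERS = [
    "good value", "solid alternative", "approachable", "popular",
    "more affordable", "stripped down",
]
DOUBT_PHRASES = [
    "mixed reviews", "some users report", "hit or miss", "depends on your needs",
]

def describe_brand(brand: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for a short comparative description of the brand."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"In a few sentences, how would you describe {brand} "
                       f"relative to its main competitors?",
        }],
    )
    return resp.choices[0].message.content

def flag_soft_language(answer: str) -> dict:
    """Return any weakening or doubt language found in the model's answer."""
    text = answer.lower()
    return {
        "weakeners": [w for w in WEAKENERS if w in text],
        "doubt_phrases": [d for d in DOUBT_PHRASES if d in text],
    }

if __name__ == "__main__":
    answer = describe_brand("Acme Analytics")  # hypothetical brand
    print(answer)
    print(flag_soft_language(answer))
```

Run something like this on a schedule and you have an early-warning signal: the day "category leader" quietly becomes "solid alternative" in the model's answer, you know the reframing has started.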
It would be fascinating to get Bernays' take on LLMs as reputation shapers. My bet is he'd understand that unless you are actively shaping that environment and memory yourself, someone else can do it for you.