The Operation
OpenAI's February 2026 disruption report documented a cluster of influence operations across multiple geographies. One of them, designated Operation No Bell by investigators, operated in sub-Saharan Africa. Its product was geopolitical commentary. Its distribution channel was regional African news websites. Its source: a scholar whose academic credentials were entirely fabricated.
The operation used AI-generated content to produce articles attributed to this invented academic. The articles were placed on local news platforms where they appeared alongside genuine reporting. Social media reach was described as limited. The operation ranked at the lower end of OpenAI's internal impact scale. What it lacked in scale, it demonstrated in methodology. The scaffolding was precise: a named author, plausible institutional affiliation, and a publication record assembled to give the content the appearance of independent expertise.
The Mechanism: Authority Manufacturing
The tactic is called authority manufacturing. It operates on a simple premise: audiences rarely evaluate arguments on their merits. They evaluate them through the filter of who appears to be making them. A claim attributed to a named professor at a recognized institution carries more weight than the identical claim made anonymously, regardless of the underlying evidence. This is authority bias: the cognitive shortcut that treats credentials as a proxy for correctness.
Authority manufacturing exploits this shortcut by constructing false credentials and routing propaganda through them. The operation does not need to change what the audience believes through argument. It needs to change who the audience believes is speaking. Once the source appears legitimate, the content inherits that legitimacy automatically.
The specific implementation varies. In the Cold War, Soviet active measures planted fabricated documents and attributed them to credible Western sources. In contemporary information operations, the construction is more granular: a name, a photo, institutional affiliations, a citation history, a social media presence with enough posts to appear real. The goal is to pass a threshold of surface credibility sufficient to prevent most readers from looking further.
"The question is never whether the expert is real. The question is whether the audience has reason to check. When they do not, fabricated authority is functionally indistinguishable from the genuine article."
The Evidence
OpenAI's report is worth reading as a structural document rather than a technology story. The AI component was a content production tool. The influence architecture underneath it was standard: a fabricated persona, a publication placement strategy, and a distribution network. What the report reveals is how modular modern influence operations have become. The persona layer, the content layer, and the distribution layer are each independently constructed and can be assembled quickly.
Operation No Bell targeted a specific geography and a specific audience: readers of regional African news who had no particular reason to question an academic byline on a geopolitical commentary piece. The operation did not need sophisticated psychological targeting. It needed only to slip past the cursory credibility check that most readers begin but rarely complete.
The articles appeared on real websites. Real readers encountered them. Whether the content shifted any opinion is unknown. The mechanism, however, was fully operational. The scaffolding worked exactly as designed until OpenAI flagged the accounts generating the content and documented the pattern.
The Counter-Read
Authority manufacturing fails when recipients actively verify the source rather than passively accept it. The check is simple and almost never performed: search the expert's name independently, look for their institutional page, find papers they have published through databases that predate the current operation, identify peers who cite their work. A fabricated academic has a thin trail because fabricated trails cost time and resources to build. The deeper you look, the more the scaffolding shows.
The structural tell is always in the asymmetry between apparent expertise and verifiable output. A genuine academic leaves years of cross-referencing: students, conference appearances, peer citations, institutional records. A manufactured persona leaves a surface layer and nothing behind it. The check takes three minutes. The cost of skipping it is consuming, and potentially forwarding, content designed specifically to bypass your judgment.
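The verification routine described above can be sketched as a checklist. This is an illustrative sketch, not a working lookup tool: the `PersonaRecord` fields and the `verify_persona` helper are hypothetical names, and in practice each check would query a live source (a citation database, an institutional directory, a web archive) rather than a pre-filled record.

```python
from dataclasses import dataclass

@dataclass
class PersonaRecord:
    """Hypothetical evidence gathered about a claimed expert."""
    institutional_page_found: bool = False
    indexed_papers: int = 0               # papers in databases predating the content
    citing_peers: int = 0                 # distinct scholars citing the work
    archived_presence_years: float = 0.0  # age of earliest archived trace

def verify_persona(record: PersonaRecord) -> list[str]:
    """Return the unmet checks; an empty list means the trail looks genuine."""
    failures = []
    if not record.institutional_page_found:
        failures.append("no institutional page")
    if record.indexed_papers == 0:
        failures.append("no independently indexed publications")
    if record.citing_peers == 0:
        failures.append("no citation trail from peers")
    if record.archived_presence_years < 2.0:
        failures.append("public record appeared too recently")
    return failures

# A manufactured persona typically fails every check at once:
fake = PersonaRecord()
print(verify_persona(fake))
```

The point of the sketch is the asymmetry it encodes: a genuine academic passes all four checks almost by accident, because the trail accumulates as a side effect of real work, while a fabricated persona must pay to construct each item separately.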
Markers of This Tactic
- Credentialed byline on content that advances a specific geopolitical position without acknowledging competing frameworks
- Institutional affiliation that is plausible but vague, a "Center for" something without a searchable address or faculty directory
- No verifiable citation trail: the expert is never cited by other experts in their field
- Publication history confined to outlets that do not have editorial review processes for contributor credentials
- Content that is technically polished but analytically shallow: it reads like expertise but does not withstand the questions a genuine expert would face
- Sudden appearance: the expert's public record begins recently with no archived presence before the operation launched
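No single marker is conclusive; the signal is co-occurrence. A minimal screening sketch, assuming each marker has been recorded as a boolean for a byline under review; the marker names and the threshold of 3 are illustrative choices, not empirically validated ones.

```python
# Hypothetical marker flags, one per bullet in the list above.
MARKERS = [
    "one_sided_geopolitical_framing",
    "vague_unsearchable_affiliation",
    "no_peer_citations",
    "outlets_without_credential_review",
    "polished_but_shallow_analysis",
    "no_presence_before_recent_date",
]

def screen(flags: dict[str, bool], threshold: int = 3) -> bool:
    """Return True when enough markers co-occur to warrant manual verification."""
    score = sum(flags.get(marker, False) for marker in MARKERS)
    return score >= threshold

# Four markers set out of six crosses the illustrative threshold:
suspect = {marker: True for marker in MARKERS[:4]}
print(screen(suspect))  # -> True
```

A screen like this only triages; a positive result should trigger the manual verification steps from the previous section, never a conclusion on its own.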
The Takeaway
The lesson from Operation No Bell is not about AI. AI made the content cheaper to produce at scale. The underlying operation, a fabricated expert laundering state-preferred narratives through the credibility of apparent scholarship, has been a documented influence tactic for decades. The mechanism works because authority bias is structural. We are trained from childhood to defer to credentialed sources. We are not trained to verify whether those credentials exist.
The practical implication runs in both directions. When you encounter expert opinion that supports a position someone powerful benefits from, the minimum response is verification. When you are evaluating the credibility of any source that appears in a context where credibility matters, the surface signal is not the answer. The surface signal is the question.