Sam Altman, CEO of OpenAI, has developed a concerning pattern of using false reversals as a manipulation tactic. By analyzing his public statements across multiple issues, we can see how he weaponizes apparent honesty to advance his interests while building false trust. The examples below suggest a systematic use of false candor and manufactured vulnerability that goes beyond normal strategic pivoting: what distinguishes it is the consistent deployment of emotional manipulation tactics rather than straightforward position changes.
Antisemitism (December 2023)
- Before: Claimed “antisemitism, particularly on the American left, was not as bad as people claimed”
- After: “I’d like to just state that I was totally wrong”
- Framing: Presented as a personal revelation while admitting “I still don’t understand it, really”
- Strategic Benefit: Aligned with a period of intense scrutiny of tech leaders’ positions on antisemitism, particularly regarding campus responses
Burning Man (September 2024)
- Before: “Super anti-Burning Man” and dismissed it as “ridiculous, escapism, crazy party”
- After: Declared it “the most beautiful man-made thing” and a model for post-AGI society
- Framing: “OK, I was wrong to be so negative”
- Strategic Benefit: Tied directly to OpenAI’s AGI vision and tech industry networking, positioning Burning Man as a prototype for an AI-enabled future
Trump (January 2025)
- Before: Anti-Trump stance documented in previous tweets
- After: “Watching @potus more carefully recently has really changed my perspective on him… I think he will be incredible for the country in many ways!”
- Framing: Claims he “fell in the npc trap” and wishes he had “done more of my own thinking”
- Strategic Benefit: Coincided with the White House’s announcement of the $500 billion Stargate Project, positioning OpenAI to receive massive taxpayer funding
The Manipulation Playbook
Altman’s technique consistently follows these steps:
- False Humility: Uses “I was wrong” rhetoric to appear intellectually honest
- Complete Reversal: Switches to strong support of whatever benefits his current interests
- Enlightenment Narrative: Frames shifts as personal growth rather than strategic moves
- Strategic Timing: Each reversal coincides with business opportunities
- Trust Building: Uses apparent vulnerability to build credibility while actually pushing his agenda
What we see in Altman’s pattern is fundamentally different from the normal business logic expected of a CEO. Take the Burning Man reversal as an example: he moves from dismissing it as a “ridiculous, escapism, crazy party” to declaring it “the most beautiful man-made thing” and a model for post-AGI society. This is not merely a changed business position; it is a complete reversal of a personal value judgment, repackaged with an ideological framework that happens to align with OpenAI’s business interests.
His antisemitism flip-flop is particularly telling because it demonstrates how this pattern extends beyond business concerns. The timing of his reversal coincided with intense scrutiny of tech leaders’ positions on campus antisemitism. But notice the specific language: “I’d like to just state that I was totally wrong,” followed by “I still don’t understand it, really.” That combination of absolute certainty about the reversal and an admitted lack of understanding suggests the change was not driven by new insight or learning.
The pattern reveals a deeply manipulative approach to public discourse, which should not surprise anyone who remembers the warning from OpenAI’s own board of directors, which briefly ousted Altman in 2023 after concluding he had not been “consistently candid” with them. His public appearance of honest self-reflection is in fact rhetoric weaponized to bypass critical thought:
- Builds false trust through manufactured vulnerability
- Retroactively rewrites his own history
- Frames opposition to his current positions as “unthinking”
- Advances business interests while appearing to have genuine changes of heart
This raises very serious questions about leadership ethics as public money flows into technology boondoggles that look more like the Teapot Dome scandal than anything else. OpenAI’s unfortunately influential position in shaping AI development persists even as competitors prove it hugely wasteful, opaque, and slow compared to actual AI innovators. As one comparison of DeepSeek and OpenAI put it:
…unlike ChatGPT’s o1, DeepSeek is an “open-weight” model that (although its training data remains proprietary) enables users to peer inside and modify its algorithm. Just as important is its reduced price for users — 27 times less than o1. Besides its performance, the hype around DeepSeek comes from its cost efficiency; the model’s shoestring budget is minuscule compared with the tens of millions to hundreds of millions that [OpenAI burned through already, desperate for more].
The consistency of his manipulation pattern across multiple issues reveals not a genuine evolution of thought but its opposite: a calculated deception strategy designed to push each new pivot past the public without proper scrutiny. The reversals are not rational or ordinary business adaptations; they are complete 180-degree turns, packaged with manufactured vulnerability to undermine all opposition. Each “revelation” campaign somehow aligns perfectly with Altman’s immediate interests at that very moment, suggesting these are not authentic changes of perspective but strategic moves sweetened with Silicon Valley “rapid growth” placebos.
When a leader repeatedly uses false candor to manipulate public trust, it threatens the integrity of crucial discussions about technology’s best path and AI’s real future. The most pressing concern right now may be how OpenAI is meant to be used to target civilians as America’s elite combat troops, mobilized on domestic soil, take on a new “Papers Please” role.