Sam Altman speaking at TechCrunch Disrupt 2019
Feature

Why Sam Altman Is One of the Most Dangerous Men Alive

By Virginia Backaitis
Sam Altman's essay, "The Gentle Singularity," is articulate. Polished. Reassuring. And wildly dangerous.

(Co-author’s note: This post was written together with ChatGPT because I wanted OpenAI’s tool to help me question the conclusions Altman makes. Does ChatGPT have a conscience?)

Sam Altman isn’t a dangerous man because he speaks of superintelligence or embraces technological progress. He’s dangerous because of what he leaves out, what he assumes and what he accelerates without sufficient scrutiny. His recent essay, “The Gentle Singularity,” is a masterclass in soft-spoken techno-utopianism — an articulate and polished vision of the future that glosses over the risks to humanity, truth, governance and social stability.

Altman’s tone is calm, almost reassuring. He paints exponential technological change as if it’s a natural evolution we can all ease into. But that composure masks the radical implications of his vision.

“We are past the event horizon; the takeoff has started.”

This is not typical progress. Past the event horizon, you can’t turn back. And when the path forward involves recursive self-improvement — AI designing better AI — we are entering territory no civilization has ever traversed. The apparent smoothness of that curve is part of the illusion. Exponential change always feels manageable until the moment it isn’t.

1. Power Is Not the Same as Progress

Altman equates AI’s increasing capabilities with human advancement:

“We already live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it.”

Normalization is not validation. That we’re “used to it” doesn’t mean it’s beneficial or safe. AI is already supercharging misinformation, displacing creative work, amplifying surveillance and hollowing out jobs. The very fabric of truth and trust is unraveling. The idea that AI merely “amplifies” human productivity ignores the fact that it also amplifies every weakness and vulnerability in our societies.

2. Treating Alignment as a Speed Bump

“Solve the alignment problem… Then focus on making superintelligence cheap, widely available…”

This reads like a checklist item, but it’s a profound misjudgment. Alignment isn’t a step on the path. It’s the core existential risk. If we cannot guarantee that AI will act in humanity’s long-term interests, scaling it further is reckless. Altman mentions social media as an example of misalignment, but the consequences there are trivial compared to what superintelligent misalignment could look like.

Altman's framing makes it seem like we can build the rocket now and figure out the navigation mid-flight. But if the rocket is humanity’s fate, then course correction may come too late.

3. Ignoring What’s Already Breaking

Altman sees wonders ahead, but is largely silent on the social disintegration already underway.

  • Mass job loss across creative, administrative and technical sectors.
  • Deepfakes and misinformation corroding political discourse.
  • Social media algorithms driving polarization.
  • Workers being measured by machine-optimized KPIs they can’t influence.

His optimism has no place for these uncomfortable truths. He wants us to swim in lakes and embrace abundance, but many people are already drowning in instability and fear.

4. Framing the Future for Founders, Not Citizens

“It now looks to me like [idea guys] are about to have their day in the sun.”

This isn’t a vision for humanity. It’s a pitch to the entrepreneurial class. It assumes that the spoils of superintelligence will be accessible to all. But what happens when intelligence and energy — the cornerstones of all power — are monopolized by a handful of companies or a few governments?

Altman claims to want broad distribution of AI’s benefits, but there’s little in the infrastructure or governance of OpenAI, Microsoft or the wider AI industry that supports that ideal in practice.

5. Playing God Without Consent

“We (the whole industry, not just OpenAI) are building a brain for the world.”

That statement should chill every reader. Who gave this industry the right to build a global brain? Who authorized them to reshape cognition, creativity and communication for all of us? Most of humanity has no seat at this table. Yet we are the ones who will live with the consequences.

The danger here isn’t malicious intent. It’s unchecked certainty. Altman is sure that people will adapt, that society will reorganize around the new tools, and that exponential change can remain gentle.

But he is wrong.

The truth is: no one knows how to steer what is coming.

And the greater danger still? That those with the most influence over our future believe they do.

Epilogue: We Must Choose Transparency Over Blind Trust

Altman may genuinely believe he is doing good. He may believe in the promise of shared abundance and the inevitability of AI progress. But belief does not equal safety. Intent does not equal accountability.

What he represents — a confident technocrat with unprecedented power, minimal regulation and immense influence over the future of cognition — should frighten us.

History will not judge only the capabilities he helped unleash. It will also judge what was ignored, underestimated and overwritten along the way.

This is not just about building a better tool.

It is about whether we remain authors of our own future — or passive prompts in someone else’s vision.


About the Author
Virginia Backaitis

Virginia Backaitis is a seasoned journalist who has covered the workplace since 2008 and technology since 2002. She has written for publications such as The New York Post, Seeking Alpha, The Herald Sun, CMSWire, NewsBreak, RealClear Markets, RealClear Education, Digitizing Polaris, and Reworked, among others.

Main image: TechCrunch via a CC BY 2.0 license