Something strange is happening online, and even the CEO of OpenAI has noticed. Sam Altman says real people are beginning to sound like chatbots.
In a post on X (formerly Twitter), Altman admitted he often finds himself assuming discussions on Reddit or Twitter are full of bots even when the posters are actually human. “Real people have picked up quirks of LLM-speak,” he wrote, referring to the language patterns of large language models (LLMs) like ChatGPT.
Altman said that when he checked Reddit threads about OpenAI’s Codex coding tool, genuine users were debating its features. But the tone was so uniform, so machine-like, that it felt fake. “I assume it’s all fake/bots,” he said, “but it’s actually real humans.”
WHY DOES ONLINE SPEECH FEEL “FAKE”?
Altman pointed to a few reasons why online spaces suddenly feel less authentic:
- The “Extremely Online” crowd tends to adopt the same style of posting, which often mirrors the way LLMs write.
- The internet’s hype cycles, swinging between “it’s over” and “we’re so back,” push people toward exaggerated language.
- Social platforms reward engagement at all costs, encouraging content that sounds polished, simplified, or algorithm-friendly.
The result? Forums, especially those about AI, can feel staged. Posts blend together in a way that makes it difficult to tell whether a human or a bot is behind the keyboard.
THE ETHICAL DILEMMA
On one hand, the fact that people are picking up “AI quirks” isn’t surprising. We mimic the tools we use every day. Just as texting shaped our shorthand and emojis changed how we express emotion, AI is now influencing syntax, tone, and even rhetorical habits.
On the other hand, there’s something unsettling about Altman’s observation. If everything reads like it came from a chatbot, then the internet risks becoming a hall of mirrors—one where it’s impossible to know whether we’re engaging with a genuine human perspective or just an echo of machine-style phrasing.
For a platform like Reddit or Twitter, this creates a feeling of unreality. For law and policy, it poses risks around consent, misrepresentation, and informed decision-making.
THE BIGGER PICTURE: LANGUAGE, POWER, AND DEMOCRATIC LIFE
Altman’s casual remark also points to something deeper than stylistic drift. Language has always been a site of power. When legal scholars debate precedent, when citizens petition governments, or when courts interpret contracts, the nuances of human expression matter. If AI begins shaping not just how we search or draft, but how we sound to one another, the ripple effects extend far beyond Reddit threads.
In democratic societies, authentic voice is tied to legitimacy. Citizens rely on the ability to tell who is speaking, with what authority, and for what purpose. If every comment starts to feel like an echo of algorithmic phrasing, it risks dulling the edge of dissent, flattening diversity of expression, and weakening the human element that underpins public discourse.
This is why Altman’s concern resonates so strongly. It is not just about whether the internet feels “fake.” It is about whether our collective conversations about law, governance, or culture retain the richness and unpredictability of human thought. Without safeguards, we may wake up in a world where trust in language itself is eroded, making it harder for courts, regulators, and citizens to separate genuine intention from machine-mediated noise.