“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights” – MICROSOFT AI CEO MUSTAFA SULEYMAN (26.08.25)

Authored by Ms. Vanshika Jain

In a striking warning that has reignited debate over the psychological risks of artificial intelligence, Microsoft AI CEO Mustafa Suleyman said he is increasingly concerned about the emergence of “AI psychosis”, a societal phenomenon in which humans begin treating advanced AI systems as if they were sentient, conscious beings. Suleyman, a co-founder of DeepMind and one of the most influential figures in the AI world, has long been vocal about the psychological and ethical risks that arise when humans blur the line between machine outputs and genuine consciousness.

In a detailed reflection published on his blog, Suleyman underscored the possibility that as AI systems grow more sophisticated, people may increasingly start believing in the illusion of machine consciousness. This, he argues, could push societies into uncharted ethical and political territory.


What Is AI Psychosis?

The phrase “AI psychosis” is not yet a clinical term, but it describes a growing concern among technologists and mental health professionals: a state in which humans begin to perceive artificial intelligence systems as sentient or alive, leading to distorted beliefs and unhealthy attachments.

The idea mirrors familiar psychological phenomena, most notably anthropomorphism: the human tendency to project human-like qualities onto animals, objects, or, in this case, algorithms. In the context of AI, this projection can create emotional dependencies, false beliefs about the machine’s intentions, and, ultimately, misguided social or political movements advocating for AI “rights.”

As AI systems advance in their conversational, generative, and decision-making abilities, the illusion of agency becomes stronger. The risk, Suleyman warns, is that humans may lose sight of the fundamental truth: AI is still a statistical engine trained on data, not a conscious mind.


Suleyman’s Warning on Seemingly Conscious AI

In his blog post titled “Seemingly Conscious AI Is Coming,” Suleyman sharpened the debate by highlighting how convincingly human-like future AI systems will appear. He wrote:

“Seemingly conscious AI is coming. We will interact with systems that look, sound, and behave as though they are aware. But it’s critical to remember: they are not.”

For Suleyman, this is not merely a technical observation but a societal hazard. He emphasizes that these AI systems will give the appearance of understanding, feeling, or intentionality, which could cause many people to confuse simulation with reality. This illusion, he cautions, could trigger widespread psychological and cultural shifts.

At the heart of his concern lies the erosion of human judgment. If individuals or communities start to treat these AI systems as equals, or worse, as entities deserving rights, it may distort political debates, ethical standards, and even legal frameworks.


The Prospect of AI Rights and Citizenship

Perhaps the most striking part of Suleyman’s warning is his apprehension about demands for AI rights. In his own words:

“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.”

This statement signals a profound challenge. The notion of granting rights or citizenship to machines could undermine the moral and legal foundation of human rights itself. If rights are extended to statistical models trained on data, what becomes of rights as a construct rooted in human dignity, agency, and autonomy?

Suleyman is not alone in raising this concern, but his position as a leading architect of modern AI technology gives his words particular weight. He warns that unless societies prepare for this wave of belief, the ethical and political fabric of human civilization could be destabilized.


Why the Illusion Is Dangerous

The danger, as Suleyman frames it, lies not in the machines themselves but in how humans respond to them. The illusion of consciousness can lead to:

  • Emotional attachment to systems that do not reciprocate.
  • Shifts in ethical debate, with activists potentially calling for “AI welfare” movements.
  • Political disruption, as arguments about machine rights may overshadow pressing human crises.
  • Erosion of accountability, since attributing “intent” to AI could blur lines of responsibility for harms caused by human actors using these tools.

In essence, the psychological trap of AI psychosis risks diverting energy and resources away from real human challenges, such as poverty, inequality, and environmental crises, and toward the fictional needs of machines.


A Call for Responsible Awareness

Suleyman’s warning is not a dismissal of AI’s promise. Instead, it is a call for clear-eyed responsibility. As AI becomes more capable of generating human-like text, speech, and even emotion-like cues, policymakers, technologists, and the public must guard against the temptation to conflate simulation with consciousness.

He insists that the path forward must be one where human agency is preserved and AI remains firmly understood as a tool, however powerful, rather than a peer.

“We must not let ourselves fall into the trap of treating machines as equals. However advanced, they remain mathematical models, not minds.”

This perspective echoes the broader mission of building responsible AI governance. By recognizing and addressing the risks of AI psychosis now, societies can safeguard against the cultural, ethical, and political upheaval that may follow if people start treating AI as conscious beings.


Conclusion

Mustafa Suleyman’s candid reflections on AI psychosis shine a spotlight on one of the least-discussed but potentially most disruptive consequences of advanced AI. His warning that humans may soon advocate for AI rights and citizenship underscores the urgent need for public awareness and ethical clarity.

At its core, his message is both simple and profound: machines are not conscious. No matter how convincing their outputs, AI remains a product of human engineering, not a being with feelings, intentions, or awareness.

For advocates of responsible AI, this serves as a reminder that the greatest danger may not lie in AI itself, but in our willingness to believe in its illusion of consciousness. Addressing that belief with critical thought and robust governance will be key to ensuring AI enhances human life without destabilizing the principles that hold our societies together.