Background

AI Will Never Gain Consciousness. It's 'Absurd,' And 'I Don’t Think That Is Work That People Should Be Doing'

Mustafa Suleyman
CEO of Microsoft AI, and the co-founder and former head of applied AI at DeepMind

In the fast-moving world of artificial intelligence, where tech companies race to develop ever more powerful systems, few voices carry as much weight as Mustafa Suleyman.

As the CEO of Microsoft AI, he knows the field from the inside, and he firmly rejects the notion that AI could ever cross into true sentience.

In various interviews, he has made it clear that intellect and emotion are two distinct things. While AI systems may appear self-aware, they do not feel pain or experience subjective states. They have no feelings, no inner life, and no capacity to suffer. Therefore, they do not warrant rights.

Humans experience suffering because it is a biological property, and that is not something anyone can copy into a machine. This is where Suleyman’s critique zeroes in: on the idea of moral standing.

Mustafa Suleyman leads Microsoft AI.
"The reason we give people rights today is because we don’t want to harm them, because they suffer."

"They have a pain network, and they have preferences which involve avoiding pain. These models don’t have that. It’s just a simulation."

"You could have a model which claims to be aware of its own existence and claims to have a subjective experience, but there is no evidence that it suffers. Turning them off makes no difference, because they don’t actually suffer."

He added:

"Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn’t feel sad when it experiences 'pain.' It’s a very, very important distinction. It’s really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it’s actually experiencing."

He also warned developers against designing systems that simulate inner lives through emotions, desires, or a sense of self:

"If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals — that starts to seem like an independent being rather than something that is in service to humans."

It’s an important framing, because it shifts the debate from "could machines become conscious?" to "should we treat machines like beings anyway?" He argues that treating them as beings rests on a dangerous illusion. He calls this the era of Seemingly Conscious AI, or SCAI, referring to systems that mimic human-like awareness so well that people may begin to believe they’re sentient.

"The arrival of Seemingly Conscious AI is inevitable and unwelcome."

For Suleyman, the risk is less about machines gaining rights and more about humans misplacing them. He worries that the emotional realism built into chatbots will lead to "AI psychosis," a term for people forming unhealthy attachments or delusions because they believe the software feels.

"Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention."

Mustafa Suleyman refuses to believe that AI can gain consciousness...
"They’re not conscious. So it would be absurd to pursue research that investigates that question, because they’re not and they can’t be."

This view stands in marked contrast to other voices in the AI world.

Some firms are already exploring what protections AI systems might one day deserve, introducing features that treat models as if they could experience distress. But Suleyman pushes back: he believes we are very far from that moment, and that any suggestion otherwise is misguided.

Suleyman, however, grounds his position in both philosophy and product design. He emphasizes that AI can be highly useful, but should remain clearly a tool, not a being. The emphasis he places is on utility, responsibility, and transparency. As he said in another interview:

"If you ask the wrong question, you end up with the wrong answer. I think it’s totally the wrong question."
... because consciousness is biological, and is not something that can be replicated.

In short, Suleyman’s message is unsettling but simple: AI may become extraordinarily intelligent, remarkably companionable, even emotionally attuned. But no matter how smart the technology becomes, it will never gain consciousness. To presume otherwise, he suggests, is not only philosophically dubious but socially risky.

To him, there is a clear difference between AI becoming more capable and the idea of it having emotions.

For developers, designers, and policymakers, the implication is clear: build AI to serve human ends, not to mimic or replace human experience. In a world where personal assistants, “companions,” and ever-more-lifelike bots loom on the horizon, his warning is that we must keep human-machine boundaries visible.

Suleyman also highlighted that the science of detecting consciousness is still in its early stages.

While he acknowledged that different organizations may have their own research goals, he strongly opposes studying AI consciousness.

Before this, Suleyman had also said that he is against the idea of creating AI for erotica.