In the fast-moving world of artificial intelligence, where tech companies race to develop ever better and more powerful AIs, few voices carry as much weight as Mustafa Suleyman.
As the CEO of Microsoft AI, he knows his way around the topic, and with that in mind, he firmly rejects the notion that AI could ever cross into true sentience.
In various interviews, he has made it clear that intellect and emotion are two distinct things. While AI systems may appear self-aware, they do not feel pain or experience subjective states. They have no feelings, no consciousness, and no capacity to suffer in any case; therefore, they do not warrant rights.
Humans experience suffering because it is a biological property, and that is not something anyone can copy into a machine. This is where Suleyman’s critique zeroes in: on the idea of moral standing.

"They have a pain network, and they have preferences which involve avoiding pain. These models don’t have that. It’s just a simulation."
"You could have a model which claims to be aware of its own existence and claims to have a subjective experience, but there is no evidence that it suffers. Turning them off makes no difference, because they don’t actually suffer."
He added that:
He also warned developers against designing systems that simulate inner lives through emotions, desire or a sense of self:
It’s an important framing, because it shifts the debate from "could machines become conscious?" to "should we treat machines like beings anyway?" He argues the latter is a dangerous illusion. He calls this the era of Seemingly Conscious AI, or SCAI, referring to systems that mimic human-like awareness so well that people may begin to believe they’re sentient.
For Suleyman, the risk is less about machines gaining rights and more about humans misplacing them. He worries that the emotional realism built into chatbots will lead to "AI psychosis," a term he uses to describe people forming unhealthy attachments or delusions because they believe the software feels.

This view stands in marked contrast to other voices in the AI world.
Some firms are already exploring what protections AI systems might one day deserve, introducing features that treat models as if they could experience distress. But Suleyman pushes back: he believes we are very far from that moment, and that any suggestion otherwise is misguided.
Suleyman, however, positions his argument in both philosophy and product design. He emphasizes that AI can be highly useful but should remain clearly a tool, not a being. The emphasis he places is on utility, responsibility, and transparency, a point he has repeated across interviews.

In short, Suleyman’s message is unsettling but simple: AI may become extraordinarily intelligent, remarkably companionable, even emotionally attuned. But no matter how smart the technology becomes, it will never gain consciousness. To presume otherwise, he suggests, is not only philosophically dubious but socially risky.
To him, there is a clear difference between AI becoming more capable and the idea of it having emotions.
For developers, designers and policymakers, the implication is clear: they should build AI to serve human ends, and not to mimic or replace human experience. In a world where personal assistants, “companions,” and ever-more-lifelike bots loom on the horizon, his warning is that we must keep human-machine boundaries visible.
Suleyman also highlighted that the science of detecting consciousness is still in its early stages.
While he acknowledged that different organizations may have their own research goals, he strongly opposed studying AI consciousness.
Before this, Suleyman had also said that he is against the idea of creating AI for erotica.