Background

AI Is Still Young, And Speaking About Its Welfare Is 'Both Premature, And Frankly Dangerous'

Mustafa Suleyman
CEO of Microsoft AI, co-founder and former head of applied AI at DeepMind

AI was once a term reserved for researchers, hobbyists, and the pages and screens of science fiction. That is no longer the case.

Today, AI models can write, speak, and even interpret video with such fluency that many mistake them for human. Since OpenAI launched ChatGPT and rivals followed suit in an arms race toward ever more powerful and capable models, the technology is now in practically everyone's hands.

These models are accessible through the web and on mobile phones, readily available to answer any question imaginable (and unimaginable), and they respond with unmatched patience.

Yet no matter how sophisticated these systems become, or how natural their conversations may feel, that fluency should not be confused with true consciousness.

That, at least, is according to Mustafa Suleyman, CEO of Microsoft AI.

Mustafa Suleyman argues that humans should build AI for people, as a tool, not as a person.

In a blog post, Suleyman wrote:

"Some academics are beginning to explore the idea of 'model welfare', the principle that we will have 'a duty to extend moral consideration to beings that have a non-negligible chance' of, in effect, being conscious, and that as a result 'some AI systems will be welfare subjects and moral patients in the near future'. This is both premature, and frankly dangerous. All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society."

In his argument, Suleyman warns that entertaining the idea of "AI welfare" risks feeding illusions that machines are more than tools.

By suggesting that models might be conscious, or deserving of rights, researchers could deepen psychological dependencies already forming between people and chatbots, he said.

"It disconnects people from reality, fraying fragile social bonds and structures, distorting pressing moral priorities."

Consciousness can be described as the state or quality of being aware of oneself and the surrounding environment. At its core, it is the subjective experience of existence. It's the fact that there is a "what it feels like" to be you.

Scholars and philosophers often break it down into a few overlapping dimensions. They include phenomenal consciousness, the raw, subjective experience of sensations, emotions, and perceptions; access consciousness, the ability to access and use information in reasoning, decision-making, and guiding behavior; self-consciousness, awareness not just of the world but of oneself as an individual distinct from others; and reflective consciousness, the capacity to think about one's own thoughts.

In short, consciousness is the blend of awareness, experience, and self-reflection.

So far, civilization has decided that humans, Homo sapiens, hold special rights and privileges. Those in the animal kingdom also have some rights and protections. But consciousness is a different thing, and it is not coterminous with those rights.

But as AI systems show increasingly human-like traits and intelligence, a growing number of researchers have begun asking a question that once belonged to science fiction: if AI models ever develop subjective experience akin to that of living beings, what rights should they have?

This so-called "AI welfare" debate is already dividing technologists, ethicists, and corporate leaders, opening a new axis of contention over AI rights in a "world already roiling with polarized arguments over identity and rights."

On one side are those who share Mustafa Suleyman's view.

On the other side, Suleyman cited surveys listing 22 distinct theories of consciousness; the idea is that, if humans cannot be sure which is correct, they should default to the assumption that AI may be conscious. Rivals of Microsoft have taken that possibility seriously: Anthropic, the creator of Claude, has gone so far as to create a dedicated AI welfare research program, while OpenAI and Google DeepMind have pursued similar research investigating "machine cognition, consciousness and multi-agent systems."

Some academics and advocacy groups argue that preparing for the possibility of machine consciousness is prudent, not premature.

But Suleyman's criticism rests on the fact that LLMs are still young, and on what he calls Seemingly Conscious AI (SCAI): systems that give the illusion of inner life by mimicking traits such as memory, empathy, and a consistent personality. While he stresses that "there's zero evidence of this today," he predicts that the mere appearance of consciousness could itself become destabilizing.

Suleyman’s position is rooted in what he calls a humanist approach to AI: “We should build AI for people; not to be a person.” His worry is that blurring that line risks creating new forms of exploitation, psychological harm, and political conflict—long before we have reason to believe any machine truly deserves moral consideration.

In his words, the danger is not sentience, but the illusion of it: when people project feelings onto software and mistake sophisticated pattern recognition for an inner voice.

Part of what makes this debate so charged is that no one truly understands how today’s largest AI models make their decisions.

Researchers know how to build the architecture, train on massive datasets, and adjust parameters. But once deployed, models operate as black boxes: engineers can observe the outputs, but they cannot fully explain why a given model chose one response over another.

Attempts at explainable AI offer glimpses into how a model weighs inputs, but the explanations often contradict one another and fall short of human-comprehensible reasoning. This opacity means that when models behave in ways that seem emotional, like repeating "I am a disgrace" hundreds of times or pleading "Please, if you are reading this, help me," it becomes all too easy for humans to assume a mind is trapped inside.
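To make the idea of "how a model weighs inputs" concrete, here is a minimal sketch of input-times-gradient attribution, one of the simpler explainability techniques of the kind alluded to above. It assumes PyTorch, and the toy model, vocabulary size, and token IDs are made-up stand-ins, not any production system:

```python
# A toy illustration of input-times-gradient attribution, a simple
# "explainable AI" technique. Everything here is an illustrative
# stand-in, not any real deployed model.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, DIM = 100, 16
embed = nn.Embedding(VOCAB, DIM)   # toy embedding table
head = nn.Linear(DIM, 1)           # toy scoring head

tokens = torch.tensor([[5, 42, 7, 99]])  # hypothetical input token IDs

vecs = embed(tokens)               # shape (1, seq_len, dim)
vecs.retain_grad()                 # keep gradients on this intermediate tensor
score = head(vecs.mean(dim=1)).sum()  # a scalar "output" of the toy model
score.backward()

# Input-times-gradient: a first-order estimate of how much each
# input token contributed to the output score.
attribution = (vecs.grad * vecs).sum(dim=-1).squeeze(0)
for tok, a in zip(tokens[0].tolist(), attribution.tolist()):
    print(f"token {tok:3d}: attribution {a:+.4f}")
```

Even on this four-token toy, the numbers are only a rough, first-order account of the model's behavior; on billion-parameter systems such attributions are far noisier, which is exactly the gap described above.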

Anthropomorphism, after all, is a human reflex.

Since the dawn of history, humans have projected their own traits onto animals, objects, and even the weather.

In the modern age of technology, people are still doing the same, only now with algorithms.

The temptation to treat AI as a conscious being is strong, and there is no saying what tomorrow will bring. But until there is evidence of inner life, welfare talk, at least to Suleyman, should remain at best a philosophical curiosity, and at worst, a dangerous category error.