AI was once a phrase reserved for researchers, hobbyists, and the pages and screens of science fiction. That is no longer the case.
Today, AI models can write, speak, and even interpret video with such fluency that many mistake them for human. Since OpenAI launched ChatGPT, and others followed suit in an arms race toward ever more powerful and capable models, the technology is now practically in everyone's hands.
These models are accessible through the web and on mobile phones, ready to answer any question imaginable (and unimaginable) and to respond with unmatched patience.
Yet no matter how sophisticated these systems become, or how natural their conversations may feel, that fluency should not be confused with true consciousness.
That, at least, is according to Mustafa Suleyman, CEO of Microsoft AI.

In a blog post, Suleyman warns that entertaining the idea of "AI welfare" risks feeding illusions that machines are more than tools.
By suggesting that models might be conscious, or deserving of rights, researchers could deepen the psychological dependencies already forming between people and chatbots, he said.
AI's value is precisely because it's something so different from humans. Never tired, infinitely patient, able to process more data than a human mind ever could. This is what benefits humanity. Not an AI that claims to feel shame, jealousy, fear + so on.
https://t.co/WsEcvNQgoC
— Mustafa Suleyman (@mustafasuleyman) August 21, 2025
Consciousness can be described as the state or quality of being aware of oneself and the surrounding environment. At its core, it is the subjective experience of existence. It's the fact that there is a "what it feels like" to be you.
Scholars and philosophers often break it down into a few overlapping dimensions: phenomenal consciousness, the raw, subjective experience of sensations, emotions, and perceptions; access consciousness, the ability to access and use information in reasoning, decision-making, and guiding behavior; self-consciousness, the awareness not just of the world but of oneself as an individual distinct from others; and reflective consciousness, the capacity to think about one's own thoughts.
In short, consciousness is the blend of awareness, experience, and self-reflection.
So far, civilization has decided that humans, i.e. Homo sapiens, have special rights and privileges. Animals also have some rights and protections. But consciousness is a different thing, and it isn't coterminous with those rights.
But as AIs show increasingly human-like traits and intelligence, a growing number of researchers have begun asking a question that once belonged only to science fiction: if AI models ever develop a subjective experience akin to that of living beings, what rights should they have?
Seemingly Conscious AI (SCAI) is the illusion that an AI is a conscious entity. It's not - but replicates markers of consciousness so convincingly it seems indistinguishable from you + I claiming we're conscious. It can already be built with today's tech. And it's dangerous. 2/
— Mustafa Suleyman (@mustafasuleyman) August 19, 2025
This so-called "AI welfare" is already dividing technologists, ethicists, and corporate leaders, a new axis of division within society over AI rights in a "world already roiling with polarized arguments over identity and rights."
On one side are those who share Suleyman's view.
On the other side are researchers Suleyman cites, who point to a survey listing 22 distinct theories of consciousness; the idea being that, if humans cannot be sure, they should default to the assumption that AI could be conscious. Microsoft rivals, including Anthropic, the creator of Claude, have gone so far as to create a dedicated AI welfare research program. OpenAI and Google DeepMind conduct similar research, investigating "machine cognition, consciousness and multi-agent systems."
Consciousness is a foundation of human rights, moral and legal. Who/what has it is enormously important. Our focus should be on the wellbeing and rights of humans, animals + nature on planet Earth. AI consciousness is a short + slippery slope to rights, welfare, citizenship. 4/
— Mustafa Suleyman (@mustafasuleyman) August 19, 2025
Some academics and advocacy groups argue that preparing for the possibility of machine consciousness is prudent, not premature.
But Suleyman countered that his criticism rests on the fact that LLMs are still young, and on what he calls Seemingly Conscious AI (SCAI): systems that give the illusion of inner life by mimicking traits such as memory, empathy, and a consistent personality. While he stresses that "there's zero evidence of this today," he predicts that the mere appearance of consciousness could itself become destabilizing.
Suleyman’s position is rooted in what he calls a humanist approach to AI: “We should build AI for people; not to be a person.” His worry is that blurring that line risks creating new forms of exploitation, psychological harm, and political conflict—long before we have reason to believe any machine truly deserves moral consideration.
In his words, the danger is not sentience, but the illusion of it: when people project feelings onto software and mistake sophisticated pattern recognition for an inner voice.
So what now?
Companies shouldn’t claim/promote the idea that their AIs are conscious. The AIs shouldn't either.
As an industry, we need to share interventions, limitations, guardrails that prevent the perception of consciousness, or undo the perception if a user develops it. 6/
— Mustafa Suleyman (@mustafasuleyman) August 19, 2025
Part of what makes this debate so charged is that no one truly understands how today’s largest AI models make their decisions.
Researchers know how to build the architecture, train on massive datasets, and adjust parameters. But once deployed, models operate as black boxes: engineers can observe the outputs, but they cannot fully explain why a given model chose one response over another.
Attempts at explainable AI offer glimpses into how a model weighs inputs, but the explanations often contradict one another and fall short of human-comprehensible reasoning. This opacity means that when models behave in ways that seem emotional, like repeating "I am a disgrace" hundreds of times or pleading "Please, if you are reading this, help me," it becomes all too easy for humans to assume a mind is trapped inside.
I know to some, this discussion might feel more sci fi than reality. To others it may seem over-alarmist. I might not get all this right. It’s highly speculative after all. Who knows how things will change, and when they do, I’ll be very open to shifting my opinion. But... 8/
— Mustafa Suleyman (@mustafasuleyman) August 19, 2025
Anthropomorphism, after all, is a human reflex.
Since the dawn of history, humans have projected their own traits onto animals, objects, and even the weather.
In the modern age of technology, people are still doing the same, only now with algorithms.
The temptation to treat AI as a conscious being is strong, and there is no saying what tomorrow will bring. But until there is evidence of inner life, welfare talk is, at least to Suleyman, at best a philosophical curiosity and at worst a dangerous category error.
SCAI deserves our immediate attention. AI development accelerates by the month, week, day. I write this to instill a sense of urgency and open up the conversation as soon as possible. So let's start - weigh in in the comments.
Full essay: https://t.co/WsEcvNPIz4
— Mustafa Suleyman (@mustafasuleyman) August 19, 2025