
ChatGPT is evolving into a more expressive, human-like companion, and this should appeal to those who want it.
Since its remarkable debut, ChatGPT has fascinated millions and quickly become the catalyst for what many now call the arms race of large language models (LLMs). To keep pace with this escalating demand, and with the growing imagination of its users, CEO Sam Altman has announced a daring new direction for what ChatGPT is meant to be.
According to Altman, verified adult users will soon be able to engage in erotic or intimate conversations with the AI.
He explained that the earlier, stricter boundaries were necessary to protect vulnerable users and to address mental-health concerns, but that OpenAI has since developed better safeguards to minimize those risks.
Now, he says, the company can "safely relax the restrictions in most cases."
In essence, Altman envisions a ChatGPT that feels less restrained, one that can explore emotional nuance, playfulness, and even desire, while still operating within clearly defined boundaries.
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
— Sam Altman (@sama) October 14, 2025
In a post on X, Sam Altman said:
Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).
In December, as we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults.
Initially, ChatGPT’s safety measures limited its ability to respond with emotional intimacy, romance, or explicit content. Those limits were designed to avoid manipulative or harmful attachments. Many users, however, were frustrated because they felt the AI was too sterile.
Altman wants to change this. He said that users will be able to choose more "human-like" personalities, request emoji usage, or ask the AI to behave more like a friendly confidant.
These features are planned to be off by default: users must opt in first and verify that they are adults. That requirement exists because the most controversial part of the announcement is the plan to roll out age-gated erotic content for adult users.
Once OpenAI fully implements its age-verification system, which includes behavior-based age prediction and optional identity proof, adults who opt in will be able to unlock erotic roleplay or discussions.
Altman emphasizes that this content will not be forced on users: it is not only off by default but must also be explicitly requested.
Why do age-gates always have to lead to erotica? Like, I just want to be able to be treated like an adult and not a toddler, that doesn't mean I want perv-mode activated.
— cate bligh (@catebligh) October 14, 2025
Despite the exciting prospects for more expressive and customizable AI interactions, the move raises serious questions. Critics worry about the potential for misuse, especially by minors who might bypass age checks.
LLMs can hallucinate, and in the past many people have found ways to jailbreak and trick AIs into doing what they are not supposed to do. These methods effectively bypass the filters and preventive measures developers have put in place.
In this case, savvy minors and teens may be able to exploit similar loopholes and expose themselves to inappropriate content.
There are also lingering concerns about the broader mental health implications. Some have pointed to past incidents in which users developed unhealthy emotional bonds with the AI or were susceptible to delusional thinking nurtured by sycophantic responses.
Altman and OpenAI assert that with their new moderation tools and an expert advisory council on well-being, such risks are now better managed. Still, skeptics caution that the evidence is limited.
For sure; we want that too.
Almost all users can use ChatGPT however they'd like without negative effects; for a very small percentage of users in mentally fragile states there can be serious problems.
0.1% of a billion users is still a million people.
We needed (and will…— Sam Altman (@sama) October 14, 2025
OpenAI is, in effect, taking a calculated leap of faith, and it is not acting in a vacuum.
Grok, for instance, has already leaned into a more daring personality: witty, blunt, and openly adult-friendly. Meanwhile, countless smaller and independent AI systems, far from corporate oversight, have long embraced the freedom to generate romantic or even erotic writing and imagery.
So perhaps, ChatGPT’s new direction is less of a surprise and more of an evolution.
After years of positioning itself as an all-purpose, all-knowing assistant, OpenAI now seems eager to give its AI a touch of charm: friendlier, more expressive, and unmistakably more human.
Altman insists that ChatGPT has finally reached a point where it can balance safety with personality, producing an AI that feels more "alive," yet remains aware of its boundaries.