Background

OpenAI Pulls ChatGPT Share Feature After Search Engines Start Indexing Private Queries


When it comes to the web, everything begins and ends with links. They are the lifelines of the internet — threading pages, platforms, and people together in a vast digital tapestry.

More than just simple connectors, links are signals. They whisper relevance, shout authority, and quietly build reputations in the background of every click. Unless hidden behind a login page or a paywall, links also guide search engines through a website's architecture, helping them index its content and understand its hierarchy.

In other words, if something exists on the internet, it must have a link.

And OpenAI made one of the worst mistakes a company can make when it comes to privacy.

When the company introduced a feature allowing users to share ChatGPT conversations via unique URLs, it may have underestimated the web's unforgiving nature. These shareable links are indexable links, meaning that sensitive queries and private thoughts ChatGPT users had typed became accessible to anyone with the URL.

And worse, many of these links were indexed by search engines, meaning that they became discoverable, searchable, immortalized.

[Screenshots: examples of leaked ChatGPT shared conversations]

The feature in question is ChatGPT's 'Share' feature.

Users who thought they were confiding private conversations to a bot as if it were a friend may have inadvertently made those conversations searchable on Google, Bing, and other search engines.

According to reports, thousands upon thousands of ChatGPT conversations shared via public links were crawled and indexed. What makes this troubling is that many of these conversations contained deeply personal or sensitive content, covering trauma, mental health, work issues, and even confidential business discussions.

And making things worse, many of the conversations were exposed without users realizing what they had agreed to.

The reason is that ChatGPT allowed users to generate a public shareable link, and if they checked the option "Make this chat discoverable," their conversation could be indexed by search engines. While this was opt-in only, many users clicked it without fully understanding the consequences.
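OpenAI has not published the exact implementation, but the standard web mechanism behind a "discoverable" toggle like this is the robots meta tag: a page that includes `<meta name="robots" content="noindex">` asks crawlers to stay away, while a page with no such directive is, by default, fair game for indexing. The sketch below (a hypothetical checker, not OpenAI's code) shows how that signal can be read from a page's HTML:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "robots":
                self.directives.append(attrs.get("content", "").lower())


def is_indexable(html: str) -> bool:
    """True unless the page opts out of indexing via a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return not any("noindex" in d for d in parser.directives)


# A share page that opts out of indexing:
private_page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
# A share page with no robots directive at all -- crawlers treat it as indexable:
public_page = '<html><head><title>Shared chat</title></head></html>'

print(is_indexable(private_page))  # False
print(is_indexable(public_page))   # True
```

The key design point: because "no directive" defaults to "indexable," a share feature must actively emit `noindex` on every page it wants kept out of search results; flipping that off for users who ticked the discoverable box is all it takes for conversations to surface on Google and Bing.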

A quick search using the operator site:chatgpt.com/share revealed over 4,500 conversations surfacing in search results, though this likely underestimates the true number.

OpenAI responded swiftly after the backlash.

The company's Chief Information Security Officer Dane Stuckey announced on X that the feature was merely a "short-lived experiment" and that it was removed within hours. The company began disabling the feature and working with search providers to de-index any exposed conversations as quickly as possible.

But here's the harsh truth: the internet doesn’t forget.

Once something is uploaded, indexed, and cached, it becomes part of the digital bloodstream — nearly impossible to fully erase. Even if OpenAI scrubs its own servers clean, cached versions will persist across search engines, archive tools, and third-party scrapers. And as long as those links are still being shared, they will continue to resurface like digital ghosts.

This incident is a stark reminder: in the online world, sharing is seldom truly private. Features launched with good intentions can unintentionally expose more than anticipated. OpenAI's aim was to highlight insightful ChatGPT exchanges. But in execution, it surfaced raw, intimate, and sometimes sensitive conversations — all without adequate warning or guardrails.

It’s a lesson in product design as much as it is a warning to users. Clarity, visibility, and friction aren’t obstacles to innovation — they’re essential safeguards. Especially when what’s being shared is generated by AI… but comes from deeply human places.

Published: 
02/08/2025