There is one main reason why AIs like ChatGPT are let loose for the public to use for free.
Although people living in the modern world of technology generate a lot of data, and that data is abundant, there never seems to be enough of it for everyone. AIs like ChatGPT are trained on humongous data sets, and the more they learn from them, the better they become.
By allowing users to interact with the AI and engage in any topic imaginable, OpenAI retains everything users share with it, to use as further training material.
And here, Samsung employees have learned this the hard way, after accidentally leaking top-secret Samsung data: the employees shared confidential information while using ChatGPT for work.

Just like an increasing number of people at companies and organizations around the world, the Samsung employees in question were using ChatGPT to help them with their work.
In this particular case, the workers were part of Samsung's semiconductor division.
According to The Economist Korea, there were three separate instances in which Samsung employees unintentionally leaked sensitive information to ChatGPT.
In the first instance, an employee pasted confidential source code for a Samsung product into ChatGPT's chat box to check it for errors. In the second, another employee shared code with ChatGPT and "requested code optimization." And in the third, an employee shared a recording of a meeting so it could be converted into notes for a presentation.
These three instances put the confidential information out in the wild for ChatGPT to feed on.
It's worth noting that all the incidents happened less than three weeks after Samsung lifted a ban on employees using ChatGPT.
The ban was originally intended to protect company data, but it was lifted on March 11 to enhance productivity and keep staff engaged with the world's latest tech tools and trends.
Because of this, Samsung's policy did allow employees to use the third-party software for help, and the employees didn't violate any of its rules.

While OpenAI allows ChatGPT users to opt out of data sharing, Samsung Electronics sent out a warning to its workers about the potential dangers of leaking confidential information in the wake of the incidents, saying that such data is impossible to retrieve once shared.
"As soon as content is entered into ChatGPT, data is transmitted and stored to an external server, making it impossible for the company to retrieve it," said the company in an internal memo.
"Your conversations may be reviewed by our AI trainers to improve our systems," ChatGPT's FAQ states.
In fact, ChatGPT's data policy clearly states that it uses user input as data to train its models, and ChatGPT's usage guide explicitly warns users not to share sensitive information in conversations, because it is "not able to delete specific prompts."
In the semiconductor industry, where competition is fierce, any sort of data leak could spell disaster for the company in question.
Instead of immediately banning employees from using ChatGPT again, the Samsung Semiconductor division of the South Korean conglomerate instead took "emergency measures" by limiting ChatGPT's upload capacity to 1024 bytes per person.
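A cap like that could be enforced with a simple client-side check before a prompt ever leaves the corporate network. The sketch below is purely illustrative and not Samsung's actual implementation; the function name and limit constant are assumptions, and the only source-backed detail is the 1024-byte figure. Note that the size is measured in encoded UTF-8 bytes rather than characters, since multi-byte text (such as Korean) counts for more than one byte per character.

```python
MAX_UPLOAD_BYTES = 1024  # per-person cap reportedly imposed by Samsung


def check_prompt_size(prompt: str, limit: int = MAX_UPLOAD_BYTES) -> bool:
    """Return True if the prompt fits within the byte budget.

    The prompt is measured in UTF-8 bytes, not characters, so
    multi-byte characters consume more of the budget.
    """
    return len(prompt.encode("utf-8")) <= limit


# A short question fits easily...
print(check_prompt_size("Optimize this loop, please."))  # True
# ...but a large pasted source file would be rejected.
print(check_prompt_size("x" * 2000))  # False
```

Such a limit doesn't prevent leaks outright, but it makes it impractical to paste an entire source file or meeting transcript in one go.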
"If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network," the company reportedly told employees.

In the meantime, the company is investigating the people involved in the leak, and is also considering building its own internal AI chatbot to prevent future embarrassing mishaps.
The leaks highlight both the widespread popularity of the AI chatbot among professionals and how often its privacy implications are overlooked.
With people interacting with the chatbot for homework, casual talks, problem solving, work-related issues and more, OpenAI is sucking up sensitive data from its millions of "willing" users like there is no tomorrow.
The leak is a real-world example of scenarios privacy experts have been concerned about.
Other scenarios include sharing confidential legal documents or medical information for the purpose of summarizing or analyzing lengthy text, which might then be used to improve the model.
Later, Samsung reportedly imposed an in-house ban on generative AI tools.
Samsung apparently sent out a memo banning the use of ChatGPT to one of its biggest divisions, stating that the company is concerned about how AI services like ChatGPT and Google Bard store user data. The company also noted that information fed to AI platforms is stored on external servers, making it difficult to retrieve and delete, meaning it could end up being disclosed to other users.
"Interest in generative AI platforms such as ChatGPT has been growing internally and externally," Samsung told its employees. "While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI."
As per Samsung’s revised policy, its employees from its most important divisions are barred from using generative AI products on their phones, tablets, computers, and the company’s internal network.
"We ask that you diligently adhere to our security guidelines and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” Samsung told staff in the memo.