If AI Goes Wrong, 'It Can Go Quite Wrong' And Cause 'Significant Harm To The World'

Sam Altman
CEO of OpenAI, former president of Y Combinator

To some people, AI may seem an obscure topic. But to those keeping an eye on the fast-paced advancement of the technology, AI can seem almost like a 'god', and for many reasons.

For example, the technology can accomplish feats that were previously impossible for even the smartest human beings, and it can accomplish them in a fraction of the time.

With the rapid development of AI technology and everything that comes with it, there should be a way to govern this particular technology.

And Sam Altman, the CEO of OpenAI, suggests that a regulator with the power to grant and take away licenses from companies developing powerful AIs is needed.

Altman, the man behind the powerful GPT-4 AI and ChatGPT, told a U.S. Senate hearing that there are clear dangers if strict safety standards are not in place for the even more powerful AIs likely to be created in the near future.

Sam Altman.

He said that:

"I think we also need rules, guidelines, on what’s expected in terms of disclosure from a company providing a model [...] I am nervous about it."

"My worst fears are that we cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that we want to work with the government to prevent that from happening."

The issue Altman was highlighting is that generative AIs can "hallucinate": the models can make up fake facts and present them as authoritative.

During the internet boom in the U.S., social media companies such as Facebook and Twitter benefited from an exemption known as Section 230, which shields them from litigation over messages posted by their users.

But according to Altman, chatbots like OpenAI's ChatGPT should not benefit from such legal protection, as it would not be "the right framework" for AI.

As the leader of the company that created the generative-AI hype, and that sent even the world's largest tech giants scrambling to build competing products, Altman admitted that ChatGPT is already a threat on its own, especially in industries such as customer service, data analysis, law and education.

"Like with all technological revolutions, I expect there to be a significant impact on jobs […] and I think it will require a partnership between the industry and government, but mostly action by government, to figure out how we want to mitigate that."

His concerns about the technology's potential impact also include serious risks of misinformation.

This is because one of the "areas of greatest concern" is the "ability of these models to manipulate, to persuade, to provide one-on-one interactive disinformation" to users.

As a solution, Altman suggested a UN-style global safety framework similar to that for nuclear research, but said that U.S. leadership would be "critical."

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," he said.

"I would create a set of safety standards, specific tests that a model has to pass before it can be deployed into the world. We would also require independent audits by experts who can say the model is or isn’t in compliance with the stated safety thresholds."

Altman also said that AIs must not be able to replicate themselves amid fears they could "exfiltrate into the wild" outside of human control.

When the 1985-born executive gave his first testimony before Congress, he said generative AIs, which can create text and images in seconds from user prompts and have already surpassed humans in many areas, are facing a "printing press moment" as they are rapidly adopted and used for everyday tasks.

Because of this, "regulatory intervention by governments will be critical" to "mitigate the risks of increasingly powerful models" as these AI products grow more intelligent and more important.

At this time, Altman may not be as famous as Elon Musk or some of the tech executives who came before him, but he is already a prominent figure in Silicon Valley.

After co-founding OpenAI, Altman led the company to create a range of products by experimenting with different AI models.

And ChatGPT was the product that disrupted the whole tech ecosystem once people realized its potential. It quickly became the fastest-growing app in history.

Microsoft, for example, launched a chatbot for its Bing search engine that uses OpenAI's technology. Google has its own, called Bard.

"This is a remarkable time to be working on artificial intelligence, but as this technology advances, we understand that people are anxious about how it could change the way we live," said Altman, who has previously expressed fear of smarter AIs. "We are too."