Just like anything else, something that makes a sound attracts attention. On the web, anything that can generate "noise" has a chance to create a fuss and spearhead immense virality.
For a long time, the AI field was comparatively quiet.
While researchers and developers worked day and night, most of the buzz the technology created stayed within the field itself, rarely reaching beyond its own audience and realm.
That was the case until OpenAI announced ChatGPT, an AI chatbot that can perform a wide range of tasks, including writing poetry, technical papers, novels, and essays.
Sooner rather than later, the internet was captivated, and the tool even struck fear into Google.
Sam Altman, the CEO of OpenAI, has something to say about this, and he is somewhat worried.
When ChatGPT took the world by storm, Sam Altman didn't expect the product to be this wildly popular.
After ChatGPT became the fastest-growing consumer app in history, with its popularity overloading servers and causing frequent downtime, Altman stressed that the world may not be that far off from potentially scary AI.
And he suggested that ChatGPT may be just the start of that scary future.
So far, humanity has only developed AI to the ANI (Artificial Narrow Intelligence) level, meaning AI products can only do certain things at certain times, and remain far from human-level intelligence.
But with ChatGPT, the path towards AGI (Artificial General Intelligence) is becoming a bit more visible.
At this time, companies are using OpenAI's ChatGPT to write code, handle copywriting and content creation, provide customer support, and prepare meeting summaries.
With more users and more data to process, ChatGPT is only going to get smarter. The AI has shown an ability to think outside the box and to speak as if through "emotions."
It may be only a matter of time before AGI becomes a certainty.
In a blog post, Altman said that the path towards AGI comes with serious risks of misuse, drastic accidents, and societal disruption.
"At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."
"We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it's important that major world governments have insight about training runs above a certain scale."
According to Altman, OpenAI wants to successfully navigate massive risks.
"We can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet. We hope to contribute to the world an AGI aligned with such flourishing."
Altman is worried because AGI may be able to "accelerate its own progress" and "could cause major changes to happen surprisingly quickly."
Since the development of AI is certainly not going to stop and is only going to accelerate, he thinks that "a slower takeoff is easier to make safe, and coordination among AGI efforts to slow down at critical junctures will likely be important."
Because of this, Altman said that "the future of humanity should be determined by humanity," and to do that, "it’s important to share information about progress with the public."
"The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time."