The world has been captivated, and OpenAI is at the center of the AI hype. Like it or not, the company has been on the fast track of AI development ever since.
After introducing ChatGPT, OpenAI managed to wow everyone who used the AI product. Soon after, there was widespread adoption and real enthusiasm from users.
Sam Altman, the co-founder and CEO of OpenAI, couldn't quite believe it himself; just like everyone else, he is astonished by the capability of the AI.
In an interview with the Economic Times during his business visit to India, Altman said:
"Even if, like, humans aren't special in terms of intelligence, we are incredibly important."
"The rate of progress in coming years is going to be significant."
As for the hype, Altman cannot deny that his team's creation has become one of the fastest-growing apps in the world, ever.
ChatGPT, powered by the sophistication of generative AI, will only get better with time, opening possibilities never before encountered.
And as for the regulations that many people, including Altman himself, have called for, Altman said that smaller companies shouldn't worry.
Altman, who once said that he's afraid of ChatGPT, said that only companies that are influential and big enough should be regulated, at least for now.
And when it comes to regulation, he says it's up to the world to shape it democratically.
great conversation with @narendramodi discussing india's incredible tech ecosystem and how the country can benefit from ai. really enjoyed all my meetings with people in the @PMOIndia. pic.twitter.com/EzxVD0UMDM

— Sam Altman (@sama) June 9, 2023
But what worries him the most is the chance that he and his team got it wrong.
Altman visited India after numerous tech leaders and government officials had raised concerns about the pace at which AI platforms are being developed.
Back in March, a group of tech leaders signed an open letter from the Future of Life Institute warning that powerful AI systems should be developed only once there is confidence that their effects will be positive and their risks manageable.
The letter called for a six-month pause on the training of AI systems more powerful than GPT-4, the model behind ChatGPT.