Intelligence Is 'A Fundamental Property Of Matter,' And 'Humans Aren't Special'

Sam Altman
CEO of OpenAI, former president of Y Combinator

The world has been captivated by AI, and OpenAI is at the center of the hype. Since the launch of ChatGPT, the company has been on the fast track of AI development, like it or not.

After introducing ChatGPT, OpenAI managed to wow everyone who used the product. Widespread adoption and real enthusiasm from users soon followed.

Sam Altman, the co-founder and CEO of OpenAI, could hardly believe it himself. Just like everyone else, he is astonished by the AI's capabilities.

In an interview with the Economic Times during his business visit to India, Altman said:

"I grew up implicitly thinking that intelligence was this, like really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter [...]

"Even if, like, humans aren't special in terms of intelligence, we are incredibly important."

Sam Altman, in India.
"We are on an exponential curve, truly [...] we have an algo that can genuinely and truly learn [...] and it gets predictably better with scale."

"The rate of progress in coming years is going to be significant."

And concerning the hype, Altman cannot deny that his team's creation has become one of the fastest-growing apps in the world, ever.

ChatGPT, powered by the sophistication of generative AI, will only get better with time, opening up possibilities never encountered before.

"In two generations, we can kind of adapt to any amount of labor market change and there are new jobs and they are usually better. That is going to happen here too. Some jobs are going to go away. There will be new better jobs that are difficult to imagine today."

As for the regulations many people have called for, including Altman himself, he said that smaller companies shouldn't worry.

Altman, who once said that he is afraid of ChatGPT, believes that only companies big and influential enough should be regulated, at least for now.

"We have explicitly said there should be no regulation on smaller companies. The only regulation we have called for is on ourselves and people bigger."

And when it comes to who should regulate, Altman believes it is up to the world to come together.

"I think the world can come together. This is an existential risk. If the governments cannot, we will ask the companies to do it."

But what worries him the most is the chance that he and his team have already done something wrong.

"What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT. That maybe there was something hard and complicated in there (the system) that we didn't understand and have now already kicked it off."

Altman's visit to India came after numerous tech leaders and government officials raised concerns about the pace at which AI platforms are being developed.

Back in March, a group of tech leaders signed an open letter from the Future of Life Institute warning that powerful AI systems should be developed only once there is confidence that their effects will be positive and their risks manageable.

The letter called for a six-month pause in the training of AI systems more powerful than GPT-4, the model behind ChatGPT.