AI Singularity Happens When 'Magic Intelligence' Is 'In The Sky'

Sam Altman
CEO of OpenAI, former president of Y Combinator

Humanity is progressing, and progressing fast. That is because more advanced societies have the tools and know-how to progress faster than their less advanced ancestors did.

In short, modern humans can accomplish much more in a year than our ancestors were capable of doing in a decade.

And technology itself is evolving ever faster as time goes on.

Artificial intelligence is the future of technology, and it has become one of the most hotly debated topics of our time.

As computers become smarter and increasingly able to make decisions on their own, society is steadily shifting toward a world where humans are no longer the most intelligent beings.

Read: Transitioning To AGI Is Perhaps The Most Important, And 'Scary' Project In Human History

Sam Altman.

For a long time, the AI field was rather dull. The ripples the technology made barely reached beyond its own realm, captivating few people besides researchers and geeks.

But thanks to generative AI, progress in the field has been given a boost.

And in the race toward the AI singularity, OpenAI, riding the hype-fueled trend, arguably knows the field better than pretty much anyone else in the industry.

Pop culture has long imagined a hypothetical future in which AI has a mind of its own, becomes conscious, and sees humans as rivals.

And because of how well OpenAI has positioned itself in the business, the company may know better than most whether the AIs of the future will turn against humanity.

Sam Altman, CEO and co-founder of OpenAI, refers to AGI as "magic intelligence in the sky."

OpenAI explains this by suggesting that "intelligence" is a philosophical question, not a technical one.

So if one asks whether ChatGPT is intelligent, the answer depends on who is asked.

Read: Paving The Roads To Artificial Intelligence: It's Either Us, Or Them

Being knowledgeable doesn't always mean being smart. What matters is how that knowledge is put to good use.

A person may know a lot of things, but if they cannot apply that knowledge to solve real-life problems, the knowledge is useless.

In other words, it is not enough to simply know things; one must also be able to use that knowledge effectively. And this applies to AI as well.

While the gap between what a human can do and what AI is capable of is narrowing, transforming an AI from artificial narrow intelligence (ANI) into AGI requires more than just data.

To advance further and become smarter, an AI must be able to reason, and that is not easy.

"There’s a long way to go, and a lot of compute to build out between here and AGI," said Altman. "Training expenses are just huge."

AGI is so close and yet so far.

Read: AGI Is When AI Requires Less Training Data, But More On 'Reasoning Capabilities'