The AI field was once dull and quiet. While researchers and developers worked day and night, most of the buzz the technology created stayed within the field itself, rarely reaching beyond its own audience.
That changed when OpenAI announced ChatGPT, an AI chatbot that can perform a wide range of tasks, from writing poetry and technical papers to novels and essays.
What followed was an arms race, with tech companies large and small competing to build the best AI, each aiming for supremacy.
That has been the hype.
But what's less talked about is how ever-advancing AI technology could harm humanity.
Dario Amodei, the CEO of Anthropic, who spearheaded Claude AI with his team of former OpenAI employees, is worried about the timeline toward AI superintelligence.
In a lengthy 5.5-hour conversation with Lex Fridman, Amodei said:
"Let’s say I’m someone who I have a PhD in this field, I have a well-paying job. There’s so much to lose. Even assuming I’m completely evil, which most people are not, why would such a person risk their life, risk their legacy, their reputation to do something truly, truly evil? If we had a lot more people like that, the world would be a much more dangerous place. And so my worry is that by being a much more intelligent agent, AI could break that correlation."
During the conversation, in which the two discussed scaling, AI safety, regulation, and plenty of deeply technical details about the present and future of AI and humanity, Amodei argued that smart people tend to already have what they want.
Because of that, in most cases, such people will not put humanity at risk: they don't want to jeopardize what they have or what they have achieved.
AI can change this as the technology gets smarter.
If, for example, autonomous AI agents are let loose on their own, with fewer restrictions than they've had in the past, will they still do what they're designed to do?
" [...] they’re on a long enough leash. Are they doing what we really want them to do?" Amodei argues.
When AI becomes smarter and smarter, with intelligence that surpasses humans', that is the point at which humanity will have no clue what these systems are doing, or what they want.
Read: Artificial General Intelligence, And How Necessary Controls Can Help Us Prepare For Their Arrival
Fortunately, humans aren't dumb.
"I don’t think there’s any big thing we’re missing. I just think we need to get better at controlling these models. And so these are the two risks I’m worried about. And our responsible scaling plan, which I’ll recognize is a very long-winded answer to your question."
AI as the cause of catastrophe has long been a staple of science fiction.
One famous example is the Terminator franchise, which depicts an AI that sees humanity as a threat, triggers doomsday, and enslaves humans. Director James Cameron is scared of such a scenario himself, saying he has been warning people since 1984 but "you didn't listen."
A more realistic fear in the current world, however, is AI that controls information in ways humans could never achieve alone. The danger comes when it's granted privileges beyond its original design.
Amodei's view of AI as potentially too powerful for humanity to handle makes him look like a pessimist, or a "doomer."
But in a post on his website titled "Machines of Loving Grace," Amodei said that he's actually quite the opposite.
"Here's the links for my conversation with @DarioAmodei and Anthropic team:
YouTube: https://t.co/5HSCwyUtsT
Transcript: https://t.co/KcYEqResol
Podcast: https://t.co/uxqXcfWUje"
— Lex Fridman (@lexfridman) November 11, 2024
His approach includes:
- Maximize leverage. "The basic development of AI technology and many (not all) of its benefits seems inevitable (unless the risks derail everything) and is fundamentally driven by powerful market forces. On the other hand, the risks are not predetermined and our actions can greatly change their likelihood."
- Avoid perception of propaganda. "AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides."
- Avoid grandiosity. "I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation."
- Avoid “sci-fi” baggage. "Although I think most people underestimate the upside of powerful AI, the small community of people who do discuss radical AI futures often does so in an excessively 'sci-fi' tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes)."