Artificial Intelligence Should Be Regulated, And Governments Need To Act


As AI usage becomes more widespread, there needs to be a way to control how the technology is used, and regulation is that way.

Google is one of the tech companies focused on developing AI. Its CEO, Sundar Pichai, outlined his opinion in a Financial Times article, writing that deepfakes and “repressive uses of facial recognition” are of great concern. But despite the company's willingness to curb such abuses, Pichai argued that Google and others cannot simply build the technology and leave it unregulated, saying that:

"Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone."

"These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone."

This is why Pichai suggested that governments need to regulate AI going forward.


It was back in 2018 that Google published its AI Principles, which outline the company's ethical stance in the field.

The guidelines were published after Google faced severe backlash for pursuing Project Maven, a contract to develop AI tools for the U.S. military.

These principles, however, say little about the responsibility Google takes for the AI models and algorithms it develops. This is no coincidence: deepfakes are not a vertical Google or Alphabet can monetize, and over-regulating AI would hurt Google and others.

Still, the company has worked on tools to test AI for fairness.

Pichai added that:

"We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together."

"AI has the potential to improve billions of lives, and the biggest risk may be failing to do so. By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do."

In short, Pichai noted that AI has many beneficial uses. But when it comes to the technology's downsides, no single company or industry can tackle the problem on its own. This is why governments should step in and help.

Governments are meant to take care of people. But while Pichai calls for government regulation of AI, it's worth noting that government agencies can take quite a long time to formulate rules and regulations.

Given the rapid pace of AI development and adoption, by the time those rules arrive it may already be too late to address certain issues.

AI-generated fake news and deepfakes are just two of many concerns. Worse possibilities include, but are not limited to, mass surveillance conducted without people's consent in violation of their privacy, and technology that can be weaponized to harm or kill people.

If governments can create independent watchdogs made up of experts in the field to keep a close eye on advancements in AI, the technology can be carefully controlled. This in turn would allow governments to develop rules for how AI can be used, both in the private sector and by the world's governments.

The downside of giving governments the power to control AI is that it would make the most powerful countries even stronger.

An aggressive nation, for example, would have even more power to act aggressively and launch offensives. And if that control falls to countries willing to break their own laws, they could develop the same kinds of smart tools and weaponry as countries with no such laws at all.

But AI is still a developing field with many questions unanswered, and that makes the technology something to fear; humans always fear what they don't understand. So until we truly understand artificial intelligence and know how to prevent the worst-case scenarios, rules and regulations should be in place.

Nevertheless, the sooner governments can act, the better.