Computers already surpass humans in logic and mathematics, performing calculations at superhuman speeds.
They have better memories, and can perform a widening range of high-value functions that humans can't.
They're not affected by the emotions, feelings, wants, and needs that often cloud the judgement of mere mortals. They don't require sleep, and they can be fed huge amounts of data around the clock and process it tirelessly.
Bugs aside, computing capacity can be scaled in a way that human capacity can't.
Computers enhanced with AI are already affecting a growing part of people's daily lives. With that in mind, the technology has the potential to make the world a healthier, more efficient, and overall better place. But when it comes to the power of computation and security, there are risks that need to be addressed.
Mariarosaria Taddeo, Deputy Director of the Oxford Internet Institute's Digital Ethics Lab, believes that people can mitigate those risks by building strong regulation around AI ethics, she told The Next Web.
One of her responsibilities at the Oxford Internet Institute is to provide guidance on the ethical design, development, and deployment of digital technologies.
She believes that ethics can help make AI a good and reliable technology that also benefits businesses and future innovation.
"In turn, this can hinder innovation," she said. "Ethics, when embraced at the beginning of any design process, can help us to avoid this path, limit risk, and to make sure that we foster the ‘right’ innovation."
As researchers, organizations, and governments have come to understand AI's future benefits, a growing number of them have published their own AI ethics guidelines.
But according to Taddeo, writing guidelines should be a continuous process.
"This is because both technologies and societies change, and these changes may pose new ethical risks or new ethical questions that need to be addressed," she said, adding: "Ethics — especially digital ethics — should be seen more as a practice than as a thing."
One of the principles often included in ethical guidelines for AI is "trustworthy AI".
But trust should be earned, not given: AI has to work its way up to earning humans' trust.
And that is exactly why the guidelines themselves must be an ongoing process, revisited as the technology and society change.
Not to mention that as AI gets smarter, the last thing people want is a rogue AI.
It's either them or us.
Like any programmable technology, AI can be attacked and manipulated, even without the user knowing anything about it. It can also have bugs and flaws. That makes "trust and forget" a very dangerous approach.
"So when considering how to use AI as a force of good, we should also ask the question as to what kind of societies we want to develop using this technology," explained Taddeo.
"This is even more true when considering digital technologies,” she said. “We shape AI and then AI returns to give shape to us. The question is what kind of shape we want to take and how we can use AI to take us there."