To most people, nuclear weapons and Artificial Intelligence are two separate and distinct things.
To them, nuclear weapons are the subject of international political debates, the means by which countries showcase their military prowess. AI, meanwhile, is just the smart software inside computers and smartphones that plays games, recognizes people's faces, and sorts content on internet services.
But Microsoft co-founder Bill Gates sees it differently.
When Stanford unveiled its new Institute for Human-Centered Artificial Intelligence, Gates, the keynote speaker, spoke in depth about AI, and about his fears and hopes for the technology.
“We had nuclear weapons and nuclear energy, and so far so good,” he said.
Gates' fear is that AI carries substantial risk: humans may design powerful AI systems with unintended behavior, or deploy them carelessly. Like many experts in the field, he worries that AI could even drive the human species to extinction.
"I am in the camp that is concerned about super intelligence," once said Gates in 2015. "First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern."
"I agree with Elon Musk and some others on this and don't understand why some people are not concerned," Gates said.
In the real world, people are building ever-smarter AI as they develop new ways to let machines teach themselves and mine the deep troves of data produced by public and private sources, as well as by connected devices.
While AI has so far done more good than harm, Gates is trying to imagine the worst.
But if humans develop and use AI with care and responsibility, "It's a chance to supercharge the social sciences, with the biggest being education itself."
Gates also thinks AI can be used to identify promising drugs and speed up the drug-development process, transforming global health.
"I do not believe without machine learning techniques we [would have ever been] able to take the dimensionality of this problem to find the solution."
In one example, he said, AI has been used to learn from 23andMe genetic data to discover that a shortage of the element selenium could be associated with premature births in Africa. That finding led to an initiative to help 20,000 women with the deficiency in the region.
"We expect to see about 15 percent reduction in prematurity, which for Africa as a whole would project out to be about 80,000 lives saved per year," Gates said.
Gates also hopes AI can transform education by giving students personalized instruction from AI teaching assistants.
AI is under heavy development, and researchers at many organizations and companies are working to make the technology even better. There is no stopping them; they are driven by growing market demand and by human curiosity.
The potential risk, however, is that AI development won't be done carefully and responsibly. With serious thought put into international coordination, inter-organizational coordination, and policy, people should be able to ensure that AI is deployed safely and benefits all of humanity.
Both Stanford and Gates play big roles in the field of computing, and both are trying to ensure that technological progress does good rather than harm: nuclear energy, not nuclear weapons.
And as AI progresses, building it and keeping it under control should both be priorities.