How AI Poses Malicious Threats When It Turns Its Back On Humans

There are two sides to every coin. To some, Artificial Intelligence (AI) could dramatically improve our lives in the future. But to others, it is paving the way to the end of humanity.

The debate around AI warns that the malicious use of AI is a "clear and present danger" to society. While we're not there yet, as the future is still unwritten, the discussion about the technology has pitted some of the greatest minds in the world against each other at opposite ends of the spectrum.

While AI, like just about any technology out there, has undoubted benefits, it also has drawbacks.

For AI, people see some potential problems ahead. Why? Because AI has become integral to many of the technologies we use every day. To some extent, it has even become smart enough to surpass human intelligence at certain tasks.

Related: Paving The Roads To Artificial Intelligence: It's Either Us, Or Them

Recognizing the possible problems, security and technology experts suggest that unless preparations are made against the malicious use of the technology, AI-enabled cybercrime will rapidly increase in the years to come.

One of the most important concerns is that AI can dramatically lower the cost and time of certain attacks by allowing bad actors to automate tasks that previously required human labor. In this way, AI adds a new dimension to existing threats.

With AI capable of impersonating humans, mimicking public figures, or faking events to create fake news, it can be used by malicious actors to manipulate public opinion. For example, AI bots can be used to distort the news, social media and elections, and AI can be used to hijack drones and autonomous vehicles.

Engaging chatbots can be created to phish for information and trick people into revealing sensitive data.

We might also accidentally create a super-intelligent AI and forget to program it with a conscience, or build AI judges that put innocent people in jail because of racial biases in their training data.

So we now live in a world where AI can be misused to threaten digital, physical and political security by enabling large-scale, finely targeted, highly efficient attacks.

There should be ways to design software and hardware that make them less hackable, and laws and regulations that can work alongside those efforts.

Policymakers, for example, should work with researchers and companies to understand the possible future impact of this technology, acknowledging that it can indeed be used for malicious deeds. Developers should also be more proactive and mindful of how their work could be misused.

There is no doubt that AI can be a good friend, as we have seen many times, helping humans in their work and aiding in research and development.

But on the other hand, there is a chance that our AI creation will turn its back on us if it outsmarts us before regulations are in place or we are ready.

Related: The Greatest Risk We Face As A Civilization, Is Artificial Intelligence