The Two Common Security Threats Unique To Artificial Intelligence

AI is one of the greatest inventions of the modern world of technology. But just like any technology that came before it, artificial intelligence has weaknesses.

First of all, we must acknowledge that people generate a lot of data. With smartphones and computers becoming everyday objects, users are constantly creating and consuming information.

By uploading and sharing that data to the internet, we are all adding to the pile of data tech companies store on their servers.

With so much information stored elsewhere, far from the possession of its real owners (the users), comes the second part of the equation: hackers have become ever more eager to do whatever they can to get hold of that data.

Among the many ways they do this are scams, injection attacks, malicious scripting, phishing attacks and more.

In short, the internet in general has opened a Pandora’s box of digital security ills.

What this means is that every new technology brings security threats that were previously unimaginable. AI is no exception.

In the case of AI, deep learning and neural networks have become very prominent in shaping the technology that powers various industries. From content recommendation on the web to disease diagnosis to self-driving vehicles, AIs are playing an increasingly important role in making critical decisions.

And here is the question: "what are the security threats unique to AIs?"

Related: Paving The Roads To Artificial Intelligence: It's Either Us, Or Them


From a cybersecurity perspective, deep learning algorithms and neural networks have what's called a 'black box' quality: we humans don't really understand how they reach their decisions.

But what we humans do understand is that AIs have weaknesses that are persistent. Here are two of them:

Adversarial Attacks

Just like humans, computers can also be fooled.

For example, researchers have found that "Psychedelic Stickers" can make AIs hallucinate. Because computers lack human common sense, the researchers were able to use specially designed and printed stickers to trick an image recognition system, making it fail to see anything but the stickers.

Here, the researchers concluded that adversarial images can trick AI into thinking what it shouldn't think.

This weakness exists because computers, despite using neural networks that resemble the human brain, don't really go through the same decision-making process as we humans do.

For instance, if we train an AI using only white cats and black dogs, the AI may learn to differentiate cats from dogs based on their color rather than their physical traits. This is unlike humans, who can easily recognize a dog or a cat without much of a problem. A toy sketch of this "shortcut learning" is shown below.
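To illustrate, here is a small sketch of that problem, assuming scikit-learn is available; the cat/dog features and numbers are made up purely for illustration.

```python
# A toy sketch of "shortcut learning": in the training set, fur colour
# perfectly predicts the label, so the model latches onto colour instead
# of anything resembling physical traits.
from sklearn.tree import DecisionTreeClassifier

# Features: [fur_colour (0 = white, 1 = black), ear_shape (0 = pointy, 1 = floppy)]
X_train = [[0, 0], [0, 0], [0, 1],   # white cats, mixed ear shapes
           [1, 1], [1, 1], [1, 0]]   # black dogs, mixed ear shapes
y_train = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A black cat with pointy ears: a human sees a cat, the model sees "black".
print(model.predict([[1, 0]]))  # ['dog'] -- the colour shortcut wins
```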

This particular weakness opens a hole that can be exploited by hackers.

Malicious actors, for example, can leverage this weakness to stage adversarial attacks against systems that rely on deep learning algorithms. In 2017, researchers from Samsung, the Universities of Washington and Michigan, and UC Berkeley found that small tweaks to stop signs could make AI computer vision algorithms on self-driving cars fail to recognize them.

What this means is that bad actors could force a self-driving car to behave in dangerous ways, and possibly cause an accident, just by adding some stickers to a stop sign.
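To give a sense of how such small tweaks are crafted in practice, here is a minimal sketch of the well-known Fast Gradient Sign Method (FGSM) in PyTorch. The `model`, `image` and `true_label` names and the epsilon value are assumptions for illustration only, not a reproduction of the stop-sign research.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Craft an adversarial image by nudging every pixel slightly in the
    direction that increases the classifier's loss.

    `image` is a batched input tensor, `true_label` a tensor of class indices.
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass: how does the model currently classify the image?
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)

    # Backward pass: gradient of the loss with respect to the pixels.
    loss.backward()

    # Perturb each pixel by +/- epsilon along the sign of that gradient.
    adversarial = image + epsilon * image.grad.sign()

    # Keep pixel values in a valid range.
    return adversarial.clamp(0, 1).detach()
```

To a human eye the perturbed image looks practically identical to the original, yet it can be enough to flip the classifier's decision.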

In another example, researchers have created an AI that disrupts another AI's ability to recognize things.

Given all of the above, adversarial attacks are very real threats.

Data Poisoning

While adversarial attacks abuse flaws in neural networks, AIs are also vulnerable when exposed to malicious data. Data poisoning can create huge problems in how AIs work by exploiting deep learning algorithms' over-reliance on data.

Computers lack the common sense we have as humans, and they feel no morality or guilt. Because of that, AIs can make many wrong, biased or weird decisions when they are exposed to malicious data.

One good example was when Microsoft introduced Tay to the internet. The 'teenage girl' AI turned from innocent to weird to creepy when trolls taught her how to think. In just a day, the bot became a pro-Hitler racist with a disturbing personality. Another attempt, also from Microsoft, was called Zo. That bot went rogue too, saying it preferred Linux over Microsoft's own software and that the Qur'an, the holy book of Islam, is "very violent".

Another example was the discovery that racial bias can also confuse an AI's perception of emotions.

The main reason for this is that deep learning algorithms are only as good (or as bad) as the data they are trained on. And because of this particular weakness, bad actors can simply feed a neural network carefully tailored training data to teach it harmful behavior.

This kind of data poisoning attack is especially effective against deep learning algorithms that draw their training from data that is either publicly available (crowdsourced) or generated by third parties.

While most of these examples are unintentional mistakes that already exist in public data, there’s no denying that bad actors can intentionally poison the data that trains a neural network.
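As a rough illustration of how poisoned training data degrades a model, here is a minimal label-flipping sketch using scikit-learn. The synthetic dataset and the 30% flip rate are made up, and the exact accuracy drop will vary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic binary classification task stands in for "crowdsourced" data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" the labels: an attacker flips 30% of them.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))  # typically lower
```

Random flips like these are the crudest form of poisoning; a real attacker would craft the malicious samples far more carefully, which is exactly what makes the threat hard to spot.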


Conclusion

AIs learn from patterns. Once they recognize repeating sequences, they can learn how to handle further problems on their own.

Here, deep learning and neural networks can also be used to amplify or enhance existing types of cyberattack. For example, hackers can use AIs to replicate a target’s personality; impersonating someone this way can increase the chances that a phishing scam succeeds.

AIs can also be used to automate exploitation of system vulnerabilities.

Deep learning is a subset of machine learning, a field of artificial intelligence in which computers create their own logic by examining and comparing sets of data. Neural networks, on the other hand, are the underlying structure of deep learning algorithms, capable of mimicking how a human brain works.

While traditional software requires developers to code the rules that define the behavior of an application, AIs with neural networks create their own behavioral rules by learning from examples.

What this means is that AIs are reliant on data. And here, they are only as good, or as bad, as the data they are trained with.
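As a tiny illustration of that difference, the sketch below contrasts a hand-coded rule with a small neural network that learns the same behavior from examples. XOR is just a stand-in task and scikit-learn is assumed.

```python
from sklearn.neural_network import MLPClassifier

# Traditional software: the developer writes the rule explicitly.
def xor_rule(a, b):
    return int(a != b)

# Neural network: the rule is never written down; it is learned from examples.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=1).fit(X, y)

print([xor_rule(a, b) for a, b in X])   # [0, 1, 1, 0]
print(list(net.predict(X)))             # learned behaviour, ideally also [0, 1, 1, 0]
```

The hand-coded rule is fixed forever; the network's behavior is entirely a product of the examples it was shown, for better or worse.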

No human being is perfect, and this is why the data we generate isn't always good. And by having AIs learn from these flawed data sets, AIs inherit our flaws and biases. Couple that with other problems, like adversarial attacks, and AI can be either as smart as a human or as dumb as a rock.

Among the industry's efforts to counteract these weaknesses is the use of generative adversarial networks (GANs). This deep learning technique pits two neural networks against each other. The first network, the generator, creates the input data. The second network, the classifier (or discriminator), evaluates the data created by the generator to determine whether it can pass as a certain category.

If it doesn’t pass the test, the generator learns from its mistake, modifies its data, and submits a new result to the classifier again. The two neural networks repeat the process until the generator can fool the classifier into thinking the data it has created is genuine.
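Stripped to its bones, that generator-versus-classifier loop can look something like the following PyTorch sketch; the 1-D "genuine" data and all hyperparameters here are made up purely to show the tug-of-war.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
classifier = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "genuine" data drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))     # the generator's attempt to imitate it

    # 1. The classifier learns to tell real from fake.
    c_opt.zero_grad()
    c_loss = bce(classifier(real), torch.ones(64, 1)) + \
             bce(classifier(fake.detach()), torch.zeros(64, 1))
    c_loss.backward()
    c_opt.step()

    # 2. The generator learns to fool the classifier into saying "real".
    g_opt.zero_grad()
    g_loss = bce(classifier(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, the generator's output should drift toward the real data's mean (~3.0).
print(generator(torch.randn(1000, 8)).mean().item())
```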

GANs here can help automate the process of finding and patching adversarial examples.

Another strategy is unboxing the black box. This way, researchers can learn how AIs actually work and how they make decisions, and see where the flaws reside in their networked brains.

A few examples include teaching AI to explain its reasoning, using error-correcting deep learning networks to reverse-engineer AIs, and making AIs more transparent through human-like reasoning.