Why AI Won't Solve Cybersecurity Issues

The world keeps evolving, and every evolution takes technology to the next step.

AI is regarded as the answer to many problems. As computers become smarter and able to decide on their own, automation can offload many daunting human tasks, including those in cybersecurity.

AI combines technologies that have matured alongside computing power, storage, cloud networks, mathematics, IT, and so forth. Some have even said that the marriage between cybersecurity and artificial intelligence is a match made in heaven.

But the reality is far from that.

AI

To be clear, AI definitely has a few clear advantages for cybersecurity. Against malware that self-modifies just like a biological virus, for example, it would be next to impossible to develop a quick response without using AI.

Properly trained machine intelligence is also useful in other security sectors; banks and credit card providers, for instance, are always working to improve fraud detection and prevention.
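As an illustration, here is a minimal sketch of the anomaly-detection approach such providers often take, assuming scikit-learn is available; the transaction features, numbers, and thresholds are made up for the example:

```python
# A minimal sketch of anomaly-based fraud detection.
# Assumes scikit-learn; all features and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical transaction features: [amount, hour_of_day, distance_from_home_km]
normal_transactions = np.column_stack([
    rng.normal(50, 15, 1000),   # typical purchase amounts
    rng.normal(14, 3, 1000),    # daytime activity
    rng.normal(5, 2, 1000),     # close to home
])

# Train on (mostly) legitimate history; the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_transactions)

# Score new transactions: -1 means anomalous (possible fraud), 1 means normal.
new_transactions = np.array([
    [45.0, 13.0, 4.0],      # ordinary purchase
    [950.0, 3.0, 800.0],    # large amount, 3 a.m., far from home
])
print(model.predict(new_transactions))  # e.g. [ 1 -1 ]
```

The idea is simply that the model learns the shape of normal behavior and flags whatever falls outside it, which is exactly the kind of pattern-matching that is hard to capture with hand-written rules.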

But in reality, AI is just like every technology that came before it and will come after it: it has limits.

AI To Fool AI

Back to self-modifying malware. If defenders use AI to detect threats better, they must realize that attackers out there can also use AI to improve their attacks.

When a company uses AI to detect threats with better accuracy and prevent breaches, attackers can use AI to create smarter malware that tricks that AI-powered security and avoids detection.

In short, while AI can indeed be used to enhance security, AI can also be trained to trick another AI.
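One well-known form of this trick is the adversarial example. The sketch below, assuming PyTorch and using a toy stand-in model in place of a real detector, applies the fast gradient sign method (FGSM): it nudges an input in the direction that most increases the detector's loss, often just enough to flip the verdict:

```python
# A minimal sketch of the fast gradient sign method (FGSM).
# `detector` is a toy stand-in for a pretrained model that classifies
# an input as benign (0) or malicious (1).
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

sample = torch.randn(1, 64)     # stand-in for a malware feature vector
true_label = torch.tensor([1])  # the detector should say "malicious"

# Compute the gradient of the loss with respect to the *input*, not the weights.
sample.requires_grad_(True)
loss = loss_fn(detector(sample), true_label)
loss.backward()

# Step the input in the direction that increases the loss; epsilon bounds the change.
epsilon = 0.1
adversarial = sample + epsilon * sample.grad.sign()

# The perturbed sample is nearly identical, yet the verdict may now flip.
print(detector(sample).argmax(dim=1), detector(adversarial).argmax(dim=1))
```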

What's more, companies that depend on automated, AI-powered systems may place more trust in them than is warranted. If that is the case, once attackers get past the company's AI security, it is easy for them to do whatever they want unnoticed.

By the time the attackers are in and the malware is finally detected, it is far too late: the system has been compromised and the damage may already be done.

Power Issues

Devices are becoming more powerful, but that doesn't mean all devices are created equal. While computers, servers, and mobile devices can be built with high specifications, Internet of Things (IoT) devices are generally low powered.

If attackers can deploy their attacks at this level, chances are the AI won't be able to help.

The reason is that AI needs a lot of computing power, memory, and data to operate properly. Without special configuration and cloud support, AI-based security simply cannot run on these IoT devices.

If those devices rely on cloud infrastructure for security, they must first send data to the cloud for processing before the AI can respond. By then, it may already be too late. At best, AI might help detect that something is wrong before someone loses control of the device.
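To make that round trip concrete, here is a minimal sketch of what a low-powered device has to do when detection lives in the cloud; the endpoint URL and payload are hypothetical, and the third-party `requests` library is assumed:

```python
# A minimal sketch of cloud-offloaded detection from an IoT device.
# The endpoint URL and payload are hypothetical.
import time
import requests

SENSOR_READING = {"device_id": "thermostat-7", "packets_per_sec": 4200}

start = time.monotonic()
try:
    # The device cannot run the model itself, so it ships data to the cloud...
    response = requests.post(
        "https://security.example.com/api/v1/analyze",  # hypothetical endpoint
        json=SENSOR_READING,
        timeout=5,
    )
    verdict = response.json().get("verdict", "unknown")
except requests.RequestException:
    # ...and if the network is down, there is no AI protection at all.
    verdict = "unreachable"

elapsed = time.monotonic() - start
print(f"verdict={verdict} after {elapsed:.2f}s")  # the attack may be over by now
```

Every detection now costs a network round trip, and losing connectivity means losing the AI entirely.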

Security Doesn't Know The Unknown

AI has shown a lot of promise in controlled environments, the places where the system is built, trained, and operated. But when it is let loose into the wilderness of the real world, where it meets conditions it has never seen and that its human creators never accounted for, things go wrong.

There is always the situation where an employee with access to corporate controls logs in over an unsecured internet connection and gets hacked because of it.

What's more, actual computation is usually divided into layers of abstraction, where each layer has its own logic.

When a person wants to log in to their account, they type something or do something for authorization. When a key on the keyboard is pressed, or an authentication sensor is triggered, a signal is sent to the controller. The device then executes code, other components go to work, and data travels to a data center and back through communication protocols.

Here, things get complicated. Computation is like a cake with many layers, and understanding exactly how all those layers work together is not easy. The more layers the cake has, the more sophisticated the system, and the more flaws it can hide.
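A toy model makes the point; every layer name and check here is illustrative, not a real protocol:

```python
# A toy model of layered computation: a login passing through abstraction layers.
# All layer names and checks are illustrative.
def hardware_layer(keystrokes: str) -> str:
    return keystrokes                     # a hardware keylogger could sit here

def driver_layer(raw_input: str) -> str:
    return raw_input.strip()              # a compromised driver could alter it

def protocol_layer(credential: str) -> dict:
    return {"credential": credential, "encrypted": True}  # or a weak TLS config

def application_layer(packet: dict) -> bool:
    return packet["encrypted"] and packet["credential"] == "hunter2"

# The login only works if EVERY layer behaves correctly.
ok = application_layer(protocol_layer(driver_layer(hardware_layer("hunter2"))))
print("login accepted" if ok else "login rejected")
```

The login succeeds only if every layer behaves, and each layer is a separate attack surface that an AI watching only the application layer never sees.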

And since employees are, after all, human, they can fall for social engineering tricks.

In this example, even if the employee already uses two-factor authentication, opening email attachments or clicking unknown links without hesitation means nothing can guarantee the safety of the data.

The chances of corporate hacks are even higher in companies with a BYOD (Bring Your Own Device) policy.

There is no effective way to deploy machine learning in situations like these. This is why companies struggle to introduce AI into cloud-based systems that offer only apps, with no corporate access controls or logs.

Conclusion

AI indeed gives companies new capabilities. It changes security from preventive to predictive. However, AI does the exact same thing for those trying to exploit cyber-vulnerabilities. With AI, hackers get not only smart automation at their disposal, but also a powerful way of gathering information about both companies and specific employees.

What the cybersecurity field knows is that AI-powered security solutions have great qualities, but companies need a mature cybersecurity setup to take advantage of them.

AI can be used to detect malware or an attacker, but it is difficult to prevent malware from spreading through company systems, because sophisticated computation always has loose ends. There are also the risks of late updates, unpatched OS flaws, and many more.

And let's not forget the age-old problems humans have always had: lack of control, lack of monitoring, and lack of understanding of potential threats.

So, at least while AI is still young, it does help cybersecurity, but it is not a game changer.