Deepfakes Are One Of The Most Serious AI Crime Threats, Researchers Say

08/08/2020

Fake images are one thing.

While they can certainly fool many people, modern culture has come to understand that images can be manipulated very easily. Adobe Photoshop, for example, has become one of the most commonly installed pieces of software on modern PCs.

This is why an increasing number of people no longer assume that an image they see has not been edited in one way or another.

But the case is different when people are presented with audio or video content.

Many people tend to believe that the audio they hear and the video they see are real, unless they come from movie footage. In fact, audio and video manipulation is becoming increasingly common, and Artificial Intelligence is to blame.

This is why experts have ranked the use of manipulative AI, like deepfakes, as the most worrying AI-enabled crime in terms of its potential applications for crime or terrorism.


According to a report from University College London (UCL):

"[...] fake content would be difficult to detect and stop, and that it could have a variety of aims – from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm."

In the study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL, the researchers identified 20 ways AI could be used to facilitate crime over the next 15 years.

They then asked 31 AI experts to rank them by risk, based on their potential for harm, the money they could make, their ease of use, and how hard they are to stop.

Here, the authors said that deepfake technology was judged to be of the highest concern.

The technology has been a headline-grabbing topic since the term emerged from Reddit in 2017, and it has become more widely used as the underlying technology improves.

According to senior author Professor Lewis Griffin of UCL Computer Science:

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Deepfakes are AI-generated videos of real people doing and saying fictional things.

The technology earns UCL's top spot for two major reasons:

First, deepfakes can be used in a variety of crimes, from discrediting public figures to impersonating people. Second, they're hard to identify and prevent. Automated detection methods are still unreliable, and deepfakes are also getting better at fooling human eyes.

In addition, the researchers fear that the technology can also be used to make people distrust audio and video evidence, which is considered a societal harm.

Deepfakes are mentioned alongside other high-concern AI-enabled threats, such as using driverless vehicles as weapons, helping to craft more tailored phishing messages (spear phishing), disrupting AI-controlled systems, and harvesting online information for the purposes of large-scale blackmail.


The report also mentioned some medium-concern crimes, which included the sale of items and services fraudulently labelled as “AI”, such as security screening and targeted advertising. These would be easy to achieve, with potentially large profits.

Crimes of low concern included burglar bots, which involve small robots that are used to gain entry into properties through access points such as letterboxes or cat flaps. These kinds of threats were judged to be easy to defeat.

First author Dr Matthew Caldwell of UCL Computer Science said:

“People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity."

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

Professor Shane Johnson, Director of the Dawes Centre for Future Crime at UCL, which funded the study, said:

“We live in an ever changing world which creates new opportunities – good and bad. As such, it is imperative that we anticipate future crime threats so that policy makers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur. This report is the first in a series that will identify the future crime threats associated with new and emerging technologies and what we might do about them.”