Adobe Uses AI To Spot Photoshopped Images Faster And More Easily

Adobe Photoshop is a mighty tool for editing images and photos. But that same power can be used to manipulate facts, and that is becoming worrying.

With AI, experts around the world are getting even more worried, because it allows almost anyone to convincingly edit videos and images. The problem escalates as the internet and mobile phones reach more people, and as social media rewards viral posts, letting shocking content anger or disturb users faster than anyone can fact-check it.

Since some of the editing tools are created by Adobe, the multinational computer software company is working on a solution.

By researching machine learning, Adobe has developed an AI that can be used to automatically spot edited and fake pictures.

In its research paper, Adobe shows how machine learning can be used to identify three common types of image manipulation:

  1. Splicing, in which parts of two different images are combined into one image.
  2. Copy-move, also called cloning, in which objects within an image are copied and pasted elsewhere in the same image.
  3. Removal, in which an object is edited out of the image altogether.
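To make the second category concrete, here is a minimal sketch of a classic copy-move detection baseline (not Adobe's method): because a cloned patch duplicates pixel content exactly, hashing every small block of the image and looking for identical blocks at different positions exposes the forgery. The image and patch positions below are synthetic examples.

```python
import numpy as np

def find_duplicate_blocks(img, block=8):
    """Naive copy-move detector: hash every block x block patch and
    report pairs of distinct positions with identical pixel content."""
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y+block, x:x+block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Simulate a copy-move forgery on a random "photo".
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[40:48, 40:48] = img[8:16, 8:16]   # paste a cloned 8x8 patch

matches = find_duplicate_blocks(img)
print(matches)  # the source patch and its clone are the only duplicate pair
```

Real detectors match blocks approximately (cloned regions are usually rescaled or recompressed), but exact block matching shows the core idea.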
(Image: Adobe) Examples of tampered images showing splicing, copy-move and removal manipulations

To spot these kinds of tampering, human digital forensics experts usually look for clues hidden within the layers of an image. When a picture has been edited, digital artifacts are often left behind: for example, inconsistencies in color, brightness and the noise patterns introduced by the camera's image sensor.

These are like stains that keen eyes can spot. But still, humans have flaws.
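One of those "stains" can be demonstrated with a simple noise-residual analysis, a generic forensics idea rather than Adobe's own pipeline: high-pass filtering an image isolates its sensor noise, and a retouched region (here simulated by suppressing the noise in one area) shows a residual variance that no longer matches its surroundings.

```python
import numpy as np

def noise_residual(img):
    """High-pass residual: subtract each pixel's 3x3 local mean.
    Edited regions often show a different residual variance than the rest."""
    img = img.astype(float)
    # 3x3 box blur built from shifted copies; keep only the interior,
    # where the wrap-around of np.roll has no effect.
    s = sum(np.roll(np.roll(img, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return (img - s)[1:-1, 1:-1]

rng = np.random.default_rng(1)
img = rng.normal(128, 20, (64, 64))          # synthetic "sensor noise" image
img[16:48, 16:48] = 128 + (img[16:48, 16:48] - 128) * 0.2  # "edit" suppresses noise

res = noise_residual(img)
inside = res[20:40, 20:40].var()    # inside the tampered region
outside = res[:10, :10].var()       # untouched region
print(inside < outside)  # the tampered region's residual variance is far lower
```

This is exactly the kind of statistical inconsistency that is tedious for a human to quantify but straightforward for a machine.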

"Even with careful inspection, humans find it difficult to recognize the tampered regions," said Adobe senior research scientist Vlad Morariu. "Our method not only detects tampering artifacts but also distinguishes between various tampering techniques."

Using machine learning, Adobe wants to make spotting tampered images both more accurate than human inspection and a lot faster. The AI was trained on a large dataset of edited images to learn the common patterns found in tampered images. According to the company, the AI scored higher in some tests than other similar systems.

"Using tens of thousands of examples of known, manipulated images, we successfully trained a deep learning neural network to recognize image manipulation," Morarium said

(Image: Adobe) Combining features from the RGB image with noise features, RGB-N produces the correct classification for different tampering techniques
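The two-stream idea behind RGB-N can be sketched in a few lines; this is an illustrative approximation, not Adobe's implementation. The noise stream of RGB-N is reported to use SRM high-pass filters (the 5x5 "KV" kernel below is one of them) to turn the image into a noise map, which is then combined with the ordinary RGB evidence.

```python
import numpy as np

# One of the SRM high-pass kernels (the 5x5 "KV" filter) used to
# extract a noise map; treat the details here as an approximation.
SRM_KV = np.array([[-1,  2,  -2,  2, -1],
                   [ 2, -6,   8, -6,  2],
                   [-2,  8, -12,  8, -2],
                   [ 2, -6,   8, -6,  2],
                   [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def conv2d_valid(img, k):
    """Plain 'valid' 2-D convolution via a sliding-window view
    (the KV kernel is symmetric, so flipping it is unnecessary)."""
    win = np.lib.stride_tricks.sliding_window_view(img, k.shape)
    return np.einsum('ijkl,kl->ij', win, k)

rng = np.random.default_rng(2)
gray = rng.normal(128, 15, (32, 32))       # stand-in for one image channel
noise_map = conv2d_valid(gray, SRM_KV)     # noise-stream input
crop = gray[2:-2, 2:-2]                    # align the RGB stream to the same size

# "Two-stream" input: stack the image crop with its noise map as channels,
# mirroring how RGB-N fuses colour evidence with noise evidence.
two_stream = np.stack([crop, noise_map], axis=-1)
print(two_stream.shape)
```

In the actual model, each stream feeds a convolutional network and the features are fused for region classification; the stacking here only illustrates the pairing of the two kinds of evidence.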

The company showcased the project at the CVPR computer vision conference, demonstrating how digital forensics done by humans can be automated by machines in much less time. The research doesn't represent a breakthrough, however, and it isn't available as a commercial product.

But with the project, Adobe wants to play a role in "developing technology that helps monitor and verify authenticity of digital media."

While the research has no direct application in spotting deepfake videos, it's an advance for those who want to spot digital fakes. With facts and fakes increasingly resembling one another, we're heading into a post-truth world.

Fakery is getting more and more sophisticated. By trying to solve this particular problem, Adobe is paving the way for tools that can extract the facts from the growing fictions.

Published: 
23/06/2018