Twitter Explores Ways To Fix Its Biased AI-Powered Image Cropping Tool

Image: four Twitter crop previews, three showing white faces, one showing a Black face

AI is only as good as the data it learns from. And it seems that Twitter has been using a biased data set, just like almost everyone else in the industry.

It started when a number of Twitter users tweeted about how Twitter's image cropping tool seemed to be biased towards fair-skinned people, giving less importance to people of color.

For example, users who posted uncropped images containing both light- and dark-skinned people would often see that Twitter put the light-skinned person in the preview.

Twitter initially defended its algorithm, saying that it had tested the tool for bias at the time and found no “evidence of racial or gender bias.”

But seeing that the problem wasn't going away, Twitter added: “It’s clear from these examples that we’ve got more analysis to do.”

"We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate."

In a blog post, Twitter wrote:

"We’re always striving to work in a way that’s transparent and easy to understand, but we don’t always get this right. Recent conversation around our photo cropping methods brought this to the forefront, and over the past week, we’ve been reviewing the way we test for bias in our systems and discussing ways we can improve how we display images on Twitter."

Twitter explained that its machine-learning algorithm relies on a measure called saliency.

In other words, the algorithm behind Twitter's cropping tool selects the part of an image it considers most noticeable and important, and crops out the rest. It does this based solely on a prediction of where people will look first in an image.
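Twitter has not open-sourced the cropping code at issue, but the general idea of saliency-based cropping can be sketched as follows. This is an illustrative assumption, not Twitter's implementation: `saliency_crop` is a hypothetical name, and a real system would derive the per-pixel scores from a neural saliency model rather than a precomputed grid.

```python
def saliency_crop(saliency, crop_h, crop_w):
    """Pick the crop window whose total saliency score is highest.

    `saliency` is a 2-D grid (list of lists) of per-pixel importance
    scores, of the kind a saliency model might produce. The crop keeps
    the region predicted to draw the eye and discards the rest.
    """
    h, w = len(saliency), len(saliency[0])
    best_score, best_pos = -1.0, (0, 0)
    # Slide the crop window over the image, scoring each position
    # by the sum of the saliency values it contains.
    for top in range(h - crop_h + 1):
        for left in range(w - crop_w + 1):
            score = sum(sum(row[left:left + crop_w])
                        for row in saliency[top:top + crop_h])
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos  # (top, left) corner of the chosen crop

# Example: a tall "image" whose only salient pixel sits near the bottom.
sal = [[0.0] * 4 for _ in range(6)]
sal[5][2] = 1.0
print(saliency_crop(sal, 3, 3))  # → (3, 0): the crop hugs the salient region
```

The bias complaint maps directly onto this sketch: if the model behind `saliency` systematically scores lighter faces higher, the highest-scoring window, and therefore the preview, will systematically favor them.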

Twitter said that it didn't see any bias.

But since people have different tastes and preferences, this kind of machine learning has the “potential for harm.”


Because of this, Twitter is giving the control back to users.

"We are prioritizing work to decrease our reliance on ML-based image cropping by giving people more visibility and control over what their images will look like in a Tweet. We’ve started exploring different options to see what will work best across the wide range of images people Tweet every day. We hope that giving people more choices for image cropping and previewing what they’ll look like in the Tweet composer may help reduce the risk of harm."

So going forward, Twitter aims to be more transparent about how its AI works and is committed to following the “what you see is what you get” principle of design, meaning that the image users see in the preview is what it will look like in the tweet.

"There may be some exceptions to this, such as photos that aren’t a standard size or are really long or wide. In those cases, we’ll need to experiment with how we present the photo in a way that doesn’t lose the creator’s intended focal point or take away from the integrity of the photo," Twitter explained.

Bias in AI and machine-learning technology is already an industry-wide issue.

Bias is something that should be addressed to create smarter AI, as Kay Firth-Butterfield, Head of AI and ML at the World Economic Forum, once said. But since AI biases often stem from human-sourced biases, eliminating them can be quite difficult.

This is why the issue at Twitter was, in a way, to be expected.

"We’re aware of our responsibility, and want to work towards making it easier for everyone to understand how our systems work. While no system can be completely free of bias, we’ll continue to minimize bias through deliberate and thorough analysis, and share updates as we progress in this space."

"There’s lots of work to do, but we’re grateful for everyone who spoke up and shared feedback on this. We’re eager to improve and will share additional updates as we have them."

Published: 01/10/2020