AI is only as good as the data it learned from. And it seems that Twitter has been using a biased data set, just like almost everyone else in the industry.
It started when a number of Twitter users tweeted that Twitter's image-cropping tool appeared to be biased toward fair-skinned people, giving less prominence to people of color.
For example, users who posted uncropped images containing both light-skinned and dark-skinned people would often see Twitter feature the light-skinned person in the preview.
Twitter initially defended its algorithm, saying that it had tested the tool for bias at the time and found no "evidence of racial or gender bias."
But seeing that the problem wasn't going away, Twitter conceded: "It's clear from these examples that we've got more analysis to do."
"We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate."
"We saw your Tweets about the harm caused by how images are cropped on Twitter. Today we're sharing how we test for bias in our systems, and how we plan to rely less on auto-cropping and give you more choice in how images appear in Tweets: https://t.co/tiSreeoGOA"
— Twitter Support (@TwitterSupport) October 1, 2020
In a blog post, Twitter explained that its machine-learning algorithm relies on a measure called saliency.
In other words, the algorithm behind Twitter's cropping tool predicts which part of an image people are most likely to look at first, keeps that noticeable, important region, and crops out the rest.
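To illustrate the idea, here is a minimal, hypothetical sketch of saliency-based cropping. This is not Twitter's actual code; the saliency map is a toy grid of scores, and the crop simply centers on the highest-scoring pixel.

```python
# Hypothetical sketch of saliency-based cropping (not Twitter's real algorithm).
# A saliency map assigns each pixel a score for how likely a viewer is to
# look at it first; the crop window is then centered on the hottest spot.

def most_salient_point(saliency):
    """Return (row, col) of the maximum value in a 2D saliency map."""
    best = (0, 0)
    for r, row in enumerate(saliency):
        for c, score in enumerate(row):
            if score > saliency[best[0]][best[1]]:
                best = (r, c)
    return best

def crop_window(saliency, crop_h, crop_w):
    """Pick a crop_h x crop_w window centered on the most salient point,
    clamped so it stays inside the image bounds."""
    h, w = len(saliency), len(saliency[0])
    r, c = most_salient_point(saliency)
    top = min(max(r - crop_h // 2, 0), h - crop_h)
    left = min(max(c - crop_w // 2, 0), w - crop_w)
    return (top, left, top + crop_h, left + crop_w)

# Toy 4x6 saliency map with a hotspot at row 1, col 4.
sal = [
    [0.1, 0.1, 0.1, 0.2, 0.3, 0.2],
    [0.1, 0.1, 0.2, 0.4, 0.9, 0.3],
    [0.1, 0.1, 0.1, 0.2, 0.3, 0.2],
    [0.0, 0.1, 0.1, 0.1, 0.1, 0.1],
]
print(crop_window(sal, 2, 2))  # -> (0, 3, 2, 5), centered near the hotspot
```

The controversy is about how such a model learns its scores: if the training data over-represents one group's faces as "salient," crops will systematically favor that group.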
In its own testing, Twitter said, it didn't see any bias.
But since people have different tastes and preferences, this kind of machine learning has the "potential for harm."
Because of this, Twitter is giving the control back to users.
So going forward, Twitter is becoming more transparent about how its AI works and is committed to the "what you see is what you get" principle of design, meaning the image users see in the preview is what it will look like in the tweet.
"There may be some exceptions to this, such as photos that aren’t a standard size or are really long or wide. In those cases, we’ll need to experiment with how we present the photo in a way that doesn’t lose the creator’s intended focal point or take away from the integrity of the photo," Twitter explained.
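The exception Twitter describes can be sketched as a simple aspect-ratio check. The threshold below is an assumption for illustration, not a rule Twitter has published:

```python
# Hypothetical check for the "not a standard size" exception.
# The max_ratio threshold of 2.0 is an assumed value for illustration only.

def needs_special_handling(width, height, max_ratio=2.0):
    """Flag images whose aspect ratio is too extreme to show uncropped."""
    ratio = max(width, height) / min(width, height)
    return ratio > max_ratio

print(needs_special_handling(1200, 400))  # very wide panorama -> True
print(needs_special_handling(800, 600))   # standard photo -> False
```

Images flagged this way would fall back to some experimental presentation rather than the straight WYSIWYG preview.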
Bias in AI and machine-learning technology is already an industry-wide issue.
Bias is something that should be addressed to create smarter AI, as Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum, once said. But since AI bias often stems from the biases of the humans who build and train these systems, removing it can be quite difficult.
This is why an issue like Twitter's was bound to happen.
"There's lots of work to do, but we're grateful for everyone who spoke up and shared feedback on this. We're eager to improve and will share additional updates as we have them," Twitter said.