Twitter Offers Cash 'Bounty' For Anyone Who Can Find Bias In Its Algorithms


Computers are powerful, but they are limited mostly by how much information they can process.

This is why computers become increasingly capable as their hardware improves and becomes more efficient. They can help humans in many ways, performing calculations far faster and providing answers or solutions that would otherwise be difficult to come up with. But that process can sometimes surface flaws.

And those flaws often stem from the flawed ways computers understand the world.

To make computers smart, researchers and tech companies can use AI and algorithms.

The methods involve training a system so that it learns what to do when faced with a similar problem it needs to solve.

But again, since AI is trained on data sets that can contain biases, the system can produce answers that are also biased.

Dealing with the so-called "black box," Twitter is stepping up its efforts by offering a cash "bounty" to whoever can help it root out algorithmic bias.

The bounty program is modeled on the usual bug bounty programs found elsewhere on various websites and platforms, which offer prizes to anyone who finds security holes and vulnerabilities, according to Twitter executives Rumman Chowdhury and Jutta Williams.

However, Twitter's version of the bug bounty program would be "the industry's first algorithmic bias bounty competition," with prizes up to $3,500.

"Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they've already reached the public," wrote Chowdhury and Williams in a blog post.

"We want to change that."

"We're inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public."

"We want to cultivate a similar community... for proactive and collective identification of algorithmic harms."

The bug bounty program for finding bias in Twitter's algorithm launches after a group of researchers previously found the algorithm tended to exclude Black people and men.

Amid people's growing concerns about automated algorithmic systems, this bug bounty program is part of a wider effort across the tech industry to ensure AIs act ethically.

This is why the social networking company said that its bounty program is aimed at identifying "potential harms of this algorithm beyond what we identified ourselves."

AI and algorithms have revolutionized computing by teaching computers how to make decisions based on real-world data instead of rigid programming rules.

This helps with messy and tedious tasks like understanding speech, screening spam and identifying people by their faces.
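The contrast between rigid programming rules and learning from data can be illustrated with a toy spam screen. Everything below is a minimal, illustrative sketch — not Twitter's actual code — with invented messages and a deliberately simplistic scoring scheme:

```python
# A minimal sketch (illustrative only) contrasting a rigid rule with a
# data-driven approach to spam screening.

# Rigid rule: flag any message containing a single hard-coded keyword.
def rule_based_is_spam(message):
    return "winner" in message.lower()

# Data-driven: learn which words appear more often in spam than in
# legitimate mail, then score new messages by the words they contain.
def train(examples):
    """examples: list of (message, is_spam) pairs -> per-word spam scores."""
    scores = {}
    for message, is_spam in examples:
        for word in message.lower().split():
            scores[word] = scores.get(word, 0) + (1 if is_spam else -1)
    return scores

def learned_is_spam(scores, message):
    total = sum(scores.get(w, 0) for w in message.lower().split())
    return total > 0

# Toy training data; in practice this would be thousands of labeled messages.
training_data = [
    ("claim your free prize now", True),
    ("free prize waiting for you", True),
    ("meeting moved to friday", False),
    ("lunch on friday?", False),
]
scores = train(training_data)
print(learned_is_spam(scores, "free prize inside"))   # True - learned from data
print(rule_based_is_spam("free prize inside"))        # False - no "winner" keyword
```

The sketch also shows where bias creeps in: the learned scores are only as good as the labeled examples. If the training data over- or under-represents certain patterns, the system inherits that skew — which is exactly the class of problem Twitter's bounty asks participants to find.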

The algorithms that power AI, however, can be opaque and reflect problems in training data.

Twitter is just one out of the many tech companies and services that use AI and algorithms extensively.

This is why tackling algorithmic bias has become an increasingly important concern for Twitter.

Twitter knows that AI can cause problems if bias is present, or if its models are not trained effectively.

This bounty program is Twitter's way to solidify the standards around ideas like representational harm.

Before this, Twitter launched an algorithmic fairness initiative and scrapped its automated image-cropping system after a review found bias in the algorithm controlling the feature.

The social media platform said it found the algorithm delivered "unequal treatment based on demographic differences," with white people and males favored over Black people and females, and an "objectification" bias that focused on a woman's chest or legs, described as "male gaze."

Read: Twitter's 'Responsible Machine Learning Initiative' Wants To Expose Its AI Biases