Twitter's 'Responsible Machine Learning Initiative' Wants To Expose Its AI Biases

When humans are capable of creating machines that can think for themselves, humans and machines can end up with different opinions.

When an AI system learns and receives inputs to create outputs, its operations are not visible to the user or any other interested party. While humans know how to make machines think, humans don't exactly know why a machine comes up with one result and not another.

This is called the AI black box, which, in a general sense, is an impenetrable system.

This happens when the inner workings of an AI system are not easily viewed or understood.

Black boxes can go unnoticed until they cause problems so large that it becomes necessary to investigate. One of the most common examples is AI bias.

Sometimes, AI can be racist, simply because it learned from datasets that contain the conscious or unconscious prejudices of their creators. In other cases, AI going rogue can result from undetected errors, among other causes.

Twitter, the popular microblogging platform, uses AI to make some of its decisions, and has experienced its own share of controversies as a result.

Among other reasons, this is why the company launched what it calls the Responsible Machine Learning Initiative, or Responsible ML. In a blog post, Twitter said:

"The journey to responsible, responsive, and community-driven machine learning (ML) systems is a collaborative one. Today, we want to share more about the work we’ve been doing to improve our ML algorithms within Twitter"

According to Twitter, Responsible ML consists of the following pillars:

  1. Taking responsibility for Twitter's algorithmic decisions.
  2. Equity and fairness of outcomes.
  3. Transparency about Twitter's decisions and how it arrived at them.
  4. Enabling agency and algorithmic choice.
"Responsible technological use includes studying the effects it can have over time. When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter and we want to make sure we’re studying those changes and using them to build a better product."

In other words, Responsible ML aims to shed some light and create what is called explainable AI, simply by providing more transparency.

Twitter knows that it can be next to impossible to earn people's trust in AI tools that make crucial decisions opaquely, without properly explaining their rationale.

This is particularly true in areas where people do not want to completely delegate decisions to machines.

Here, Responsible ML will first tackle a gender and racial bias analysis of its image cropping algorithm, as well as a fairness assessment of Twitter's Home timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies across seven countries.
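
To get a rough sense of what a fairness assessment across subgroups can involve, here is a minimal sketch of a demographic-parity style check. Everything in it, the data, the column names, and the metric, is a made-up illustration and not Twitter's actual methodology, which the company has not published in code form.

```python
# Illustrative only: a simplified fairness check comparing how often a
# recommendation system surfaces content for different (hypothetical) subgroups.
import pandas as pd

# Hypothetical log of recommendation decisions: one row per user,
# with the subgroup they belong to and whether a Tweet was recommended.
logs = pd.DataFrame({
    "subgroup":    ["A", "A", "B", "B", "B", "A", "B", "A"],
    "recommended": [1,    0,   1,   1,   0,   1,   0,   1],
})

# Rate at which each subgroup receives recommendations.
rates = logs.groupby("subgroup")["recommended"].mean()
print(rates)

# A simple "parity gap": a large difference between subgroups would flag
# the algorithm for closer human review.
gap = rates.max() - rates.min()
print(f"Parity gap: {gap:.2f}")
```

In practice, such audits use far richer data and metrics, but the core idea is the same: measure outcomes per subgroup and investigate large disparities.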

Responsible ML has the goal of providing Twitter with findings it can use to improve the experience on the platform.

"This may result in changing our product, such as removing an algorithm and giving people more control over the images they Tweet, or in new standards into how we design and build policies when they have an outsized impact on one particular community," Twitter said.

In the past, tech companies have tended to defend their AI projects by saying that they are running responsible AI initiatives. Twitter is taking a different approach, as it wants to bring more transparency to an opaque subject, which should be more appealing to ethicists.

One of the first to agree with this is Margaret Mitchell, a former member of Google’s ethical AI team.

Published: 
14/04/2021