When humans create machines that can think for themselves, humans and machines can arrive at different conclusions.
When an AI system learns from inputs to produce outputs, its operations are not visible to the user or any other interested party. While humans know how to make machines think, they don't know exactly why a machine arrives at one result rather than another.
This is called the AI black box, which in a general sense is an impenetrable system. It occurs when the inner workings of AI software cannot be easily viewed or understood.
Black boxes can go unnoticed until they cause problems so large that an investigation becomes necessary. One of the most common examples is AI bias.
Sometimes, AI can be racist, simply because it learned from datasets that contain the conscious or unconscious prejudices of their creators. In other cases, AI going rogue can result from undetected errors, among other causes.
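To make the point concrete, here is a minimal sketch of how bias enters a model through its training data rather than its algorithm. The dataset, group names, and scoring scheme below are all hypothetical, invented purely for illustration:

```python
from collections import Counter

# Toy training data: the labels themselves encode a prejudice --
# every sentence mentioning "groupB" was labeled negative (0).
# All names and sentences here are hypothetical.
training_data = [
    ("groupA person is friendly", 1),
    ("groupA person is helpful", 1),
    ("groupB person is late", 0),
    ("groupB person is rude", 0),
]

def train(data):
    """Count how often each word appears with a positive vs. negative label."""
    pos, neg = Counter(), Counter()
    for text, label in data:
        target = pos if label == 1 else neg
        target.update(text.split())
    return pos, neg

def score(text, pos, neg):
    """Score > 0 reads as 'positive'; the model only reflects its data."""
    return sum(pos[w] - neg[w] for w in text.split())

pos, neg = train(training_data)

# A neutral sentence is scored negatively purely because it contains
# the word "groupB" -- the bias came from the dataset, not the code.
print(score("groupB person is friendly", pos, neg))  # -1
```

Nothing in the training or scoring code mentions any group, yet the output discriminates; that is exactly why a black-box model can look fine from the outside while its data quietly does the damage.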
Twitter, the popular microblogging platform, uses AI to make some of its decisions, and has experienced its own share of controversies, like the one below:
We tested for bias before shipping the model & didn't find evidence of racial or gender bias in our testing. But it’s clear that we’ve got more analysis to do. We'll continue to share what we learn, what actions we take, & will open source it so others can review and replicate.
— Twitter Comms (@TwitterComms) September 20, 2020
Among other reasons, this is why the company launched what it calls the Responsible Machine Learning Initiative, or Responsible ML. In a blog post, Twitter said that Responsible ML consists of the following pillars:
- Taking responsibility for Twitter's algorithmic decisions.
- Equity and fairness of outcomes.
- Transparency about Twitter's decisions and how it arrived at them.
- Enabling agency and algorithmic choice.
In other words, Responsible ML aims to shed some light and create what is called explainable AI, simply by providing more transparency.
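The idea behind explainable AI can be sketched in a few lines: instead of returning only a decision, the system also reports how much each input contributed to it. The model and its weights below are hypothetical toy values, not anything Twitter has published:

```python
# Hypothetical word weights for a toy linear scoring model.
weights = {
    "friendly": 2.0,
    "rude": -3.0,
    "late": -1.0,
}

def explain(text):
    """Return the total score plus a per-word breakdown of contributions."""
    contributions = {w: weights.get(w, 0.0) for w in text.split()}
    return sum(contributions.values()), contributions

total, breakdown = explain("friendly but late")
print(total)      # 1.0
print(breakdown)  # {'friendly': 2.0, 'but': 0.0, 'late': -1.0}
```

A black-box system would stop at the score; an explainable one also surfaces the breakdown, which is the kind of transparency initiatives like Responsible ML are reaching for.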
Twitter knows it can be next to impossible to earn people's trust in AI tools that make crucial decisions in an opaque way, without properly explaining their rationale.
And this is particularly true in areas where people do not want to completely delegate decisions to machines.
Here, Responsible ML will first tackle a gender and racial bias analysis, a fairness assessment of Twitter's Home timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies across seven countries.
Responsible ML aims to provide Twitter with findings it can use to improve the experience on the platform.
"This may result in changing our product, such as removing an algorithm and giving people more control over the images they Tweet, or in new standards into how we design and build policies when they have an outsized impact on one particular community," Twitter said.
In the past, tech companies have tended to defend their AI projects by saying they follow responsible AI initiatives. Twitter is taking a different approach: it wants to bring more transparency to an opaque subject, which should be more appealing to ethicists.
One of the first to agree with this is Margaret Mitchell, a former member of Google's ethical AI team:
Cool ideas here, unique to the Twitter approach:
-community-driven ML, agency and choice
-studying effects over time
-in-depth assessment of harms
Excited about where this work could head. Congrats to @ruchowdh @quicola @williams_jutta! https://t.co/45dUMvlsXn
— MMitchell (@mmitchell_ai) April 14, 2021