AI Bias Problem Must Be Fixed To Create Smarter AIs

Kay Firth-Butterfield
Head of AI and ML at World Economic Forum

Artificial intelligence (AI) powered by machine learning (ML) can do a lot of things, and all of those things are meant to help its human counterparts solve problems in the real world.

As AI becomes more ubiquitous, we need to address ethical considerations around privacy, bias, transparency and accountability, because AI systems are increasingly being used to make decisions that affect people's lives.

Companies must teach AI with care, and eliminating bias is a good place to start. We need to "raise" AI to act as a responsible representative and a contributing member of society.

As a barrister-at-law and former part-time judge, Kay Firth-Butterfield, who heads Artificial Intelligence and Machine Learning at the World Economic Forum, has developed a specialty of battling injustice before it happens. Her focus is on preventing us from repeating the mistakes of the past as we build the technology of the future.

One of her key focuses is ensuring that we don't pass our biases into the AI that increasingly makes decisions on everything from creditworthiness to medical diagnoses.

Kay Firth-Butterfield wants to help stop AI biases

From gender to racial bias, AI learns from the datasets humans give it. That is why we need to comb through the data to correct biased correlations, such as the suggestion that men are doctors while women are nurses.
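A minimal way to surface that kind of correlation is simply to measure how often an outcome co-occurs with a protected attribute in the training data. The Python sketch below is purely illustrative: the records and field names are hypothetical, not taken from the article or any real dataset.

```python
from collections import Counter

# Hypothetical (gender, occupation) pairs standing in for a training set.
# The skew is deliberate, to show the kind of correlation an audit flags.
records = [
    ("male", "doctor"), ("male", "doctor"), ("male", "nurse"),
    ("female", "nurse"), ("female", "nurse"), ("female", "doctor"),
    ("male", "doctor"), ("female", "nurse"), ("male", "doctor"),
    ("female", "nurse"),
]

counts = Counter(records)
genders = {g for g, _ in records}
occupations = {o for _, o in records}

# Report the conditional rate of each occupation within each gender group.
for occupation in sorted(occupations):
    for gender in sorted(genders):
        group_size = sum(1 for g, _ in records if g == gender)
        rate = counts[(gender, occupation)] / group_size
        print(f"P({occupation} | {gender}) = {rate:.2f}")
```

If P(doctor | male) comes out far higher than P(doctor | female), a model trained on this data is likely to learn "men are doctors, women are nurses" as a predictive shortcut; combing through the data means catching that skew and correcting it, for example by rebalancing the sample.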

"Where bias is built-in, it starts with the biases of the people who are coding the AI," she explained. Companies have been doing some work to eliminate such biases, but to make this effort fruitful, "you need to have a human being checking the biases, and that human being might have biases themselves."

"The other way that bias gets into the machine is, of course, because machine learning depends on data and all data is historic," said Firth-Butterfield. "That means we are seeing prejudices that we as humans have exhibited coming through into the algorithm."

"When we're talking about bias, we're worrying first of all about the focus of the people who are creating the algorithms," Firth-Butterfield said. "We need to make the industry much more diverse."

"We need to recruit a diverse group of people," she said, noting that AI has been developed mostly by men. After all, Silicon Valley has a long history of being criticized for its lack of diversity.

"But not just women: race, age, persons from the developing world and so on. Diversity, generally, needs to be thought about."

Kay Firth-Butterfield, a co-founder of the Consortium for Law and Policy of Artificial Intelligence and Robotics at the University of Texas, taught courses on law and emerging technologies in which she explored AI's potential to promote equality.

"If you’re thinking about gender, for example, a machine that can look at someone’s face and decide whether that person is gay or not, what are the ethics surrounding that? Should you be using that tool? Should it have been created?"

Solving this issue should make AI smarter, because without biases, AI can really help humanity. But before that happens, "we need to break down social issues when it comes to technology," because "technology isn’t a solution to our social issues."

According to Firth-Butterfield, diversity is the key, and her role is to help ensure that the technology doesn’t leave humanity behind. That means battling injustice before it happens.