Safeguarding Against Unsafe Uses Of AI Requires 'Curbing The Power Of The Companies Who Develop It'

Timnit Gebru
co-founder of Black in AI, founder of the Distributed Artificial Intelligence Research Institute, former co-lead of Google's Ethical AI Team

Not everyone saw it coming, but Timnit Gebru did, and she tried to warn everyone. Unfortunately, she was fired by her employer for trying.

Her job was to ensure that AI products would not perpetuate racism or other inequalities.

As a researcher of color, she has extensive experience under her belt and has spoken publicly about many issues concerning AI.

But it was a research paper explaining how AI systems can develop racist and sexist biases that led to her departure.

After being ousted from Google, Timnit Gebru was ready to embark on a new phase of her career. That is when she became vocal.

"The No. 1 thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it."
Timnit Gebru.
Timnit Gebru, a respected AI researcher, was fired for questioning biases built into AI systems. (Credit: Cody O'Loughlin for The New York Times)

When Gebru joined Google back in 2018, tech companies in Silicon Valley were already pouring huge sums of money into AI development.

AI was already the big hype, and tech companies were pushing past boundaries, crossing milestone after milestone, and creating ever better and smarter AI systems.

The idea was that the more data and processing power available, the better they could train AI systems to perform a wide array of tasks, such as recognizing speech, identifying a face in a photo, or targeting people with ads based on their past behavior.

In theory, by feeding enormous amounts of data into high-powered machines, companies would create ever more powerful AI systems that would eventually benefit their owners by generating billions of dollars in profits.

But this is where things started to go wrong.

"I am very concerned about the future of AI, Gebru once said.

"Not because of the risk of rogue machines taking over. But because of the homogeneous, one-dimensional group of men who are currently involved in advancing the technology."

Most people in the industry are white men; women and people of color, like Gebru, are severely underrepresented.

When an AI is trained on data that reflects inequalities—as most data from the real world does—the system will project those inequalities into the future.
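This projection of past inequality into future predictions can be shown with a toy example (not from the article; the hiring scenario and all names here are hypothetical). A model trained on biased historical records simply learns and repeats the bias:

```python
# Hypothetical historical hiring records: (group, hired) pairs in which
# group "A" was hired far more often than group "B".
history = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 1)] * 30 + [("B", 0)] * 70

def train(records):
    """Learn per-group hire rates -- a minimal stand-in for a classifier."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group):
    """Predict 'hire' when the learned rate for the group exceeds 0.5."""
    return rates[group] > 0.5

model = train(history)
print(predict(model, "A"))  # True  -- the past advantage is projected forward
print(predict(model, "B"))  # False -- the past disadvantage is, too
```

Nothing in the "model" is malicious; it faithfully reflects its data, which is exactly the problem Gebru describes.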

In other words, Gebru knows how dangerously biased AI can be.

"We need to let people who are harmed by technology imagine the future that they want."

She said this because she believed the enthusiasm around AI models was leading the industry in a worrying direction.

Things went from bad to worse as companies raced to build ever-bigger data sets, writing programs that scraped text from the Internet to use as training data.

"This means that white supremacist and misogynistic, ageist, etc., views are overrepresented," she said.

Reflecting on her time at Google, she said she had many issues while working at the company.

But among the many issues, "the censorship of my paper was the worst instance," she said.

That aside, Gebru is considered one of the world's most respected ethical-AI researchers.

Gebru is a leading figure in a constellation of scholars, activists, regulators and technologists collaborating to reshape ideas about what AI is and what it should be.

Gebru used her expertise to bring ethics to AI, recognizing the field's overwhelming lack of diversity by reflecting on experiences from earlier in her life.