Only 18% Of Next-Generation Computer Scientists Are Learning AI Ethics, Study Finds

06/07/2020

As people come to rely more and more on AI-powered automation, backlash over the technology's racial and gender biases is a growing concern.

The hope is that future data scientists can do better than the previous generation, but researchers are rather pessimistic.

According to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda, only 15% of instructors and professors said that they’re teaching AI ethics, and just 18% of students indicated that they’re learning about the subject.

After surveying respondents from more than 100 countries, Anaconda found that the ethics gap extends beyond academia into industry.

While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.

The survey found that only 15% of respondents said their organization has implemented a fairness system, and just 19% reported having an explainability tool in place.
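
As an illustration of what a fairness tool checks (this sketch is not from the study; all data and names in it are hypothetical), one of the simplest tests is demographic parity: comparing a model's rate of positive predictions across demographic groups.

```python
# Minimal sketch of a demographic parity check, the kind of test
# fairness tools automate. All predictions and groups are hypothetical.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive (1) predictions per group."""
    tally = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        tally[group][0] += pred
        tally[group][1] += 1
    return {g: pos / total for g, (pos, total) in tally.items()}

# Hypothetical model outputs (1 = approved) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'A': 0.6, 'B': 0.4}
print(f"parity gap: {gap:.2f}")   # a large gap flags potential bias
```

Real fairness tooling layers many such metrics on top of this basic idea, but the survey suggests even checks this simple are far from universal.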

In the report, Peter Wang, CEO and co-founder of Anaconda, said:

"Data science has the ability to be transformational for businesses, but our 2020 survey shows that both organizations and professionals in the space are still in the process of maturing."

"From broadening the data science educational curriculum to being more intentional with open-source security, there are clear learnings here for the industry at large to implement in order to improve. We’ve seen positive progress in many of these areas, but there is still work to be done."

Notably, the study found that the low figures aren't caused by a lack of interest.

Instead, nearly half of respondents said the social impacts of bias or privacy were the “biggest problem to tackle in the AI/ML arena today.” But those concerns clearly aren’t reflected in their curricula.

Because of this, the authors of the study warn that the effect could have far-reaching consequences:

"Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions."

The study showed that while businesses and academics are increasingly talking about AI ethics, the talk means little because that eagerness and interest rarely turn into real action.

Humans have long been known to be error-prone and biased. But the algorithms that power AI are not necessarily better, simply because the technology learns from its human counterparts.

AI systems can be biased based on who built them, how they were developed, and how they are ultimately used.
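
As a toy illustration of that point (the data and feature names below are entirely hypothetical, not from the study): a model fit to skewed historical decisions reproduces the skew, even when the sensitive attribute is never used directly, because a correlated proxy feature carries the same signal.

```python
# Toy illustration with hypothetical data: a "model" fit to skewed
# historical decisions reproduces the skew through a proxy feature,
# even though group membership is never given to it directly.
import random

random.seed(0)

# Historical decisions: group A was approved ~80% of the time,
# group B ~20%. "region" is a proxy perfectly correlated with group.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    region = 1 if group == "A" else 0  # proxy for group
    approved = 1 if random.random() < (0.8 if group == "A" else 0.2) else 0
    history.append((region, approved))

def learned_rate(data, region_value):
    """Approval rate the model learns for one proxy value."""
    outcomes = [a for r, a in data if r == region_value]
    return sum(outcomes) / len(outcomes)

print("learned rate, region=1 (group A):", round(learned_rate(history, 1), 2))
print("learned rate, region=0 (group B):", round(learned_rate(history, 0), 2))
# The learned rates mirror the historical bias (~0.8 vs ~0.2),
# so the model discriminates without ever seeing "group".
```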

It’s difficult for anyone to figure out exactly how a system might be susceptible to algorithmic bias, especially since machine-learning technology often operates in a so-called 'black box'. Because of this, researchers can't really decode the digital brains of AIs; they only know how a model should perform based on the data that built it.
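
This is where explainability tools come in: because the model's internals are opaque, such tools treat it as a function to be queried, perturbing inputs and watching how the output changes. Below is a minimal sketch of that idea (not from the article), using an entirely hypothetical stand-in model:

```python
# Minimal sketch of black-box probing, the crude idea behind many
# explainability tools. The model below is a hypothetical stand-in:
# we pretend we can only query it, not read its internals.

def opaque_model(income, age, region):
    """Stand-in for a black box that returns 1 (approve) or 0 (deny)."""
    return 1 if (0.7 * income + 0.01 * age + 0.5 * region) > 0.9 else 0

baseline = {"income": 1.0, "age": 40, "region": 0}
base_pred = opaque_model(**baseline)
print("baseline prediction:", base_pred)

# Perturb one feature at a time and see whether the decision flips;
# features that flip the decision are the ones the model leans on.
for feature, delta in [("income", -0.5), ("age", 30), ("region", 1)]:
    probe = dict(baseline, **{feature: baseline[feature] + delta})
    print(f"perturb {feature}: flipped = {opaque_model(**probe) != base_pred}")
```

Probing of this kind only reveals behavior, not reasoning, which is why bias so often surfaces only in a deployed system's outputs.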

What people will eventually see is the end result.

In other words, people only discover that an AI has gone rogue or turned racist after it has been unleashed.

When that happens, it's already too late. This is why AI bias is problematic.