AI Now 2019 Report On Algorithmic Bias, Discrimination, Ethics, And Principles

16/12/2019

AI came a long way before entering people's lives through the internet, smart devices, and products across numerous industries. Still, the technology is relatively young.

To make the technology work, it needs a lot of data to begin with. To get it, companies, organizations, and researchers collect data themselves or outsource the collection, gathering whatever they need to train their AI models.

What matters here, and what has become a concern, is where that data comes from.

"Community groups, workers, journalists, and researchers—not corporate AI ethics statements and policies—have been primarily responsible for pressuring tech companies and governments to set guardrails on the use of AI.” AI Now 2019 report said

The annual report from AI Now takes a detailed look at the social impact that uses of AI have on individuals, communities, and the population at large. AI Now gathers its information and analysis from experts in multiple disciplines around the world and works closely with partners throughout the IT, legal, and civil rights communities.

AI surveillance
CCTV footage in Beijing is processed by an AI-powered face recognition system. This kind of nationwide surveillance raises concerns about privacy and people's civil rights.

In its 2019 report, AI Now begins with twelve recommendations based on the institute’s conclusions:

  1. Regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities.
  2. Government and business should halt all use of facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place.
  3. The AI industry needs to make significant structural changes to address systemic racism, misogyny, and lack of diversity.
  4. AI bias research should move beyond technical fixes to address the broader politics and consequences of AI’s use.
  5. Governments should mandate public disclosure of the AI industry’s climate impact.
  6. Workers should have the right to contest exploitative and invasive AI—and unions can help.
  7. Tech workers should have the right to know what they are building and to contest unethical or harmful uses of their work.
  8. States should craft expanded biometric privacy laws that regulate both public and private actors.
  9. Lawmakers need to regulate the integration of public and private surveillance infrastructures.
  10. Algorithmic Impact Assessments must account for AI’s impact on climate, health, and geographical displacement.
  11. Machine learning researchers should account for potential risks and harms and better document the origins of their models and data.
  12. Lawmakers should require informed consent for use of any personal data in health-related AI.

AI Now suggests that corporations and governments should stop monetizing AI once it starts to undermine social and ethical accountability.

A lack of regulation and ethical oversight has already led to the surveillance of citizens in several countries. And because of the black box problem, AI systems have repeatedly been shown to be biased in ways that are difficult to detect and explain.

AI Now notes that "we saw a wave of pushback, as community groups, researchers, policymakers, and workers demanded a halt to risky and dangerous AI." But it also pointed out that this has done relatively little to slow the flow of harmful AI.

The report also examined “affect recognition” AI, a subset of facial recognition that claims to detect people's emotions and has made its way into schools and businesses around the world. The institute again suggested that this kind of AI can be biased and can discriminate against people.

AI Now warns that these problems - biased AI, discriminatory facial recognition systems, and AI-powered surveillance - cannot be solved by merely patching systems or tweaking algorithms. There is no “version 2.0” that solves these AI issues.

"AI Now’s 2019 report spotlights these growing movements, examining the coalitions involved and the research, arguments, and tactics used. We also examine the specific harms these coalitions are resisting, from AI-enabled management of workers, to algorithmic determinations of benefits and social services, to surveillance and tracking of immigrants and underrepresented communities."

"What becomes clear is that across diverse domains and contexts, AI is widening inequality, placing information and control in the hands of those who already have power and further disempowering those who don’t. The way in which AI is increasing existing power asymmetries forms the core of our analysis, and from this perspective we examine what researchers, advocates, and policymakers can do to meaningfully address this imbalance."