'Red Teaming Network' Is Where OpenAI Invites Experts To Help It Improve Its AI

OpenAI Red Teaming Network

Time flies, and OpenAI is growing so fast that it has more than a handful of things to deal with.

Since it introduced ChatGPT, the company has cemented itself as one of the leaders in the AI field. And with more and more users using its products, and the smarter its Large Language Models have become, OpenAI is putting out an open call for experts.

Through what it calls the 'Red Teaming Network', the company wants people from diverse fields to help it in evaluating and stress testing its AI models.

The goal is to identify potential risks and improve the safety of systems like ChatGPT and DALL·E, DALL·E 2 before their public, official release.

This initiative aligns with OpenAI's stated mission of developing AI, or particularly, AGI, that broadly beneficial to everyone.

According to OpenAI in the announcement:

"Assessing AI systems requires an understanding of a wide variety of domains, diverse perspectives and lived experiences. We invite applications from experts from around the world and are prioritizing geographic as well as domain diversity in our selection process."

The term "red teaming" is often used to describe a practice of identifying bugs and vulnerabilities inside systems and products by simulating adversarial attacks.

This has been a part of OpenAI’s iterative deployment process for AI systems.

While the company has previously engaged with external experts for similar evaluations, Red Teaming Network seeks to establish more continuous and iterative input from a trusted community of experts.

In return, OpenAI said that by joining, there are benefits:

"This network offers a unique opportunity to shape the development of safer AI technologies and policies, and the impact AI can have on the way we live, work, and interact. By becoming a part of this network, you will be a part of our bench of subject matter experts who can be called upon to assess our models and systems at multiple stages of their deployment."
The OpenAI Red Teaming Network application asks applicants about their domains of expertise.
The OpenAI Red Teaming Network application asks applicants about their domains of expertise.

The things OpenAI asks, extend far beyond traditional computer science or AI research, encompassing domains such as biology, law, and even linguistics. Participants also range from individual subject-matter experts to research institutions and civil society organizations.

This multidisciplinary approach aims to capture a 360°-view of the risks and opportunities associated with AI technologies.

Applicants who wish to be part of the team, must sign a non-disclosure agreements, and be compensated for red teaming projects commissioned by the company.

The time OpenAI expects members to commit, could be as little as 1 hour per month. The company is also not expecting members to contribute every month.

OpenAI also said they will selectively tap network members for projects based on the right fit, rather than involving every expert in testing each new model.

Along with red teaming, OpenAI pointed to other collaborative opportunities for experts to help shape safer AI, like contributing evaluations to their open source repository.

As AI capabilities rapidly advance, robust testing by diverse experts provides a check on potential harms. Red Teaming Network helps provide OpenAI the access to the boarder community, by tapping into their unique expertise, as its way to help shape the development of safer AIs.

Published: 
21/09/2023