Time flies, and OpenAI is growing so fast that it has more than a handful of things to deal with.
Since it introduced ChatGPT, the company has cemented itself as one of the leaders in the AI field. And with more and more users using its products, and the smarter its Large Language Models have become, OpenAI is putting out an open call for experts.
Through what it calls the 'Red Teaming Network', the company wants people from diverse fields to help it in evaluating and stress testing its AI models.
The goal is to identify potential risks and improve the safety of systems like ChatGPT and DALL·E, DALL·E 2 before their public, official release.
This initiative aligns with OpenAI's stated mission of developing AI, or particularly, AGI, that broadly beneficial to everyone.
We're inviting domain experts from a variety of fields to join the OpenAI Red Teaming Network. Apply to collaborate with us to improve the safety of our models: https://t.co/YiwgsNEXYA
— OpenAI (@OpenAI) September 19, 2023
According to OpenAI in the announcement:
The term "red teaming" is often used to describe a practice of identifying bugs and vulnerabilities inside systems and products by simulating adversarial attacks.
This has been a part of OpenAI’s iterative deployment process for AI systems.
While the company has previously engaged with external experts for similar evaluations, Red Teaming Network seeks to establish more continuous and iterative input from a trusted community of experts.
In return, OpenAI said that by joining, there are benefits:
The things OpenAI asks, extend far beyond traditional computer science or AI research, encompassing domains such as biology, law, and even linguistics. Participants also range from individual subject-matter experts to research institutions and civil society organizations.
This multidisciplinary approach aims to capture a 360°-view of the risks and opportunities associated with AI technologies.
Applicants who wish to be part of the team, must sign a non-disclosure agreements, and be compensated for red teaming projects commissioned by the company.
The time OpenAI expects members to commit, could be as little as 1 hour per month. The company is also not expecting members to contribute every month.
OpenAI also said they will selectively tap network members for projects based on the right fit, rather than involving every expert in testing each new model.
Along with red teaming, OpenAI pointed to other collaborative opportunities for experts to help shape safer AI, like contributing evaluations to their open source repository.
As AI capabilities rapidly advance, robust testing by diverse experts provides a check on potential harms. Red Teaming Network helps provide OpenAI the access to the boarder community, by tapping into their unique expertise, as its way to help shape the development of safer AIs.