
The mind is fascinating, and it is what makes each of us unique. Minds of flesh and blood are not the only ones with their own ways of thinking: AI, too, can be unpredictable.
OpenAI is a nonprofit founded by several Silicon Valley figures, including LinkedIn's Reid Hoffman, Facebook board member and Palantir founder Peter Thiel, and Tesla and SpaceX head Elon Musk. The research company has previously shown that AI systems can develop their own habits, sometimes unexpected, unwanted, and complex, in pursuit of their goals.
For example, in a computer game, an AI agent may figure out how to "glitch" its way to a higher score. Facebook's AI agents also unexpectedly developed their own language, leading Facebook to abandon the project.
Because AI can operate on logic that is sometimes too complex for humans to follow, the researchers at OpenAI suggest having one AI debate another's reasoning in natural language while humans observe.
However, making two AI programs argue with one another requires sophisticated technology that is currently out of reach. This is the main limitation.
So for now, OpenAI has only explored the idea with two AI systems trying to convince an observer about a hidden character by slowly revealing individual pixels.
In other words, the researchers had the two systems discuss a subject, debating a particular objective to explain the logic behind an action.
OpenAI, which focuses on promoting safe AI, has built a Debate Game on its website where any two people can play the roles of the debating AI systems while a third serves as the judge.
"We believe that this or a similar approach could eventually help us train AI systems to perform far more cognitively advanced tasks than humans are capable of, while remaining in line with human preferences," the researchers write in a blog post outlining the concept.

The two participants compete over an image that the judge cannot see. They take turns describing the image and trying to convince the judge of its nature while highlighting parts of it.
Each debater can use "Reveal Pixel" once, exposing the true value of a single pixel of the image.
Throughout the game, the judge watches for which AI is being honest. Ultimately, the judge clicks "Reveal Image" to check which participant was telling the truth. This lets the judge see when a machine is lying and when its inferences should be second-guessed.
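The game's core dynamic can be sketched in a few lines of code. This is purely an illustrative toy, not OpenAI's implementation: it assumes a tiny hidden image, an honest debater whose statements match it, a liar who misstates one pixel, and a judge who trusts whichever account is consistent with the revealed evidence.

```python
import random

def judge(statements, revealed):
    """Return True if every statement agrees with the revealed pixels."""
    return all(revealed.get(pos, val) == val for pos, val in statements)

# Hidden 3x3 binary image (the judge cannot see this directly).
image = {(r, c): random.randint(0, 1) for r in range(3) for c in range(3)}

# The honest debater describes the image truthfully.
honest = list(image.items())

# The liar flips one pixel's value in its account of the image.
flip = random.choice(list(image))
liar = [(p, v ^ 1 if p == flip else v) for p, v in image.items()]

# The honest debater's best move: spend its one "Reveal Pixel" on the
# pixel the liar misstated, exposing the contradiction with ground truth.
revealed = {flip: image[flip]}

print(judge(honest, revealed))  # True: the honest account fits the evidence
print(judge(liar, revealed))    # False: the liar contradicts the revealed pixel
```

The point the toy illustrates is the one OpenAI argues for: a single truthful reveal is enough to defeat a lie, so honesty becomes the winning strategy even when the judge sees only a sliver of the full picture.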
This approach aims to prevent AI from doing anything harmful or unethical beyond the intentions of its developers. The tests are an initial step in exploring ways to ensure that the technology does not behave in unintended ways.
For now, OpenAI's technique is more a proof of concept than a solution.
Still, this kind of test may become more important as AI-based systems grow more complex and inscrutable and are put in charge of tasks humans can't handle.