Google's AI beats human Go champion

23/05/2017

Google has announced that its DeepMind AI defeated a human champion Go player. The company's team has shown that the skills needed to master Go are not so uniquely human after all: DeepMind's program beat the European Go champion, Fan Hui, five games to zero.

For Google, an AI capable of winning at Go marks a new threshold for machine intelligence.

Go is a board game developed in ancient China more than 2,500 years ago. The aim is to surround more territory than the opponent. Defeating a human player is considered a significant accomplishment because Go is such a complex game.

"Go is the most complex and beautiful game ever devised by humans," said Demis Hassabis, head of the Google team, describing himself an avid Go player. By beating Fan Hui, he added, "our program achieved one of the long-standing grand challenges of AI."

DeepMind announced the accomplishment in a paper published in the research journal Nature. In the paper and an accompanying Google blog post, the researchers revealed how the system was constructed and how it succeeded where decades of previous Go programs had failed.

Google also explained how it trained the machine. The program, called AlphaGo, taught itself to win using the advanced AI techniques that DeepMind is known for. Before DeepMind was acquired by Google, the company released two papers demonstrating its algorithms teaching themselves to beat classic Atari games with remarkable speed.

Google isn't the only company using deep learning to develop a Go-playing AI. Facebook has previously said that it has a researcher working on such a system, which Mark Zuckerberg touted in a Facebook post just days before Google's announcement.

Further reading: AlphaGo Zero Learns The Game 'Go' From Scratch And Defeated The Original AlphaGo