
Google’s AI will challenge the best Go player in the world

By Liu Hongzuo - on 28 Jan 2016, 11:30am


After defeating European Go champion Fan Hui in five games in a closed-door showdown last October, Google’s AlphaGo artificial intelligence will take on Lee Se-dol, the world’s top Go player of the past decade. The Fan Hui match was the first time a computer program had ever beaten a professional Go player.

Go is a two-player board game that originated in Ancient China some 2,500 years ago, and it goes by different names in different cultures: China knows it as weiqi, Korea calls it baduk, and the Japanese refer to it as Go (or Igo). The names literally mean “encircling chess”, which points to the board game’s objective: players place stones to surround more territory on a 19 x 19 grid than their opponent, while denying the opponent the chance to do the same. Ancient Chinese scholars were expected to be proficient at Go, as it was regarded as one of the four essential arts of the scholar-gentleman.

According to Google, designing an AI that can defeat professional Go players is far more complex than creating one for checkers or chess. Those games are handled well by traditional AI methods, which involve constructing a search tree to evaluate possible moves and positions. Google said that existing Go AI can only play as well as amateurs, and that the game is far too complex for traditional programming:

But as simple as the rules are, Go is a game of profound complexity. There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions—that’s more than the number of atoms in the universe, and more than a googol times larger than chess. – Google Blog
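The traditional search-tree approach Google refers to can be sketched as an exhaustive minimax (negamax) search over a toy game. This is a hypothetical illustration of the general technique, not anything from AlphaGo; it shows why the method breaks down for Go, since the tree grows exponentially with the number of legal moves per position:

```python
def best_score(stones):
    """Exhaustive game-tree search for a toy take-away game: players
    alternately remove 1 or 2 stones, and taking the last stone wins.
    Returns +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        # No move available: the previous player took the last stone and won.
        return -1
    # Try every legal move; the opponent's best score, negated, is ours.
    return max(-best_score(stones - take) for take in (1, 2) if take <= stones)

# Positions that are multiples of 3 are losses for the player to move.
print(best_score(3))  # -1: a forced loss
print(best_score(4))  # +1: a forced win
```

With only two moves per turn this search stays tiny, but at Go's roughly 250 legal moves per position the same tree becomes astronomically large, which is the point of the figure quoted above.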

Google took a different approach to create AlphaGo – an advanced search tree combined with deep neural networks. These neural networks read the board and process it through 12 different layers with neuron-like connections. The layers include a “policy network”, which selects the next move to play, and a “value network”, which predicts the winner of the game. AlphaGo was made by Google DeepMind, a British AI company that Google acquired in 2014.
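The roles of the two networks can be sketched with toy stand-ins. The single random weight matrices below are illustrative assumptions only – AlphaGo's real networks are deep 12-layer convolutional nets – but the interfaces match the description above: the policy head outputs a probability for each of the 361 intersections, and the value head outputs a single win estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
BOARD = 19 * 19  # 361 intersections on a Go board

# Toy stand-ins for the two network heads (hypothetical weights).
W_policy = rng.normal(size=(BOARD, BOARD))
W_value = rng.normal(size=(BOARD, 1))

def policy_network(board):
    """Map a flattened board to a probability distribution over 361 moves."""
    logits = board @ W_policy
    exp = np.exp(logits - logits.max())  # softmax, stabilised
    return exp / exp.sum()

def value_network(board):
    """Map a board to a scalar win-probability estimate in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(board @ W_value)[0]))

# Encode a random position: -1 white stone, 0 empty, +1 black stone.
board = rng.choice([-1.0, 0.0, 1.0], size=BOARD)
probs = policy_network(board)   # one probability per intersection
value = value_network(board)    # single win estimate
```

In the real system, the search tree uses the policy output to decide which branches are worth exploring and the value output to judge positions without searching to the end of the game.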

With computing power from the Google Cloud Platform, Google trained the AI’s neural networks on 30 million moves from games played by human experts. The AI is also capable of machine learning: it went on to play thousands of games between its neural networks, adjusting its neuron-like connections by trial and error to discover which moves yield the best rewards.
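The trial-and-error loop described above can be illustrated with a minimal toy: an agent repeatedly tries moves, observes rewards, and nudges its value estimates toward what it experienced. The three moves and their hidden payoffs are invented for the example, and this is a simple bandit-style sketch, not AlphaGo's actual reinforcement-learning setup:

```python
import random

random.seed(1)

# Hypothetical hidden payoffs: the agent does not know these.
true_reward = {"a": 0.2, "b": 0.8, "c": 0.5}
estimate = {move: 0.0 for move in true_reward}
counts = {move: 0 for move in true_reward}

for step in range(2000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        move = random.choice(list(true_reward))
    else:
        move = max(estimate, key=estimate.get)
    # Observe a win or loss with the move's hidden probability.
    reward = 1.0 if random.random() < true_reward[move] else 0.0
    counts[move] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimate[move] += (reward - estimate[move]) / counts[move]

best = max(estimate, key=estimate.get)
print(best)  # the agent discovers the highest-payoff move
```

AlphaGo's training works on the same principle at vastly larger scale: the reward is winning a self-played game, and the "estimates" being adjusted are millions of neuron-like connection weights.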

The upcoming five-game challenge against Lee Se-dol will take place in Seoul in March 2016, with a livestream on DeepMind's YouTube channel. Google DeepMind hopes to eventually use advanced AI like AlphaGo to solve real-world problems; for now, the next Go showdown is its priority.

Source: Google via Ars Technica
