Google's Go-playing AI takes on the world champion - here's how to watch live

[Image: Lee Se-dol. Credit: Google DeepMind]

Computers have already bested humans at chess, at checkers, and on the TV game show Jeopardy! Now one will attempt to assert its dominance in the ancient board game of Go.

In January, AlphaGo, an AI program developed by Google's DeepMind group, beat the European champion. This week, it will go head-to-head with South Korean player Lee Se-dol, widely regarded as the world's top Go player of the past decade, in a five-game match in Seoul, South Korea, which will be streamed live on YouTube.

The games will take place March 9, 10, 12, 13, and 15 local time (the evening before in the US). The first game will start at 1 pm local time Wednesday (11 pm EST / 8 pm PST Tuesday).

AlphaGo is currently the favorite to beat Lee, but it is expected to be a close matchup. If AlphaGo wins, it will cement its place in AI history.

Building a champion Go-playing AI

[Image credit: Google DeepMind]

Go is a two-player board game invented in China some 2,500 years ago, and it is probably one of the most complex games ever created. According to Google's official blog, there are more possible board positions in Go than there are atoms in the universe.

The board is a grid of intersecting lines, typically 19 x 19. Players take turns placing black or white pieces, called stones, on the intersections, trying to surround territory and capture the opponent's stones. The goal is to control the largest area of the board by the game's end, which is reached when neither player wishes to make another move.

The AlphaGo program combines two powerful forms of AI:

  • Monte Carlo tree search: Choosing moves at random and simulating the game to the very end, many times over, to estimate which moves lead to wins (a toy sketch of the idea follows this list)
  • Deep neural networks: 12-layer networks of neuron-like connections, made up of a "policy network" that selects the next move and a "value network" that predicts the winner of the game
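
To make the "random playouts" idea concrete, here is a minimal Python sketch of Monte Carlo move selection. It is not AlphaGo's actual search: a full Go engine is far more involved, so the toy game of Nim (remove 1-3 stones; whoever takes the last stone wins) stands in for Go, and the names used here (PILE, MOVES, monte_carlo_move) are invented for this illustration.

```python
# Toy sketch of Monte Carlo move selection via random playouts.
# AlphaGo's real search is guided by neural networks; this example
# only illustrates "play random games to the end and count wins,"
# using Nim as a stand-in for Go.
import random

PILE = 15          # stones in the pile (hypothetical toy setup)
MOVES = (1, 2, 3)  # a player may remove 1, 2, or 3 stones per turn

def random_playout(pile, player):
    """Play random moves until the pile is empty; return the winner."""
    while pile > 0:
        take = random.choice([m for m in MOVES if m <= pile])
        pile -= take
        if pile == 0:
            return player          # taking the last stone wins
        player = 1 - player        # switch sides

def monte_carlo_move(pile, player, playouts=2000):
    """Pick the legal move with the best win rate over random playouts."""
    best_move, best_rate = None, -1.0
    for move in (m for m in MOVES if m <= pile):
        wins = 0
        for _ in range(playouts):
            if pile - move == 0:
                wins += 1                      # immediate win
            elif random_playout(pile - move, 1 - player) == player:
                wins += 1
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move, best_rate

if __name__ == "__main__":
    move, rate = monte_carlo_move(PILE, player=0)
    print(f"Suggested move: take {move} (estimated win rate {rate:.2f})")
```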

AlphaGo was trained on 30 million moves from games played by the best players on the KGS Go game server. It then played thousands of games against itself, improving through reinforcement learning, in which good moves are rewarded (a toy version of that idea is sketched below). This required a huge amount of computing power, which, fortunately, Google has.
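
Here is an equally simplified sketch of the self-play idea, again using the hypothetical Nim toy rather than Go: after each game the program plays against itself, the moves made by the winning side have their weights nudged upward, so "good moves are rewarded." AlphaGo's real training uses deep networks and policy-gradient reinforcement learning; everything named here (policy, self_play_game, the reward constant) is invented for the illustration.

```python
# Toy sketch of learning from self-play: reward the winner's moves.
# This is not AlphaGo's training procedure, just the core intuition.
import random
from collections import defaultdict

PILE = 15
MOVES = (1, 2, 3)

# policy[(pile, move)] -> preference weight, starts uniform
policy = defaultdict(lambda: 1.0)

def pick_move(pile):
    """Sample a legal move with probability proportional to its weight."""
    legal = [m for m in MOVES if m <= pile]
    weights = [policy[(pile, m)] for m in legal]
    return random.choices(legal, weights=weights)[0]

def self_play_game():
    """Play one game against itself; return both sides' moves and the winner."""
    pile, player = PILE, 0
    records = {0: [], 1: []}
    while pile > 0:
        move = pick_move(pile)
        records[player].append((pile, move))
        pile -= move
        if pile == 0:
            return records, player   # the mover who empties the pile wins
        player = 1 - player

def train(games=5000, reward=0.1):
    """After every self-play game, reinforce the moves the winner made."""
    for _ in range(games):
        records, winner = self_play_game()
        for state_move in records[winner]:
            policy[state_move] += reward

if __name__ == "__main__":
    train()
    best = max(MOVES, key=lambda m: policy[(PILE, m)])
    print("Preferred opening move after training: take", best)
```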

AlphaGo beat the European Go champion Fan Hui by five games to none, the DeepMind team reported in a study published in January in the journal Nature. It also beat other Go programs in all but one of 500 games.

Computers have already beaten humans at chess, checkers, and Jeopardy!, and they have been making inroads into games like poker. But a victory at Go was thought to be at least a decade away.


All games will be streamed live with English subtitles on YouTube.
