A Computer Just Beat the World ‘Go’ Champion for the First Time Ever

Jonathan Zhou
3/9/2016
Updated: 3/9/2016

For years, chess was considered not just a game, but a spiritual exercise as well, an activity that tapped into an essential element in the human mind. 

Then, in 1997, reigning world champion Garry Kasparov was defeated by IBM’s Deep Blue computer.

But for years after computers rose far above humans in chess-playing ability, chess’s Asian counterpart, Go, held out. Go grandmasters soundly defeated computers, which couldn’t use “tree-branch” computation to map out all the possible moves on a Go board, which is much bigger, and thus contains far more possible moves, than a chess board.
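
The scale problem is easy to see from the numbers: chess positions offer roughly 35 legal moves on average, while Go positions on a 19-by-19 board offer around 250. The short Python sketch below (the branching factors and depths are illustrative approximations, not figures from this article) shows how quickly the number of positions a brute-force search would have to examine explodes.

# Illustrative comparison of game-tree growth for chess vs. Go.
# The branching factors (~35 legal moves per chess position, ~250 per
# Go position on a 19x19 board) are commonly cited approximations.

def tree_size(branching_factor: int, depth: int) -> int:
    """Leaf positions in a full game tree searched to the given depth."""
    return branching_factor ** depth

for depth in (2, 4, 6, 8):
    print(f"depth {depth}: chess ~{tree_size(35, depth):.2e} positions, "
          f"Go ~{tree_size(250, depth):.2e} positions")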

But earlier this year, Google’s AlphaGo AI, using novel machine learning technology, defeated a Go grandmaster for the first time, and mere months later, it took down the world champion, the legendary South Korean player Lee Se-Dol. 
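
The core of that machine-learning approach, as described in DeepMind’s published work, is to pair deep neural networks with a tree search, so that a learned “policy” suggests a handful of promising moves rather than the program examining every legal one. The Python sketch below is a heavily simplified illustration of that idea, not DeepMind’s implementation; the policy_net function and the move encoding are placeholders.

import random

# Hypothetical stand-in for a trained policy network: given a board
# position, assign a probability to each legal move. In the published
# AlphaGo work this role is played by a deep convolutional network
# trained on expert games; random numbers are enough to show the idea.
def policy_net(position, legal_moves):
    weights = [random.random() for _ in legal_moves]
    total = sum(weights)
    return {move: w / total for move, w in zip(legal_moves, weights)}

def select_candidates(position, legal_moves, k=5):
    """Keep only the k moves the policy rates most promising, shrinking
    the effective branching factor from hundreds to a handful."""
    probs = policy_net(position, legal_moves)
    return sorted(probs, key=probs.get, reverse=True)[:k]

# Toy usage: the 361 intersections of an empty 19x19 board.
legal_moves = [(x, y) for x in range(19) for y in range(19)]
print(select_candidates(position=None, legal_moves=legal_moves, k=5))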

Google’s computer beat the world champion in the first game of a five-game series in Seoul, South Korea, the New Scientist reports. 

“This is history, you saw it folks,” said Chris Garlock, managing editor of the American Go E-Journal and a commentator of the game. 

Lee Se-Dol, a legendary South Korean player of Go - a board game widely played for centuries in East Asia - speaks beside a backdrop of a Go board and its pieces (L) during a press briefing on the Google DeepMind Challenge Match at Korea Baduk Association in Seoul on February 22, 2016. (JUNG YEON-JE/AFP/Getty Images)

Ben Lockhart, a top US amateur Go player, told the New Scientist that he “felt emotional and dizzy, and stepped outside for a minute” after watching the match.

The match was a major event in South Korea, where it was broadcast on TV. It was also broadcast in China and Japan, where the game is also popular. 

The pace at which Google’s computer was learning to play Go shocked the South Korean champion, who had predicted that he would easily defeat it, based on his analysis of its skill level when it defeated the European grandmaster Fan Hui last October (a result announced in January).

“Looking at the match in October, I think [AlphaGo’s] level doesn’t match mine,” Lee told Yonhap News.

One of the more curious implications of AlphaGo is that its creators don’t fully understand how the computer makes its decisions: its neural networks “learn” how to play Go on their own, and that decision-making process is opaque.
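
To see why that opacity is more than a figure of speech, consider what a trained network actually exposes: arrays of learned weights that produce a move preference but nothing resembling a human rationale. The toy Python example below uses a randomly initialized stand-in network purely as an illustration; it is not AlphaGo’s actual architecture.

import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network standing in for a move-evaluation model.
# In a real system these weights would be learned from millions of
# positions; here they are random, which is enough to make the point:
# the model's "reasoning" is nothing but arrays of numbers.
W1 = rng.normal(size=(361, 64))   # 19x19 board, flattened -> hidden layer
W2 = rng.normal(size=(64, 361))   # hidden layer -> score per intersection

def suggest_move(board_vector):
    hidden = np.maximum(0.0, board_vector @ W1)   # ReLU activation
    scores = hidden @ W2
    return int(np.argmax(scores))                 # index of the favored move

board = np.zeros(361)
board[[60, 72, 180]] = 1.0        # a few stones, arbitrarily placed
print("suggested move index:", suggest_move(board))
print("a few of the weights behind it:", W1[0, :5])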

“As the use of deep neural network systems spreads into everyday life—they are already used to analyze and recommend financial transactions—it raises an interesting concept for humans and their relationships with machines. The machine becomes an oracle; its pronouncements have to be believed,” Nature’s editorial board wrote after AlphaGo first defeated the European Go grandmaster.

You can watch Match 2 streamed on YouTube starting later tonight; a record of Match 1 is also available online.