In 1952, a computer program mastered tic-tac-toe. More than 40 years later, in 1994, a program became the checkers world champion. IBM’s Deep Blue beat chess Grandmaster Garry Kasparov in 1997 and today’s algorithms can easily defeat the world’s leading players.
But the game of Go has long held out against the machines. The number of possible positions in Go, which is usually played on a 19-by-19 board with 361 intersections, is vastly larger than in chess, and that makes the brute-force “tree search” approach used by chess programs ineffective for Go.
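To get a feel for the gap, here is a rough back-of-envelope comparison using commonly cited estimates: roughly 35 legal moves per chess position over games of about 80 plies, versus roughly 250 moves per Go position over about 150 plies. The figures are approximations chosen for illustration, not exact counts.

```python
# Back-of-envelope game-tree sizes (rough, commonly cited estimates:
# chess ~35 moves per position over ~80 plies, Go ~250 moves over ~150 plies).
chess_tree = 35 ** 80
go_tree = 250 ** 150

print(f"Chess game tree ~ 10^{len(str(chess_tree)) - 1}")  # roughly 10^123
print(f"Go game tree    ~ 10^{len(str(go_tree)) - 1}")     # roughly 10^359
```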
For years, the best Go programs could beat only skilled amateurs, which is why the defeat of Fan Hui, the three-time European Go champion, by Google DeepMind’s AlphaGo program has been called a breakthrough.
The program didn’t defeat Fan Hui by merely performing more calculations than its predecessors. It combined a search over possible future positions, in the spirit of programs like Deep Blue (though AlphaGo uses Monte Carlo tree search rather than brute-force lookahead), with deep neural networks, run across many processors, that suggest promising moves and judge how good a position is. Those networks were first trained on millions of positions from games played by strong human players, then refined by having the program play against itself.
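The sketch below is a minimal, illustrative toy of that combination: a tree search whose move selection is guided by a “policy” prior and whose leaf positions are scored by a “value” estimate. The tiny counting game, the uniform prior, and the random-playout value function are all stand-ins invented for the example; in AlphaGo both roles are played by deep neural networks and the search runs over Go positions.

```python
# Toy illustration (not AlphaGo's actual code) of tree search guided by a
# policy prior and a value estimate. The game, the uniform prior, and the
# random-playout "value" function are stand-ins; in AlphaGo both roles are
# played by deep neural networks trained on human games and on self-play.
import math
import random

WIN_TOTAL = 10  # toy game: players alternately add 1 or 2; reaching 10 wins


def legal_moves(total):
    return [m for m in (1, 2) if total + m <= WIN_TOTAL]


def policy_prior(total, moves):
    # Stand-in for a policy network: uniform probability over legal moves.
    return {m: 1.0 / len(moves) for m in moves}


def value_estimate(total, player_to_move):
    # Stand-in for a value network: score the position with one random
    # playout, +1 if `player_to_move` ends up winning, -1 otherwise.
    player = player_to_move
    while True:
        total += random.choice(legal_moves(total))
        if total == WIN_TOTAL:
            return 1.0 if player == player_to_move else -1.0
        player = -player


class Node:
    def __init__(self, total, player, prior):
        self.total = total      # game state at this node
        self.player = player    # player to move here (+1 or -1)
        self.prior = prior      # prior probability assigned by the policy
        self.children = {}      # move -> Node
        self.visits = 0
        self.value_sum = 0.0    # accumulated value from this player's view

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0


def select_child(node, c_puct=1.4):
    # PUCT-style selection: prefer moves that look good for the player to
    # move, plus high-prior, rarely visited moves (exploration guided by
    # the policy prior).
    def score(child):
        explore = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return -child.value() + explore  # child's value is the opponent's view
    return max(node.children.items(), key=lambda kv: score(kv[1]))


def search(root_total, player, simulations=500):
    root = Node(root_total, player, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        while node.children:                      # 1. selection
            _, node = select_child(node)
            path.append(node)
        moves = legal_moves(node.total)
        if moves:                                 # 2. expansion + evaluation
            priors = policy_prior(node.total, moves)
            for m in moves:
                node.children[m] = Node(node.total + m, -node.player, priors[m])
            leaf_value = value_estimate(node.total, node.player)
        else:                                     # terminal: previous player won
            leaf_value = -1.0
        for n in reversed(path):                  # 3. backup, flipping sign per ply
            sign = 1.0 if n.player == node.player else -1.0
            n.value_sum += sign * leaf_value
            n.visits += 1
    # Play the most-visited move at the root, as AlphaGo does after its search.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]


if __name__ == "__main__":
    print("Suggested first move:", search(0, player=+1))
```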
https://www.youtube.com/watch?v=SUbqykXVx0A