New Computer Learns How to Play Expert-Level Chess in Just 72 Hours

A deep learning artificial intelligence algorithm figured out how to play chess better than most humans in just three days.
Chess World Champion Vladimir Kramnik (Russia) concentrates on the podium during his match against German chess computer Deep Fritz on November 29, 2006 in Bonn, Germany. (Juergen Schwarz/Bongarts/Getty Images)
Jonathan Zhou
9/15/2015 | Updated: 9/15/2015

It’s been nearly two decades since computers first became able to beat the best human chess players. Since then, the top algorithms have only improved, such that the leading chess players in the world get walloped in a head-to-head match with a machine, and can barely scrape by even when the machine is given a severe handicap.

Still, the leading chess programs don’t play chess the way humans do. They rely on brute-force calculation, evaluating hundreds of millions of positions per second when deciding which move to make, rather than on intuition or wisdom.
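As a rough illustration of that brute-force idea (a minimal sketch, not any real engine’s code), a conventional search recursively scores every position reachable to some depth and picks the line with the best outcome:

```python
# Illustrative sketch (not any real engine's code): the brute-force idea
# behind conventional chess programs is a recursive search that scores
# every reachable position to some depth and picks the best line.
def minimax(position, depth, maximizing, moves, evaluate):
    """moves(position) lists (move, resulting position) pairs;
    evaluate(position) gives a numeric score; both are stand-ins here."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)
    scores = [minimax(child, depth - 1, not maximizing, moves, evaluate)
              for _, child in children]
    return max(scores) if maximizing else min(scores)

# Toy game tree: real engines run this over hundreds of millions of chess
# positions per second, with far more sophisticated pruning on top.
tree = {"root": [("a", "A"), ("b", "B")], "A": [("c", "A1")], "B": []}
values = {"A1": 3, "B": 5}
best = minimax("root", 2, True,
               lambda p: tree.get(p, []),
               lambda p: values.get(p, 0))
print(best)  # 5
```

Real engines add pruning, move ordering, and carefully tuned evaluation functions on top of this skeleton, which is where much of their strength comes from.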

This has led some observers to say that chess programs aren’t really playing chess at all.

Human chess players can often assess a situation at a glance by drawing on an accumulated knowledge of general principles and their exceptions, but that kind of knowledge has been difficult to encode into computers, which usually have to rely on explicit, universal rules.

Now, a group of researchers at Imperial College London has built an algorithm that can learn how to play chess by itself instead of relying on hard-coded rules. As a result, it played more like a human and required far less computational processing to reach the same level of skill as other chess programs.

The algorithm was, unsurprisingly, a deep learning one. Deep learning algorithms have been a mainstay of artificial intelligence (AI) innovation over the past decade, driving profound advances in image and voice recognition at companies like Google and Facebook, and have even been used for novelties like learning to paint in the style of old and new masters.

What differentiates deep learning algorithms from conventional programs is that the former learn how to behave by training on a data set instead of simply executing a set of rules. For example, training an algorithm to distinguish between a car seat and a steering wheel involves feeding it thousands of labeled pictures of seats and wheels.
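To make that concrete, here is a minimal sketch of supervised training (illustrative only, using PyTorch and random stand-in data rather than real photographs):

```python
# Illustrative sketch (not the researchers' code): a tiny supervised
# image classifier trained on labeled examples.
import torch
import torch.nn as nn

# Stand-in data: 64 "pictures" of 32x32 grayscale pixels, each labeled
# 0 (car seat) or 1 (steering wheel). Real training would use thousands
# of genuine labeled photographs.
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,))

# A small convolutional network that maps a picture to two class scores.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The "learning" is just repeated adjustment of the network's weights
# so that its predictions match the labels.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```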

Chess is a little trickier. Feeding the algorithm records of past matches wouldn’t work, for the same reason a chess player can’t become great just by watching others play: there’s no understanding of the reasoning behind each decision. Instead, the program, called Giraffe, plays against itself and is assigned other chess programs as tutors that give it “feedback” at regular intervals throughout the game, allowing Giraffe to improve over time.
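The paper spells out the actual training procedure; purely as an illustration of the idea described above, here is a hypothetical sketch in which a small evaluation network is nudged toward the judgments of a stand-in “tutor” on positions reached during self-play (the names, sizes, and tutor function are all invented for the example):

```python
# Hypothetical sketch (not Giraffe's actual code): an evaluation network
# trained from "tutor" feedback gathered during self-play, rather than
# from a database of recorded games.
import torch
import torch.nn as nn

FEATURES = 100  # stand-in size for a numeric description of a position

# The network being trained: it maps a position's features to a score.
student = nn.Sequential(
    nn.Linear(FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Tanh(),
)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def tutor_score(positions):
    # Stand-in for feedback from an established engine; here it is just
    # a fixed linear rule so the example runs on its own.
    weights = torch.linspace(-1, 1, FEATURES)
    return torch.tanh(positions @ weights).unsqueeze(-1)

for game in range(100):
    # Stand-in for positions reached while the program plays itself.
    positions = torch.randn(32, FEATURES)
    # At intervals, the tutor scores those positions and the student is
    # nudged toward the tutor's judgment.
    target = tutor_score(positions)
    loss = nn.functional.mse_loss(student(positions), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```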

After 72 hours of training, Giraffe was able to play chess as well as an international master, which puts it among the top 2.2 percent of human chess players with an official chess rating worldwide. What makes Giraffe a noteworthy achievement is not the level of play, but the ease and speed with which it reached that level, along with the minimal computational and human resources it consumed.

“Giraffe’s optimized implementation of neural network ... allows it to search at a speed that is less than 1 order of magnitude slower than the best modern chess engines, thus making it quite competitive against many chess engines in gameplay without need for time handicap,” the researchers wrote in their paper.

Giraffe’s understanding of strategy was tested by running it through the Strategic Test Suite, which gauges how well a program handles 15 different strategic themes across 1,500 positions. Giraffe scored comparably to some of the leading chess programs, a stunning feat given that the other programs “are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” states the paper.
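Test suites of this kind are typically scored by awarding full credit when the engine picks a position’s best move and partial credit for strong alternatives. As a rough sketch of that procedure (the positions, moves, and point values below are made up):

```python
# Illustrative sketch: scoring an engine on a test suite where each
# position awards full credit for the best move and partial credit for
# reasonable alternatives. All positions and values here are invented.
def score_suite(positions, choose_move):
    """positions: list of (position, {move: points}); choose_move is the
    engine under test, returning its preferred move for a position."""
    total = 0
    for position, points_per_move in positions:
        move = choose_move(position)
        total += points_per_move.get(move, 0)
    return total

# Toy usage with made-up positions and a trivial "engine".
suite = [
    ("pos-1", {"Nf3": 10, "d4": 7, "c4": 3}),
    ("pos-2", {"Rxe5": 10, "Qd2": 4}),
]
print(score_suite(suite, lambda pos: "Nf3" if pos == "pos-1" else "Qd2"))  # 14
```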

The researchers did have grandmasters play against early versions of Giraffe to see where the program could be improved, but the program remains free of the manual fine-tuning that most other programs are subject to.

Beyond chess, the researchers said that the basic components of Giraffe could be grafted onto “other zero-sum turn-based board games, and achieve state-of-art performance quickly, especially in games where there has not been decades of intense research into creating a strong AI player.”