Checkmate: Artificial Intelligence’s Game Playing Challenge

It’s been a while since I enrolled myself for Udacity’s Nanodegree on Artificial Intelligence (which I genuinely rate above all the online learning experiences I have had). Amidst studying about ‘game playing agents’ during the coursework, one of the assignments was to summarize a research paper, for which I read about one of the most crucial breakthroughs in the history of Artificial Intelligence, Deep Blue.

Deep Blue was a chess-playing computer developed by IBM. It is known for being the first computing machine to have won a chess match against a reigning world champion under regular time controls.

When IBM’s Deep Blue beat chess Grandmaster Garry Kasparov in 1997 in a six-game chess match, Kasparov came to believe that he was facing a machine that could experience human intuition.

“The machine refused to move to a position that had a decisive short-term advantage… It was showing a very human sense of danger.” – Garry Kasparov

To Kasparov, Deep Blue seemed to be experiencing the game rather than just crunching the numbers. Might Kasparov have detected a hint of analogical thinking in Deep Blue’s play and mistaken it for human intervention?

“Chess is beautiful enough to waste your life for” – Hans Ree, Dutch Grandmaster.

The oft-quoted adage of Hans Ree most succinctly describes the human obsession with the ancient game of kings. For centuries, the act of playing chess has been upheld as the very paragon of intellectual activity. It is the game’s reputation as both a strategically deep system and a thinking man’s activity that originally made the idea of a mechanized chess player an intriguing notion. For much of modern history, chess playing was seen as a “litmus test” of the ability of computers to act intelligently.


In 1770, the Hungarian inventor Wolfgang von Kempelen unveiled ‘The Turk,’ a fake chess-playing machine. Although the machine actually worked by hiding a human chess player inside it, who played the machine’s moves, audiences around the world were fascinated by the idea of a machine that could perform intelligent tasks at the same level as humans.

With the advent of computers in the 1940s, researchers and hobbyists began to make the first serious attempts at creating an intelligent chess-playing machine. In 1950, Claude Shannon published a groundbreaking paper entitled “Programming a Computer for Playing Chess” which served as an inspiration to generations of chess programmers. It was about this time in the United States that the growing field of Artificial Intelligence looked ready to burst forth with revolutionary thinking machines. Such was the enthusiasm of the time that Herbert Simon and Allen Newell suggested in 1958 that “Within ten years, a digital computer will be the world’s chess champion unless the rules bar it from the competition.”

In this great era of exploration, researchers were looking for model problems with which to develop new AI techniques that, they hoped, would eventually lead to a computational model of conscious decision making – the true artificial (general) intelligence the world is yet to witness. Shannon’s paper made a case for the use of chess as a model system for this purpose.

“The investigation of a chess-playing program is intended to develop techniques that can be used for more practical applications. …The chess machine is an ideal one to start with for several reasons. The problem is sharply defined, both in the allowed operations and the ultimate goal. It is neither so simple as to be trivial nor too difficult for satisfactory solution. And such a machine could be pitted against a human opponent, giving a clear measure of the machine’s ability in this kind of reasoning.” – Claude Shannon (Source)

As was the case with many subfields of Artificial Intelligence at this time, progress in the development of chess-playing hardware lagged behind the theoretical frameworks developed in the 60s and 70s. The public was doubtful that a machine would ever be able to defeat a competent human chess player. In the mid-1990s, however, the tide began to turn. When Deep Thought became the first computer to beat a grandmaster in a tournament game, IBM realized that this was a way to showcase the advances in technology. Despite the lingering skepticism of the chess community, chess-playing computers began to beat extremely proficient players in exhibition matches, the most notable victory being that of IBM’s Deep Blue against world champion Garry Kasparov in an official competition under tournament regulations. Deep Blue became the first computer to defeat a world chess champion in match play.

It is almost 20 years since then, and computers have firmly cemented their lead over humans, while Garry Kasparov is still considered to be one of the greatest to have graced the game. Since that landmark victory, chess-playing programs have built upon Deep Blue’s developments to become even more skilled and efficient. Today, one can run chess programs far more advanced than Deep Blue on a standard desktop or laptop computer. It is fascinating to note how perceptions have changed over time: nowadays, people regard computers as effectively superhuman at chess, and no one realistically expects a human to beat a computer at the game.

Understanding Deep Blue

Many stories and controversies surround the 1997 match, from claims that Kasparov was cheated to accusations that IBM behaved unethically. I do not wish to cover those in this article; having already stated the revolutionary impact made by Deep Blue, I want to focus on the science behind the development of the system. The whole event can be seen as a classic plotline of ‘man vs. machine.’ Behind the contest, however, was important computer science, pushing forward the ability of computers to handle complex calculations and helping lay the groundwork for applications like medical drug discovery, financial modeling, large database searches, and the massive computations needed in many fields of science.

After rallying to beat Deep Blue in their first encounter (1996), winning three games and drawing two after his initial loss, Kasparov wasn’t ready to give up on the human race. He later explained, in an essay for TIME, that Deep Blue flummoxed him in the first game by making a move with no immediate material advantage: nudging a pawn into a position where it could be easily captured.

“It was a wonderful and an extremely human move. …I had played a lot of computers but had never experienced anything like this. I could feel – I could smell – a new kind of intelligence across the table.” – Garry Kasparov

This apparent humanness threw him for a loop, only for him to later discover the truth: Deep Blue’s calculation speed was so advanced that, unlike the other computers Kasparov had battled before, this one could see the material advantage of giving up a pawn even if that advantage came many moves later.

In this section, I’ll try my best to explain a few concepts that Deep Blue incorporated, along with a general summary of the seminal research paper by the IBM Watson Research Center team. To get the most out of this section, it is essential to understand how computers play turn-based strategy games (ranging from tic-tac-toe to chess). This video impeccably summarizes the general game-playing concept.

In terms of game theory, chess is a game of perfect information: everything about the state of the board at any instant is known to both players. If one player makes a brilliant move, it is because the opponent didn’t foresee it in time to counter it. From any position in the game, there are a finite number of moves that any player can make, and a finite number of moves his opponent may make in return. Thus, from the starting position, there is a widely branching game tree that represents all possible moves and counter-moves that can be played. The approach that nearly all chess programs take to find the move to make in any specific position is to search this tree of moves and countermoves while applying various heuristics to evaluate each position. They then choose the first in a sequence of moves that results in the best position for the program (assuming that both sides play perfectly according to these heuristics).

To clarify, a game tree is a directed graph whose nodes represent the positions in a game and whose edges represent the moves. The complete game tree for a game is the game tree starting at the initial position and containing all possible moves from each position.
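As a concrete illustration, here is a complete game tree built recursively for a toy game (a single-pile Nim where each player removes one or two stones) rather than chess, so the tree stays small. The game choice and representation are mine, purely for illustration:

```python
# Build the complete game tree for single-pile Nim: nodes are positions
# (stones remaining), edges are moves (take 1 or 2 stones).
def build_tree(stones):
    """Return a nested dict: each key is a move, each value the subtree
    rooted at the resulting position. An empty dict is a terminal node."""
    if stones == 0:
        return {}  # terminal: the previous player took the last stone
    return {take: build_tree(stones - take)
            for take in (1, 2) if take <= stones}

def count_nodes(tree):
    """Count positions in the tree, including the root."""
    return 1 + sum(count_nodes(sub) for sub in tree.values())
```

Even this tiny game branches quickly: `count_nodes(build_tree(3))` already yields 7 positions, and the count grows roughly like the Fibonacci numbers as the pile grows. For chess, with its far larger branching factor, the same construction is hopeless, which is exactly why search must be cut off and guided by heuristics.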

How many nodes will the tree have? There is a popular fact that the number of possible games of chess is greater than the number of atoms in the observable universe. Claude Shannon, in his paper, came up with an estimate for this number of around 10-to-the-power-120. Which is… well, massive! If we compare this to the number of atoms in the observable universe, which is about 10-to-the-power-80, we could assign an astronomical number of games of chess to each atom in the universe. How did he come up with such a huge number? Well, on average, at any position, there are about thirty moves that can be made. In a game of around 40 moves per player, that is 80 half-moves in total, which gives somewhere around 30-to-the-power-80, roughly 10-to-the-power-120. He did this rough estimate to show that a computer trying to work out the entire future of the game would take millions of years to make a single move. More on this can be learned in this fantastic video from one of my favorite YouTube channels, Numberphile.
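Shannon's back-of-the-envelope arithmetic is easy to reproduce. The numbers below (30 legal moves per position, 40 moves per player) are his rough averages, not exact figures:

```python
import math

# Shannon's estimate: ~30 legal moves per position, ~40 full moves per
# game, i.e. 80 half-moves, giving roughly 30**80 possible games.
branching_factor = 30
half_moves = 80  # 40 moves by White plus 40 by Black

# Work in log space so the number stays printable.
exponent = half_moves * math.log10(branching_factor)
print(f"about 10^{exponent:.0f} possible games")
```

This prints an exponent of about 118, which is where the commonly quoted "10 to the 120" order of magnitude comes from.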

Any chess program, at its heart, reduces to a tree search problem. The game tree for a game like tic-tac-toe is easily searchable, but the complete game trees for larger games like chess are much too large for feasible search. The challenge is to provide the right intelligence to the computer, since we cannot look all the way forward to the end of the game and see if a particular move will win. We must create a function which takes in a state of the game and boils it down to a real-number evaluation of that state. This heuristic calculating tool is called the evaluation function. For example, the function could give higher scores to board states in which the player of interest has more pieces than the opponent.
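A minimal sketch of such a material-counting evaluation function might look like the following. The board representation (a flat list of piece letters, uppercase for the player of interest) and the standard piece values are my illustrative choices, not anything from the Deep Blue paper:

```python
# Conventional material values; the king is omitted since both sides
# always have exactly one.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def evaluate(pieces):
    """Boil a board state (here just a list of piece letters) down to one
    number: positive favors the player of interest (uppercase pieces),
    negative favors the opponent (lowercase pieces)."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score
```

So a position where we have a queen against two enemy pawns scores +7, while being down a rook for a knight scores -2. Real evaluation functions, Deep Blue's included, add many positional terms on top of raw material.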

Brute force was an important part of computer chess in the 80s. The faster the computer, the better it was, so researchers at that point wanted as much speed as possible. It was expected that computers would make foolish moves and would not understand strategy; they did not have the knowledge, and they did not have a style of playing.

The first working chess programs appeared in the late 1950s. The very first of these were developed at Los Alamos by Paul Stein and Stanislaw Ulam as a way to test the new mainframe, MANIAC I, that the lab had just received. They played a pared-down version of the game on a 6×6 board. The MANIAC ran at 11,000 instructions per second (making it several million times slower than a typical desktop machine today), and its level of play was below satisfactory. There was clearly a dire need for a strategy. The rough idea that emerged is that a chess-playing program searches only a partial game tree: it searches to depth one, then depth two, and so on, as deep as it can in the time available; an idea referred to as iterative deepening.
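Iterative deepening can be sketched in a few lines. Here `search` is just a placeholder standing in for a real depth-limited search; the function names and the depth cap are my own illustrative choices:

```python
import time

def search(position, depth):
    """Stand-in for a real depth-limited minimax search; it only reports
    the depth it was asked to reach."""
    return ("best-move-so-far", depth)

def iterative_deepening(position, budget_seconds, max_depth=6):
    """Search at depth 1, then 2, then 3, ..., keeping the latest fully
    completed result, until time runs out or max_depth is reached."""
    deadline = time.monotonic() + budget_seconds
    best = None
    for depth in range(1, max_depth + 1):
        best = search(position, depth)
        if time.monotonic() >= deadline:
            break  # out of time: return the deepest completed result
    return best
```

The appeal is that the program always has a complete answer from the last finished depth, so it can be stopped at any moment, which is exactly what a timed chess game demands.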

Shannon’s paper first put forth the idea of a function for evaluating the efficacy of a particular move, and a ‘minimax’ algorithm which used this evaluation function while taking into account the effectiveness of the future moves that any particular move would make available. This work provided the framework for all future research in computer chess. The idea behind minimax is that you choose the move that maximizes your worst-case outcome, assuming your opponent always replies with the move that is worst for you. Therefore, a chess AI is not seeing forward in time to a win; it is seeing forward to a certain distance to judge whether a position is a good one to be in.
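The algorithm fits in a few lines. To keep it self-contained, this sketch plays a toy Nim game (take one or two stones from a pile; whoever takes the last stone wins) instead of chess, with +1 meaning the maximizing player wins under best play:

```python
def minimax(stones, maximizing):
    """Exhaustive minimax for single-pile Nim (take 1 or 2 stones;
    taking the last stone wins). Returns +1 if the maximizing player
    wins with best play from this position, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)
```

In this game, piles whose size is a multiple of three are lost for the player to move, and minimax discovers that on its own: `minimax(3, True)` returns -1 while `minimax(4, True)` returns +1. In chess, the recursion is cut off at a fixed depth and the evaluation function stands in for the true win/loss outcome.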

A popular optimization of minimax is known as alpha-beta pruning, wherein any move is eliminated once another move has been discovered that is guaranteed to do better. For example, in a given tree we do not need to explore any of the remaining paths under a node if we’ve already found a move we know will perform better.
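A self-contained sketch of alpha-beta pruning, using a toy Nim game (take one or two stones; whoever takes the last stone wins) so the position fits in a single integer. The game and the score convention (+1 means the side to move at the root wins) are illustrative choices of mine:

```python
import math

def alphabeta(stones, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning for single-pile Nim. alpha is the
    best score the maximizer can already guarantee, beta the best the
    minimizer can guarantee; once alpha >= beta, the remaining moves at
    this node cannot change the result and are skipped."""
    if stones == 0:
        return -1 if maximizing else 1  # side to move has already lost
    best = -math.inf if maximizing else math.inf
    for take in (1, 2):
        if take > stones:
            break
        score = alphabeta(stones - take, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:
            break  # prune: the opponent will never allow this line
    return best
```

Pruning never changes the value returned, only the number of nodes visited, so this agrees move-for-move with plain minimax while doing less work; with good move ordering the savings in a chess tree are enormous.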

Check out the World Science Festival’s interesting discussion with computer scientist Murray Campbell and grandmaster Joel Benjamin, two key members of IBM’s Deep Blue team. It gives an interesting insight into the thought process behind the development of Deep Blue.

Now that we understand the core concepts behind a chess engine (for a more detailed overview, I highly recommend going through these slides), it is important to understand what Deep Blue did that its predecessors could not. Although it inherited many features from its predecessors, several improvements were made to make Deep Blue competitive. Apart from the algorithmic strategies mentioned earlier, the true strength of Deep Blue was its architecture, as I describe in the subsequent paragraphs.

In comparison to Deep Blue I (the computer that lost to Garry Kasparov in 1996), Deep Blue II incorporated a single-chip chess search engine with a more complex evaluation function (with an increased number of features) that was nearly 1.5 times faster than its predecessor. It was a massively parallel system, with around 500 processors available to participate in the game tree searches. The system included significant search extensions with non-uniform searches so that it could search to a reasonable minimum depth in the game tree. The gist of how it worked is as follows: a master chip searched the top levels of the game tree and then distributed “leaf” positions to worker nodes, which performed a few levels of additional search and then distributed their own leaf positions to the chess chips, which searched the last few levels of the tree.

The search mechanism of Deep Blue was a hybrid software/hardware search. The software search was flexible and could change as needed, whereas the hardware search, while inflexible, was faster. To maintain a balance between the speed of the hardware search and the efficiency/complexity of the software search, the chips only carried out shallow searches.

The chess chip was divided into three parts: the move generator, the evaluation function, and the search control. The move generator, an 8×8 array of combinatorial logic (representing a chessboard), was controlled via a hardwired finite state machine and computed all possible moves. In order to generate moves with minimum latency, with the first move generated being as close to the best as possible (to make the search process efficient), the evaluation function of Deep Blue was composed of a “slow evaluation” and a “fast evaluation.” The features recognized in both evaluation functions had programmable weights for easy adjustment of their relative importance, and the overall evaluation was the sum of the feature values. The search control monitored the quality of the search by ensuring ‘progress,’ with the inclusion of components like a repetition detector.

The evaluation function of a game-playing agent is one of the most crucial aspects of its performance. The paper goes into further detail on the features that constituted Deep Blue’s very complex evaluation function, describing the heuristics used to score a particular game state. The game agent used ideas such as quiescence search, iterative deepening, transposition tables, and NegaScout (which essentially follow the same generic tree search techniques discussed earlier). Deep Blue also had two important databases: an opening book, chosen to emphasize positions that Deep Blue played well, and a large extended book that allowed a large database of grandmaster games to influence Deep Blue’s play, particularly during endgames.
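The "weighted sum of feature values" idea can be sketched very simply. The feature names and weights below are invented for illustration; the real Deep Blue chips recognized thousands of features, evaluated in hardware:

```python
# Hypothetical programmable weights: adjusting these changes the relative
# importance of each feature without changing the evaluation machinery.
FEATURE_WEIGHTS = {
    'material': 1.0,
    'mobility': 0.1,
    'king_safety': 0.5,
}

def evaluate(features):
    """Overall evaluation = sum of feature values scaled by their
    programmable weights. `features` maps feature name -> raw value
    measured for the current position."""
    return sum(FEATURE_WEIGHTS[name] * value
               for name, value in features.items())
```

A position with a material balance of +3 and a mobility count of +10 would then score 3.0 + 1.0 = 4.0. The fast/slow split described above fits naturally into this scheme: a fast evaluation can sum only the cheapest features, with the slow evaluation adding the more expensive positional terms when the extra precision matters.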


The authors conclude their research by listing areas for additional improvement that were left unexplored. This seminal paper is a great inspiration for understanding the many possibilities available when creating a compelling game agent.

“In the course of the development of Deep Blue, there were many design decisions that had to be made. We made particular choices, but there were many alternatives that were left unexplored.”

As computers began to clearly outclass human chess players, there was little point in continuing to pit them against each other. As a result, there are now computer-only chess leagues, where the top chess programs play against each other. The CCRL is probably the most detailed/involved of such leagues, but there are also the IPON and CEGT. As far as crowning a winner goes, the Thoresen Chess Engines Competition (TCEC) is regarded by some as the de facto computer chess championship.

A fascinating article on how a 2006 chess engine running on an i7 was beaten by a 2014 engine running on 50-times-slower hardware proves how much the software has advanced. If you could argue in the early days that computers were just doing dumb brute-force searching, you absolutely cannot claim that now!

If you’d like to celebrate a day when the computers of science-fiction became modern-day reality, however, check out the video below, or visit IBM’s Deep Blue Tumblr.

The success of Deep Blue has had a significant impact on the culture of science & technology. From today’s more efficient chess software to a computer that can play ‘Jeopardy!’, Deep Blue has influenced the way we play. Deep Blue reset expectations for what was possible with a computer, setting the stage for IBM’s Jeopardy champion Watson and its new goal to simulate the entire human brain.
