[Humanist] 23.607 why chess for AI

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Jan 29 07:31:08 CET 2010

                 Humanist Discussion Group, Vol. 23, No. 607.
         Centre for Computing in the Humanities, King's College London
                Submit to: humanist at lists.digitalhumanities.org

        Date: Thu, 28 Jan 2010 11:21:02 -0600
        From: amsler at cs.utexas.edu
        Subject: Re: [Humanist] 23.597 why chess for AI
        In-Reply-To: <20100126063420.43BAC49297 at woodward.joyent.us>

Chess has an interesting history in AI. It was more a case of trying  
to explore the limits of reasoning by machine than anything else--or  
at least forcing opponents of the idea that machines could "think" to  
clarify what they meant.

Most famously, there was a gauntlet thrown down by those who said, "A  
computer could never defeat a human at chess"; later refined to "A  
computer could never defeat a grand master at chess". It became a  
demonstration task for the computer folks. However, as with many other  
tasks, the solution revealed that computers and human beings go about  
the work in very different ways.

At first the task was simply to build computer programs that could  
play chess, i.e., make legal moves. By the late 1960s this had become  
a standard assignment in graduate artificial intelligence courses,  
used to teach students game-playing techniques. Once computers played  
"legal" chess there were a variety of approaches to improving their  
game. Some pursued modeling the strategies of human chess experts.  
However, it soon became apparent that there was another strategy  
available through brute force. I.e., the computer could simply look  
ahead to see what the consequences of EVERY move were. And then, the  
consequences of every countermove by its opponent, and the  
consequences of every counter-countermove to those moves.... all the  
way through to a checkmate. Once this strategy was determined to be  
solely limited by the computing capacity of the machine it was  
apparent that a computer could eventually beat human players--all that  
had to be done was to build it. Additional criteria were added to the  
challenge, such as requiring the machine to make its moves in real  
time, in a real chess match.
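The exhaustive look-ahead described above is what came to be called minimax search. A minimal sketch, on a toy take-away game rather than chess (whose full game tree is far too large to show): two players alternately take 1 or 2 sticks, and whoever takes the last stick wins.

```python
def minimax(sticks, maximizing):
    """Return +1 if the maximizing player can force a win, -1 otherwise.

    Exhaustive look-ahead: recurse through every move and every
    countermove until the game ends, then back the result up.
    """
    if sticks == 0:
        # The previous player took the last stick and won, so the
        # player now to move has lost.
        return -1 if maximizing else +1
    moves = [m for m in (1, 2) if m <= sticks]
    if maximizing:
        return max(minimax(sticks - m, False) for m in moves)
    return min(minimax(sticks - m, True) for m in moves)
```

The same recursion, applied to chess, is limited only by computing capacity, which is exactly the point of the paragraph above: positions that are multiples of 3 here are provable losses for the player to move, just as every chess position has, in principle, a computable value.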

Frankly, it seemed a bit silly: it was much like questioning whether a  
locomotive could outrace a horse, and anyone claiming it would never  
happen clearly didn't understand how engineering works.

So, what finally happened is that computer folks decided to build  
special purpose hardware to play chess. This is always available as an  
option when general-purpose computing is too slow. (Surprisingly,  
however, it is often a short-lived necessity: general-purpose  
computers eventually overtake the special hardware, which cannot  
economically continue to be developed since it does only one task.)  
The "chess machine" was created with the  
sole purpose of playing chess using look-ahead to greater depths than  
any human could reach.

An interesting sidenote was that once all these circumstances were set  
up, i.e., computer engineers figured out how to build and run special  
hardware to play chess in real time against real chess masters, the  
human chess masters began to interpret the machine's performance as  
evidence of it having a personality. I suppose attributing personality  
was an integral part of their strategy against human opponents, since  
it gave them a means to see further ahead by guessing which moves an  
opponent would favor---but I can't help but find it a misplaced  
inference when applied to a machine. True, seeing all the way to the  
end of the game was too difficult for even the best hardware of the  
day, so effort went into improving the software so the computer would  
not waste time analyzing legal but dumb moves that no chess expert  
would play. But the principle that this was a finite game with a  
computable end meant there was never any real doubt of the computer's  
eventual mastery of the problem, and no reason to believe  
"personality" was a factor.
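The best-known technique for not wasting time on pointless lines of play is alpha-beta pruning, which abandons any line that one side has already shown it would never permit. (The text above may also mean heuristic forward pruning of weak moves; alpha-beta is offered here as the classical, loss-free version of the idea.) A sketch on a toy take-away game standing in for chess: players alternately take 1 or 2 sticks, and whoever takes the last stick wins.

```python
def alphabeta(sticks, maximizing, alpha=-1, beta=+1):
    """Minimax with alpha-beta pruning on the take-1-or-2-sticks game.

    alpha: best score the maximizer is already guaranteed.
    beta:  best score the minimizer is already guaranteed.
    When alpha >= beta, the current line can never be reached in
    optimal play, so the remaining moves are skipped unexamined.
    """
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else +1
    for m in (1, 2):
        if m > sticks:
            continue
        score = alphabeta(sticks - m, not maximizing, alpha, beta)
        if maximizing:
            alpha = max(alpha, score)
        else:
            beta = min(beta, score)
        if alpha >= beta:
            break  # the opponent would never allow this line: prune it
    return alpha if maximizing else beta
```

The pruned search returns the same value as exhaustive look-ahead while examining fewer positions, which is why it mattered on hardware that could not reach the end of the game.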

So, when it became apparent that all but grand masters could easily be  
beaten by a computer playing chess, it became a matter of finding a  
use for this capability. The obvious answer was to market a  
chess-playing computer for human players to test their skill  
against--and that was done, with graded levels of expertise so it  
could be played and beaten at each level.

I guess the humanist lesson is that one shouldn't assume all tasks  
performed by human beings REQUIRE "thinking", just because we use  
"thinking" to do them.

The computer that plays chess well doesn't "think" any more than the  
toaster-oven that cooks food using a timer to shut off when done  
"thinks" about the food being done. "Thinking" is thus a rather vague  
term that has swept up a number of tasks humans perform that could  
readily be done without "thought" at all.

The questions that remain are still challenging. How can we decide  
when a task cannot be performed by "reasoning" alone? In hindsight, I  
believe we've gotten considerably more sophisticated in our  
understanding of the boundary between "reasoning" and "thinking". Of  
course, the mathematical understanding of game theory and the  
realization that many 'games' are completely solvable by reasoning  
alone, advanced our understanding of where the boundary was.
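"Completely solvable by reasoning alone" can be made concrete by backward induction: tabulate, from the end of the game backward, whether each position is a win for the player to move. A sketch on a trivially solvable game (players alternately take 1 or 2 sticks; whoever takes the last stick wins), where once the table exists, no search or "thought" is needed at play time:

```python
def solve(limit):
    """Backward induction: win[n] is True iff the player to move
    with n sticks left can force a win."""
    win = [False] * (limit + 1)  # win[0] = False: the mover already lost
    for n in range(1, limit + 1):
        # A position is winning iff some move leads the opponent
        # into a losing position.
        win[n] = any(not win[n - m] for m in (1, 2) if m <= n)
    return win
```

Running `solve` shows that every multiple of 3 is a loss for the player to move, a complete solution of the game in the game-theoretic sense.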

The movie "Wargames" is instructive. In it, a computer programmed to  
fight a nuclear war is hacked into by a kid who thinks it is a  
harmless game-playing machine offering a novel game called  
"thermonuclear war".  
The computer doesn't know or care that it's about to launch a real  
nuclear war using the hardware it controls. In the end, it runs a  
massive number of scenarios in terms of the outcome and (in a  
Hollywood ending) "learns" that "thermonuclear war" is an odd game,  
because as it says, "The only way to win is not to play" and then  
suggests a nice game of chess instead. Learning of this kind is still  
a mystery in AI. I.e., recasting the problem initially given as an  
instance of a high-level concept.
