[Humanist] 29.786 AI and Go

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Thu Mar 17 07:25:49 CET 2016


                 Humanist Discussion Group, Vol. 29, No. 786.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Wed, 16 Mar 2016 10:24:45 +0100
        From: Tim Smithers <tim.smithers at cantab.net>
        Subject: Re:  29.773 AI and Go
        In-Reply-To: <20160312080234.85646629F at digitalhumanities.org>


Dear Willard,

We now know that Lee Sedol lost to AlphaGo (1-4).  A clear
victory for the machine, but at least not the whitewash it
looked like becoming at 0-3.

This is an important achievement for Artificial Intelligence
(AI).  Though predicted, it happened sooner than most (in the
know) anticipated.  It stands as another demonstration of what
lots of fast computation organised in an appropriate way can
do.

The Economist's Showdown piece, like many other similar
reports of this event, affords an opportunity to attempt a
rebalancing of how the outcome gets told.  It's not yet another defeat for
humankind.  Nor is it yet another step along the way to
machines taking over from us humans.  Far from it.

It is, however, yet another example of how computational AI
achievements like this mess with the way we talk about what
humans do and what machines do, such that machines come to look
at least as good as humans, if not better, at doing things
humans do.

There are stark and strong differences between what Lee Sedol
did and what AlphaGo did.  You'd think that the human
qualities that Lee Sedol demonstrated, and AlphaGo didn't,
would be given more prominence, but they're not.  Instead, the
story gets told as the machine winning out against humanity.
This is what needs re-balancing.

Lee Sedol can explain to anybody who asks what Go is, how it
is played, how he plays Go, including that there are
mysterious, hardly utterable aspects to his way of playing, how
he felt about each game against AlphaGo, and he can teach
others how to play Go, comment usefully on their efforts, and
promote the game.  AlphaGo can do none of these things: it
doesn't know it competed in a game of Go; it displays no
appreciation of having won the match.  Yet, these things Lee
Sedol can do are all part of playing Go.  Aren't they?  Why do
we equate what AlphaGo does with what Lee Sedol does, and say
that they both play Go?  Lee does play Go.  AlphaGo just does
Go.  It doesn't play Go, not in the sense the word play has
when we use it to talk about what Lee Sedol does.  Yes,
AlphaGo does Go better than Lee Sedol, but it does not, and so
far cannot, play Go.  Playing, for humans, is not, and cannot
be, just making the moves needed to win the game.

It was the same when Deep Blue (II) beat Garry Kasparov at
Chess in May 1997.  Deep Blue did Chess better than Kasparov,
but it didn't play Chess.  This is why Chess, as a game played
by Humans, has not been swept away and replaced by machine vs
machine competitions.  Humans still play Chess, and use Chess
machines to do this.  Machines still don't play Chess.  And
they're not on the way to doing this.  Nobody, as far as I
know, is trying to build machines that can do this.

The Economist piece illustrates where this distorting word
game often starts.  It, like many other reports, explains how
there are so very many more Go "...  games that can be played"
(than in draughts or chess), due to the combinatorics of all
the different legal sequences
of Go board configurations.  But, if you ask even an unskilled
(Human) player of Go, she will tell you that most of these
possible sequences of states of "play" would never happen in a
real game of Go: a game played by Humans.
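
For anyone who wants to check the arithmetic behind the numbers
such reports quote, a back-of-the-envelope sketch in Python
follows.  The inputs (361 board points, a branching factor of
about 250, a game length of about 150 moves) are the commonly
cited rough figures, not anything AlphaGo-specific.

    # Upper bound on board configurations: each of the
    # 19x19 = 361 points is empty, black, or white.  Most such
    # configurations are not legal positions, so this overcounts.
    positions = 3 ** 361
    print(len(str(positions)) - 1)   # 172: fewer than 10**173

    # Game-tree estimate from the branching factor: roughly 250
    # legal moves per turn, over roughly 150 turns.
    games = 250 ** 150
    print(len(str(games)) - 1)       # 359: around 10**360 sequences

Both results dwarf the roughly 10**80 atoms in the observable
universe; but neither says anything about how many of those
sequences could occur in a game Humans would recognise as Go.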

Nobody, as far as I know, has attempted to estimate how many
of these theoretically possible sequences of Go plays form
part of real Go games.  The number is surely large.  Very
large.  But nowhere near as large as the (guessed at) 10**170
reported by
the Economist.  The challenge for AlphaGo, as it was for Deep
Blue in Chess, is to compute only those plays that are part of
a good competitive game of Go.  It's no mean feat that it's
shown it can do this, well enough at least to beat Lee Sedol.
But, AlphaGo can itself tell us nothing about how it did this,
nor anything about how it played the games it won and the game
it lost: what was the difference?  It needs Dr Hassabis and
his colleagues to do this, after they have spent sufficient
time analysing what their machine did ...  and applying much
of their Human intelligence in the doing.
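
In AlphaGo's case this selective computing is done by Monte
Carlo tree search guided by learned policy and value networks,
as DeepMind describe in their January 2016 Nature paper.  A toy
sketch of the pruning idea, in Python, with made-up numbers and
none of AlphaGo's actual machinery:

    import math

    def prune(move_priors, top_k=10):
        """Keep only the top_k moves ranked by policy prior."""
        ranked = sorted(move_priors, key=lambda mp: mp[1],
                        reverse=True)
        return ranked[:top_k]

    # 250 hypothetical legal moves with invented priors.
    moves = [("move-%d" % i, 1.0 / (i + 1)) for i in range(250)]
    print(len(prune(moves)))                  # 10 candidates survive

    # Effect on the game tree over ~150 turns:
    print("10**%.0f" % (150 * math.log10(250)))   # full:   10**360
    print("10**%.0f" % (150 * math.log10(10)))    # pruned: 10**150

Even the pruned tree is astronomically large, of course; the
value estimates and rollouts do the rest.  The sketch shows only
where the selectivity enters, not how AlphaGo achieves it.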

It remains the case that Lee Sedol plays Go, and does so very
well, and that AlphaGo only does Go, albeit well enough to
beat Lee.  What this shows, as Deep Blue did in the case of
Chess, is that sufficient, fast enough computational decision
making of the appropriate kind can approximate, and surpass in
performance, the knowledge, understanding, intuitions, and
emotions of the way Humans play Chess and now Go.

As I said, this is no doubt an important achievement for AI,
but it hardly scratches the surface of Human intelligence and
of what being Human is.  What would be nice is if we Humans
took a little more care to keep this clear and out front when
we talk about these Human vs Machine contests.

What would also be nice is if those who practise in the
Digital Humanities took more care over the words they use to
talk about what their digital machines do.  Computers do not
read texts, for example, not in the same sense that Humans
read.  If Humanists don't look after the language they use,
why would the digital technology types do so?

As you can probably tell, this matter is a Hobby Horse of
mine.  I've had this horse for a long time.  It gets taken out
for exercise quite often.  I hope you don't mind me riding it
over your land this time.

Best regards,

Tim

> On 12 Mar 2016, at 09:02, Humanist Discussion Group <willard.mccarty at mccarty.org.uk> wrote:
> 
>                 Humanist Discussion Group, Vol. 29, No. 773.
>            Department of Digital Humanities, King's College London
>                       www.digitalhumanities.org/humanist
>                Submit to: humanist at lists.digitalhumanities.org
> 
> 
> 
>        Date: Fri, 11 Mar 2016 12:17:38 +0000
>        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
>        Subject: AI and Go
> 
> 
> Many here I expect will be interested in "Showdown", in The Economist 
> for 12 March, on the simulation of the East Asian board-game by DeepMind 
> (https://deepmind.com). The article may be found online at
> 
> http://www.economist.com/news/science-and-technology/21694540-win-or-lose-best-five-battle-contest-another-milestone?cid1=cust/ddnew/n/n/n/20160310n/owned/n/n/nwl/n/n/n/email&etear=dailydispatch
> 
> The author comments,
> 
>> The rules of Go are simple and minimal. The players are Black and
>> White, each provided with a bowl of stones of the appropriate colour.
>> Black starts. Players take turns to place a stone on any unoccupied
>> intersection of a 19x19 grid of vertical and horizontal lines. The
>> aim is to use the stones to claim territory....
>> 
>> This simplicity, though, is deceptive. In a truly simple game, like
>> noughts and crosses, every possible outcome, all the way to the end
>> of a game, can be calculated.... The most complex game to be “solved”
>> this way is draughts, in which around 10**20 (a hundred billion
>> billion) different matches are possible. In 2007, after 18 years of
>> effort, researchers announced that they had come up with a provably
>> optimum strategy.
>> 
>> But a draughts board is only 8x8. A Go board's size means that the
>> number of games that can be played on it is enormous: a
>> rough-and-ready guess gives around 10**170. Analogies fail when
>> trying to describe such a number. It is nearly a hundred orders of
>> magnitude more than the number of atoms in the observable universe,
>> which is somewhere in the region of 10**80. Any one of Go’s hundreds
>> of turns has about 250 possible legal moves, a number called the
>> branching factor. Choosing any of those will throw up another 250
>> possible moves, and so on until the game ends. As Demis Hassabis, one
>> of DeepMind's founders, observes, all this means that Go is
>> impervious to attack by mathematical brute force.
>> 
>> But there is more to the game’s difficulty than that. Though the
>> small board and comparatively restrictive rules of chess mean there
>> are only around 10**47 different possible games, and its branching
>> factor is only 35, that does, in practice, mean chess is also
>> unsolvable in the way that draughts has been solved. Instead, chess
>> programs filter their options as they go along, selecting
>> promising-looking moves and reserving their number-crunching prowess
>> for the simulation of the thousands of outcomes that flow from those
>> chosen few.....
>> 
>> Working out who is winning in Go is much harder, says Dr Hassabis. A
>> stone's value comes only from its location relative to the other
>> stones on the board, which changes with every move. ....
> 
> Yours,
> WM
> 
> -- 
> Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
> Humanities, King's College London, and Digital Humanities Research
> Group, Western Sydney University





