[Humanist] 29.800 AI and Go

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Tue Mar 22 07:16:27 CET 2016


                 Humanist Discussion Group, Vol. 29, No. 800.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Mon, 21 Mar 2016 20:33:32 +0100
        From: Tim Smithers <tim.smithers at cantab.net>
        Subject: Re:  29.790 AI and Go
        In-Reply-To: <20160318063858.E932E2CA2 at digitalhumanities.org>


Dear Gabriel and Willard,

Thank you both for your respective responses to my earlier
AlphaGo vs Lee Sedol post.  Your thoughts on this are both
nice to see.

Gabriel: I prefer more detail and precision in how we talk
about this--that my errors and flaws may be the easier to
identify and correct.

Just making moves in Go that satisfy the rules of the game
does not, in and of itself, constitute what we mean by playing
Go.  And it certainly doesn't win games, not interesting ones.

Legal moves are a necessary but not sufficient part of playing
Go.  It's the same in most games.  Indeed, I'd say all games.
The fun, the challenge, the victory does not reside in merely
knowing how to make legal moves.
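A toy sketch (my illustration, not part of the original post) may make the necessary-but-not-sufficient point concrete. Here "legality" is crudely reduced to "the intersection is empty" -- real Go legality also involves capture, suicide, and ko -- and the bot simply picks any such move at random. It satisfies the rules on every turn, yet it plays no Go in any interesting sense:

```python
import random

BOARD_SIZE = 9  # small board, for illustration only


def empty_points(board):
    """Return all empty intersections.  A crude stand-in for
    legality: real Go legality also covers capture, suicide, and ko."""
    return [(r, c)
            for r in range(BOARD_SIZE)
            for c in range(BOARD_SIZE)
            if board[r][c] == "."]


def random_legal_move(board, rng=random):
    """Pick a 'legal' move uniformly at random: rule-satisfying,
    but with no strategy and no understanding of the game."""
    candidates = empty_points(board)
    return rng.choice(candidates) if candidates else None


board = [["."] * BOARD_SIZE for _ in range(BOARD_SIZE)]
move = random_legal_move(board)  # always a rule-conforming move
```

Every move such a bot makes is permitted by the rules, which is precisely why rule-conformance alone cannot be what "playing" means.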

AlphaGo did win the Go match against Lee Sedol.  I didn't want
to suggest otherwise.  But I do want to say that winning by
making better sequences of moves against the opponent is not
all that playing involves, not in human play.  If it were so,
I doubt many humans would bother playing games.  More to the
point, I doubt humans would be able to play games if playing
them involved only trying to win.  Human game playing engages
many different (human) capacities, and, I would say, in
practice, necessarily so: learning from other humans; talking
about it; reading about it; explaining how to play well, and
how not to; listening to others and understanding them when
they talk about how they play; looking at, understanding,
discussing, learning from the games of others, including those
that involve machines; studying the history of the game, and
the people who play and have played it; and more.

My claim is that AlphaGo displays too few of these capacities
for it to be fair to use the same word to talk about what it
does and what Lee Sedol does as a player of Go.

If it aids the conversation, rather than AlphaGo does and Lee
Sedol plays, we might have two words to distinguish the two
different kinds of playing here: h-play for human play, as Lee
Sedol does; and m-play for machine play, as AlphaGo does.

It is interesting to try to build machines that can compete
with the best human players in games like Go and Chess.  It's
impressive when they win, and certainly no mean achievement.
What I don't think is warranted is to then claim, or take it
that, these machines play these games as humans do, albeit by
different means.

It is interesting that it's possible for a machine to win at a
difficult game against the best human practitioners, yet not
be a player of the game like humans are.  This was not the
expectation of even a few decades ago, at the beginning of AI.
It was, I think, believed by most people then that if a
machine could be built to play human level Chess, it would
need to play Chess much like the humans play.  This is what
most people still believe, I think.  A belief reinforced by
describing machines like Deep Blue and AlphaGo as Chess
playing and Go playing, respectively, without remarking upon
the gross differences in what the word "playing" can sensibly
mean in each case here.

Willard's question, following Fox Keller, is, I think, key.
When does as-if become is?  When does m-playing become
h-playing?  This is the productive question here.  But, if, at
the first sight of a machine beating one of the best humans at
Go, we say AlphaGo plays Go as Lee Sedol plays Go, we close
this question and remove the possibility of further
investigation, insight, and understanding of the differences.
There can be no refocusing to "see down into different depths
through the great millstone of the world," as Maxwell put it
[1].  This would be an untypically human thing to do, I think:
curiosity strongly marks human intelligence, but not that of
machines like AlphaGo and Deep Blue.

Alan Turing proposed a way to answer a "when is as-if the same
as is" question: when is m-intelligence the same as
h-intelligence.  He proposed it would be when a human could
not tell the difference between the as-if and the is.

There is, however, a curious, and little remarked upon,
asymmetry in Turing's test.  The human tester decides if the
machine performance is indistinguishable from the human
performance, but the machine doesn't get to decide if the
human performance is indistinguishable from the machine
performance.  Yet, for real human-machine equivalence, this is
required.  Just taking the human judgement here is unfair on
humans and machines, and serves to close human minds to the
real and interesting question: how are we intelligent beings?
Human intelligence is, I maintain, little like the
intelligence of the automated decision making machines we now
have that can beat the very best of us at Chess and Go, and do
a better job on other kinds of complex decision making tasks.
So different, I want to press, that we should take care to use
different ways to talk about these different kinds of
intelligence.

Now is not the moment to say much on knowledge, but AI has
given us what I think is the most practical notion of
knowledge we currently have.  Allen Newell--one of the
"Fathers" of AI, who worked closely with Herbert Simon--in his
1980 paper The Knowledge Level [2], introduced the notion of
knowledge as a capacity for rational action.  This both
escapes the difficulties with the classical notion of
knowledge as justified true belief (JTB), and does a better
job of accounting for knowledgeable behaviour in humans and
machines than any of the many and varied attempts made to
re-build a consensus about what knowledge is after Edmund
Gettier’s effective criticism of JTB in 1963.

With Newell's notion of knowledge, we can say that Lee Sedol
knows much more about playing Go than AlphaGo does, despite
having lost his match with AlphaGo.  So much more, I would
say, that it's not useful to describe them both as players of
Go, as we might describe two humans as players of Go.

Best regards,

Tim

Notes

[1] "The dimmed outlines of phenomenal things all merge
    into one another unless we put on the focusing-glass of
    theory, and screw it up sometimes to one pitch of
    definition and sometimes to another, so as to see down
    into different depths through the great millstone of the
    world."  -- James Clerk Maxwell
 
    From: 'Are There Real Analogies in Nature?'  (Feb 1856).
    Quoted in Lewis Campbell and William Garnett The Life of
    James Clerk Maxwell (1882), p 237.

[2] Allen Newell, 1981.  The Knowledge Level, first
    Presidential Address to the American Association for
    Artificial Intelligence (as it was then called).
     http://www.aaai.org/ojs/index.php/aimagazine/article/view/99

> On 18 Mar 2016, at 07:38, Humanist Discussion Group <willard.mccarty at mccarty.org.uk> wrote:
> 
>                 Humanist Discussion Group, Vol. 29, No. 790.
>            Department of Digital Humanities, King's College London
>                       www.digitalhumanities.org/humanist
>                Submit to: humanist at lists.digitalhumanities.org
> 
>  [1]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (20)
>        Subject: AI, Go, winning & reactions
> 
>  [2]   From:    Gabriel Egan <mail at gabrielegan.com>                       (58)
>        Subject: Re: [Humanist] 29.786 AI and Go
> 
> 
> --[1]------------------------------------------------------------------------
>        Date: Thu, 17 Mar 2016 06:35:49 +0000
>        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
>        Subject: AI, Go, winning & reactions
> 
> 
> I am fascinated by our reactions to the competitive performances of 
> machines, or rather, to performances by machines taken as competitive 
> with humans. We can ameliorate the situation by noting that humans *play*, 
> machines *do*, as Tim wrote yesterday. But our reactions are priceless. I 
> particularly like Evelyn Fox Keller's question in one of her articles on 
> artificial life: how close to the edge of as-if are we willing to go? 
> And I wonder, is there an edge? Or, to paraphrase Maxwell, as we refocus 
> our instrument and see down through the millstone of the world, do we 
> ever reach the absolute point where as-if becomes is? The reactions to 
> the question of the human stirred up by scientific discoveries have a 
> long history. They suggest each is a challenge to a new becoming. Each 
> is powered by fear that it won't happen.
> 
> And that's my hobby-horse, here and gone.
> 
> Comments?
> 
> Yours,
> WM
> -- 
> Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
> Humanities, King's College London, and Digital Humanities Research
> Group, Western Sydney University
> 
> 
> --[2]------------------------------------------------------------------------
>        Date: Thu, 17 Mar 2016 09:43:42 +0000
>        From: Gabriel Egan <mail at gabrielegan.com>
>        Subject: Re: [Humanist] 29.786 AI and Go
>        In-Reply-To: <20160317062549.86E6E1218 at digitalhumanities.org>
> 
> Dear Humanists
> 
> I must respectfully disagree with Tim Smithers'
> characterization of the victory of AlphaGo
> over Lee Sedol at the game of Go.
> 
> It may well be that AlphaGo arrives at its
> choices of moves in a different way from Sedol,
> but that cannot fairly be characterized as "AlphaGo
> ... doesn't play Go". Clearly, it not only plays
> Go--it follows the agreed rules and attempts
> to achieve what the rules define as success--
> but it also wins at Go.
> 
> Had the human won the contest, I don't think
> we would entertain a claim by the machine's
> supporters that Sedol didn't really win because
> he played Go differently from the machine. Of
> course they play differently: one of them plays
> much better than the other, that's the essential
> difference between them. The rules of Go do
> not prescribe the means of winning, only what
> count as valid moves and what counts as winning.
> 
> All players are free to turn those rules into
> strategies. To say that for a victory to count
> the machine must think the way that a human thinks
> sounds like changing the rules after the contest
> is over, and furthermore it supposes that we
> know how the human thinks, and I submit that
> in fact we do not know this.
> 
> That Sedol can talk about the game and the machine
> cannot is no more to the point than the fact that
> the machine emits electromagnetic radiation (that
> can be picked up by a nearby radio) and Sedol
> cannot. If we defined either ability--to speak
> or to interfere with radio sets--as part of
> the contest at the beginning, both sides would
> have gone about the contest quite differently.
> We cannot reasonably set that as part of the
> challenge only after one side has lost.
> 
> The claim that the machine "doesn't know it competed
> in a game of Go" only makes sense if we agree on what
> it means to "know" something, and in fact experts
> on knowledge seem not to agree on what that means.
> 
> Non-experts can easily see how difficult it is
> to define "knowing". I have a close relative
> suffering from dementia who beat me at Scrabble
> last week, and she is unable to recall doing so:
> she doesn't "know" it in the sense of "knowing"
> that Smithers relies upon. To see why this is
> relevant, let us suppose that Sedol were for some
> reason unable to communicate. Would he thereby
> lack the "human" characteristics that Smithers
> thinks make all the difference between man and
> machine? That would be a very coarse judgement,
> but it follows from the way that Smithers seems
> to define "knowing".
> 
> Regards
> 
> Gabriel Egan
> Centre for Textual Studies
> De Montfort University





