[Humanist] 29.790 AI and Go

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Mar 18 07:38:58 CET 2016


                 Humanist Discussion Group, Vol. 29, No. 790.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (20)
        Subject: AI, Go, winning & reactions

  [2]   From:    Gabriel Egan <mail at gabrielegan.com>                       (58)
        Subject: Re: [Humanist] 29.786 AI and Go


--[1]------------------------------------------------------------------------
        Date: Thu, 17 Mar 2016 06:35:49 +0000
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: AI, Go, winning & reactions


I am fascinated by our reactions to the competitive performances of 
machines, or rather, to performances by machines taken as competitive 
with humans. We can ameliorate the situation by noting that humans *play*, 
machines *do*, as Tim wrote yesterday. But our reactions are priceless. I 
particularly like Evelyn Fox Keller's question in one of her articles on 
artificial life: how close to the edge of as-if are we willing to go? 
And I wonder, is there an edge? Or, to paraphrase Maxwell, as we refocus 
our instrument and see down through the millstone of the world, do we 
ever reach the absolute point where as-if becomes is? The reactions to 
the question of the human stirred up by scientific discoveries have a 
long history. They suggest each is a challenge to a new becoming. Each 
is powered by fear that it won't happen.

And that's my hobby-horse, here and gone.

Comments?

Yours,
WM
-- 
Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London, and Digital Humanities Research
Group, Western Sydney University


--[2]------------------------------------------------------------------------
        Date: Thu, 17 Mar 2016 09:43:42 +0000
        From: Gabriel Egan <mail at gabrielegan.com>
        Subject: Re: [Humanist] 29.786 AI and Go
        In-Reply-To: <20160317062549.86E6E1218 at digitalhumanities.org>

Dear Humanists

I must respectfully disagree with Tim Smithers'
characterization of the victory of AlphaGo
over Lee Sedol at the game of Go.

It may well be that AlphaGo arrives at its
choices of moves in a different way from Sedol,
but that cannot fairly be characterized as "AlphaGo
... doesn't play Go". Clearly, it not only plays
Go--it follows the agreed rules and attempts
to achieve what the rules define as success--
but it also wins at Go.

Had the human won the contest, I don't think
we would entertain a claim by the machine's
supporters that Sedol didn't really win because
he played Go differently from the machine. Of
course they play differently: one of them plays
much better than the other, that's the essential
difference between them. The rules of Go do
not prescribe the means of winning, only what
count as valid moves and what counts as winning.

All players are free to turn those rules into
strategies. To say that for a victory to count
the machine must think the way that a human thinks
sounds like changing the rules after the contest
is over, and furthermore it supposes that we
know how the human thinks, and I submit that
in fact we do not know this.

That Sedol can talk about the game and the machine
cannot is no more to the point than the fact that
the machine emits electromagnetic radiation (that
can be picked up by a nearby radio) and Sedol
cannot. If we defined either ability--to speak
or to interfere with radio sets--as part of
the contest at the beginning, both sides would
have gone about the contest quite differently.
We cannot reasonably set that as part of the
challenge only after one side has lost.

The claim that the machine "doesn't know it competed
in a game of Go" only makes sense if we agree on what
it means to "know" something, and in fact experts
on knowledge seem not to agree on what that means.

Non-experts can easily see how difficult it is
to define "knowing". I have a close relative
suffering from dementia who beat me at Scrabble
last week, and she is unable to recall doing so:
she doesn't "know" it in the sense of "knowing"
that Smithers relies upon. To see why this is
relevant, let us suppose that Sedol were for some
reason unable to communicate. Would he thereby
lack the "human" characteristics that Smithers
thinks make all the difference between man and
machine? That would be a very coarse judgement,
but it follows from the way that Smithers seems
to define "knowing".

Regards

Gabriel Egan
Centre for Textual Studies
De Montfort University
