[Humanist] 29.811 AI and Go

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Mar 25 08:33:29 CET 2016


                 Humanist Discussion Group, Vol. 29, No. 811.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (25)
        Subject: AI, Go and the human

  [2]   From:    Stephen Doig <steve.doig at asu.edu>                         (14)
        Subject: Re:  29.808 AI and Go

  [3]   From:    Tim Smithers <tim.smithers at cantab.net>                   (120)
        Subject: Re:  29.808 AI and Go


--[1]------------------------------------------------------------------------
        Date: Thu, 24 Mar 2016 07:00:50 +0000
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: AI, Go and the human


Thanks to Gabriel Egan for the anecdotes. My own goes back to the arrival in
Toronto of an Ibycus machine, which David Packard designed and built to
process the Greek texts of the Thesaurus Linguae Graecae. At the time I was
working on words for "mirror" in Greek and Latin. Just before its arrival I
had spent all day every day for two weeks in Robarts Library going through
every Greek concordance I could find for occurrences of katoptron, esoptron
and enoptron and recording the contexts. (A very kind person at the
Thesaurus Linguae Latinae had copied out by hand and typed up all citations
for the Latin "speculum" from the TLL card-index file.) When the machine
arrived, I plugged it in, set it up, inserted the TLG CD (or CDs?) and did
the searches. It took me 45 minutes to produce a much longer list than the one
I had just finished by hand. QED.
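
(As an aside, for those who have never run such a search: the kind of
brute-force, keyword-in-context lookup the Ibycus performed can be
sketched in a few lines of modern Python. The sketch below is
illustrative only; the transliterated search terms, the file names and
the plain-text corpus layout are assumptions for illustration, not the
TLG's actual encoding or the Ibycus software.)

    # Illustrative sketch of a brute-force keyword-in-context search
    # over plain-text files; not the Ibycus/TLG software.
    import glob
    import re

    TERMS = ["katoptron", "esoptron", "enoptron"]  # transliterated terms
    CONTEXT = 60  # characters of context to keep on either side

    pattern = re.compile("|".join(TERMS), re.IGNORECASE)

    for path in glob.glob("corpus/*.txt"):  # hypothetical corpus files
        with open(path, encoding="utf-8") as f:
            text = f.read()
        for m in pattern.finditer(text):  # every occurrence of every term
            start = max(0, m.start() - CONTEXT)
            end = min(len(text), m.end() + CONTEXT)
            snippet = text[start:end].replace("\n", " ")
            print(f"{path}: ...{snippet}...")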

Again, what seems most interesting to me is the fact of this discussion. Why
are we having it? (An answer would be: so that we can discover what it
means to play a game, or, proverbially, that it's not all about, or not at all
about, winning.) Why are we assuming that human intelligence is fixed in
its power, focus, nature and so forth? True, we are unlikely to run much
faster than we already do. But history and anthropology (and all the rest)
have quite a lot to say about what humans are and have been capable of
otherwise. And there is so much to learn from the *analogy* between the
human-made digital artefact and the human.

Yours,
WM
--
Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London, and Digital Humanities Research
Group, Western Sydney University



--[2]------------------------------------------------------------------------
        Date: Thu, 24 Mar 2016 15:10:53 +0000
        From: Stephen Doig <steve.doig at asu.edu>
        Subject: Re:  29.808 AI and Go
        In-Reply-To: <20160324063450.E5B306767 at digitalhumanities.org>


Exactly! I appreciate the distinction that has been drawn here between
"playing Go" and "doing Go", but I don't see that it matters when we
ponder the future impact of AI. If an AI can pass a wide-ranging Turing
test someday, I'm not sure it matters that the process the AI uses to
converse and interact is markedly different from the process used by the
human brain to do the same thing. The output is what matters.

Steve Doig 
********************************************************
Stephen K. Doig, Knight Chair in Journalism
Cronkite School of Journalism, Arizona State University
555 N. Central Ave., Suite 302, Phoenix, AZ, 85004-1248
Phone: 602-496-5798    Fax: 602-496-7041
Web -- http://cronkite.asu.edu/faculty/doigbio.php
********************************************************



--[3]------------------------------------------------------------------------
        Date: Thu, 24 Mar 2016 19:10:44 +0100
        From: Tim Smithers <tim.smithers at cantab.net>
        Subject: Re:  29.808 AI and Go
        In-Reply-To: <20160324063450.E5B306767 at digitalhumanities.org>


Dear Gabriel and other Humanists,

It's interesting how varied the human reading of texts can be.
Do machines, when they "read" texts, of human making or
machine making, also display such interesting and illuminating
differences?

If you can choose the criteria by which winning is decided, it's
often easy to arrange for your favourite to come out on top:
steam drill; Porsche motor car; brute-force search of quotes;
Google Translate (?) [1]; AlphaGo; etc.

What I think is interesting about this game--of picking the
criterion that gets you the winner you want--is that the set of
criteria used usually has to be small, sometimes just one.

But it's the other game I have tried to point to that I think
is the more interesting, and the more important, here:
identifying well the criteria that ought to be used in judging
between some human performance and some (sufficiently)
comparable machine performance.  Playing this game well is
worth it, I think, because it tells us more about what being
human is about.  Not because we humans are somehow always
better than the machines we build (we may not be), but because
understanding ourselves better is something we are usefully
curious about.  Building machines has long been a way of
studying ourselves.  AI is a science that embodies this
practice: it seeks to understand intelligent behaviour by
trying to create it and study it in the artificial.  It is, to
use Herbert Simon's name for it, a science of the artificial [2].

As they are shown to surpass even the best humans, we should
delight in what our AI machines can do.  Their real value,
however, is in how their development can help us better
understand ourselves.  This is a job the Arts and Humanities
should press the scientists and engineers to do well, and
should join with them in doing it.

Best regards,

Tim

Notes

[1] Gabriel: I suspect you've not used Google Translate
    much to do real translation work.  Even non-professional
    translators can see that it generally does a poor job.
    To compensate, one might feel, it often does a hilarious
    job--try it on idioms--but it doesn't appreciate the
    humour it generates.  That _would be_ impressive.

    Professional translator friends describe Google Translate
    as producing (sometimes) useful approximate
    language-to-language text transformations.  I suspect
    Edward Seidensticker, the translator of Yasunari
    Kawabata's "The Master of Go," would think the same, had
    he lived to see Google Translate.

    And, if I had the money, I'd be willing to bet a large sum
    that no amount of statistical learning, run over any
    quantity of human-made Japanese-to-English text
    translations, will ever build you a translation machine
    the like of Edward Seidensticker.  Good human judgement is
    not of a statistical kind, something that AlphaGo's win
    over Lee Sedol confirms.

[2] Herbert Simon, 1969: The Sciences of the Artificial,
    MIT Press, Cambridge, Mass., first edition.  (Enlarged 2nd
    edition, 1981, MIT Press; 3rd edition, 1996, MIT Press.)

> On 24 Mar 2016, at 07:34, Humanist Discussion Group <willard.mccarty at mccarty.org.uk> wrote:
> 
>                 Humanist Discussion Group, Vol. 29, No. 808.
>            Department of Digital Humanities, King's College London
>                       www.digitalhumanities.org/humanist
>                Submit to: humanist at lists.digitalhumanities.org
> 
> 
> 
>        Date: Wed, 23 Mar 2016 11:06:10 +0000
>        From: Gabriel Egan <mail at gabrielegan.com>
>        Subject: Re: [Humanist] 29.802 AI and Go
>        In-Reply-To: <20160323063959.6F5B36704 at digitalhumanities.org>
> 
> 
> Ken Friedman's contribution to the debate about AlphaGo helpfully 
> mentions the folk song about the competition between a human spike 
> driver on the railroad and a steam-driven machine. This seems apposite 
> to me because that physical activity isn't surrounded by the aura of 
> consciousness that easily obscures what happens when machines compete 
> with humans. The force of the analogy seems to me to work in the 
> opposite direction to Friedman's claim about it. That is, although the 
> machine lost on that occasion, I don't think anyone would claim that 
> humans are better at driving spikes than machines are. We conceded that 
> point long ago. And the man died, of course.
> 
> Two anecdotal analogies also seem apposite to me:
> 
> 1) In the mid-1980s I was witness to a heated debate at work between the 
> middle-aged owner of a Porsche motor car and a young man who was keen on 
> running. The matter of dispute was which of them, man or car, would be 
> faster over a 100 meter run from a standing start. (Actually, it was 
> what we used to call in England 'the hundred yard dash', but that's 
> almost the same as 100 meters.) They decided to settle it by a race in 
> the car park. It was a close thing, but the young man won by a whisker. 
> But everyone present reflected that after 100 meters the young man was 
> at the end of his endurance and about to flop, while the motor car was 
> still in second gear and could have accelerated steadily over the next 
> 500 meters if there were more room in the car park.
> 
> 2) In 1993 two graduate students (myself and Peter White, who went on to 
> manage EEBO for ProQuest) had a dispute about the speed and accuracy of 
> a hand-made concordance to Shakespeare's works versus a brute-force 
> search (on an Intel 80286-driven PC) of a collection of Shakespeare 
> e-texts. Under the supervision of an impartial referee we ran a race 
> over 10 quotations chosen at random, and White with his copy of 
> Bartlett's concordance won every time. But as White himself reflected, 
> the manual method wasn't ever going to get any faster, while with my 
> next grant cheque I was planning to buy an Intel 80386-driven PC that 
> would improve the machine's chances and so inevitably it would be the 
> better method.
> 
> Ultimately, no-one should care how the machine out-performs the human. 
> Google Translate does not do translation the way a person would: it uses 
> statistical inferences drawn from a vast body of existing translations 
> (eg from the United Nations' large corpus of 6-language translations of 
> its official documents). Human translators of languages should not draw 
> much comfort from the fact that they are doing the job in a different way.
> 
> Regards
> 
> Gabriel Egan
> Centre for Textual Studies



