[Humanist] 25.345 what do we have to teach the baby?

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Mon Oct 3 08:43:41 CEST 2011


                 Humanist Discussion Group, Vol. 25, No. 345.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Mon, 03 Oct 2011 07:39:37 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: like a baby

On 18 August IBM announced "a new generation of experimental computer 
chips designed to emulate the brain’s abilities for perception, action 
and cognition.... cognitive computers are expected to learn through 
experiences, find correlations, create hypotheses, and remember – and 
learn from – the outcomes, mimicking the brain's structural and synaptic 
plasticity" (http://www-03.ibm.com/press/uk/en/pressrelease/35252.wss).

In the latest London Review of Books (33.19, 
http://www.lrb.co.uk/v33/n19/daniel-soar/it-knows), Daniel Soar, 
reviewing three books on Google, quotes Steven Levy (of Hackers, 
I assume) in his book In the Plex, as follows on the more than 200 
signals Google uses in addition to PageRank:

> What every one of those signals is and how they are weighted is
> Google’s most precious trade secret, but the most useful signal of
> all is the least predictable: the behaviour of the person who types
> their query into the search box. A click on the third result counts
> as a vote that it ought to come higher. A ‘long click’ – when you
> select one of the results and don’t come back – is a stronger vote.
> To test a new version of its algorithm, Google releases it to a small
> subset of its users and measures its effectiveness through the
> pattern of their clicks: more happy surfers and it’s just got
> cleverer. We teach it while we think it’s teaching us. Levy tells the
> story of a new recruit with a long managerial background who asked
> Google’s senior vice-president of engineering, Alan Eustace, what
> systems Google had in place to improve its products. ‘He expected to
> hear about quality assurance teams and focus groups’ – the sort of
> set-up he was used to. ‘Instead Eustace explained that Google’s brain
> was like a baby’s, an omnivorous sponge that was always getting
> smarter from the information it soaked up.’ Like a baby, Google uses
> what it hears to learn about the workings of human language. The
> large number of people who search for ‘pictures of dogs’ and also
> ‘pictures of puppies’ tells Google that ‘puppy’ and ‘dog’ mean
> similar things, yet it also knows that people searching for ‘hot
> dogs’ get cross if they’re given instructions for ‘boiling puppies’.
> If Google misunderstands you, and delivers the wrong results, the
> fact that you’ll go back and rephrase your query, explaining what you
> mean, will help it get it right next time. Every search for
> information is itself a piece of information Google can learn from.
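The click-as-vote mechanism Levy describes can be illustrated with a toy 
sketch: each click nudges a result's score upward, a "long click" (the 
searcher doesn't come back) counts as a stronger vote, and results are 
re-ranked by accumulated score. This is a deliberately simplified 
illustration, not Google's actual algorithm, whose signals and weights 
remain, as Levy says, a trade secret.

```python
class ClickFeedbackRanker:
    """Toy ranker that learns from user clicks, per the Levy quote."""

    def __init__(self, results):
        # Start every result with the same neutral score.
        self.scores = {r: 0.0 for r in results}

    def record_click(self, result, long_click=False):
        # A plain click is a weak vote; a "long click" -- the user
        # selects a result and doesn't come back -- counts for more.
        self.scores[result] += 2.0 if long_click else 1.0

    def ranking(self):
        # Highest-scored results rise toward the top.
        return sorted(self.scores, key=self.scores.get, reverse=True)


ranker = ClickFeedbackRanker(["page_a", "page_b", "page_c"])
ranker.record_click("page_c")                   # a click on the third result
ranker.record_click("page_c", long_click=True)  # a long click: stronger vote
print(ranker.ranking()[0])  # page_c has climbed to the top
```

In Levy's terms, the testing step would correspond to releasing a new 
scoring rule to a small subset of users and keeping it only if the 
pattern of clicks improves.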

Like that old question, "how much information is there in the world?", 
the possibility that Google might sell the information it gathers about 
us is beside the point. (Soar catalogues what Google learns about us.) 
Google, he writes, isn't gathering information to sell. In 2007 Google 
released GOOG-411 in the U.S., a voice-driven service that would come 
back at you with the top 8 results for whatever you asked about.

> It soon became clear that what it was getting were demands for pizza
> spoken in every accent in the continental United States, along with
> questions about plumbers in Detroit and countless variations on the
> pronunciations of ‘Schenectady’, ‘Okefenokee’ and ‘Boca Raton’.
> GOOG-411, a Google researcher later wrote, was a phoneme-gathering
> operation, a way of improving voice recognition technology through
> massive data collection.

The service was dropped three years later, but by then, Soar writes, 
"The baby had learned to talk."

You can see where both Google and, on a smaller scale, IBM are going. 
"Distant reading" seems like quite small beer in comparison, and that's 
to understate massively. So a university professor wonders: what is the 
connection between what we're doing and where, in this respect, the 
world is going?

Comments?

Yours,
WM

-- 
Professor Willard McCarty, Department of Digital Humanities, King's
College London; Centre for Cultural Research, University of Western
Sydney; Editor, Interdisciplinary Science Reviews (www.isr-journal.org);
Editor, Humanist (www.digitalhumanities.org/humanist/); www.mccarty.org.uk/




