[Humanist] 23.337 the invisible middle

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Oct 2 14:57:56 CEST 2009


                 Humanist Discussion Group, Vol. 23, No. 337.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Fri, 02 Oct 2009 13:51:51 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: the invisible middle


Those here who are familiar with the history of work in artificial
intelligence will already know of Professor Sir James Lighthill's report on
the field commissioned by the U.K. Science Research Council in 1972,
basically for advice on what to do about supporting this work. (Lighthill
was at the time Lucasian Professor of Mathematics at Cambridge, i.e.
Stephen Hawking's predecessor; his biography is in the DNB.) After receiving
this report, the Council put it together with a primary response from
Professor N. S. Sutherland (Experimental Psychology, Sussex) and
commentaries from Dr R. M. Needham and Professors H. C. Longuet-Higgins and
D. Michie. A fascinating document altogether. It was published as Artificial
Intelligence: a paper symposium (London: Science Research Council, 1973).

What's especially interesting and relevant to us is how Lighthill's
partitioning of the work he reviews in effect excludes that which Sutherland
and others subsequently have regarded as the core of AI. In consequence of
this exclusion Lighthill is able to argue that the amalgam of activities
called "artificial intelligence" lacks genuine coherence.

Lighthill divides the work into three categories. Category A is advanced
automation, ranging from industrial and military applications to scientific
and mathematical ones, with the aim of replacing human work in specific areas.
Category C is research into central nervous system phenomena, both
neurobiological and psychological. Although both have to some degree
disappointed expectations, he says, progress in them is typical for
scientific research as a whole. By Category B he denotes a bridging activity
whose ambition is to build, from the results of A and C, an artificially
intelligent simulacrum, i.e. a robot. There, he argues, the disappointments
are far more widespread and severe. Unsurprisingly he invokes Mary Shelley's
Frankenstein, but he also, rather more gratuitously, cites an argument then
floating about that robot-building is driven by a male desire for
parthenogenesis. He dismisses this argument, but still he mentions it. Both
Frankensteinian and parthenogenetic resonances constitute interesting lines of
research to follow, but in context they seem to be there in order to help
cut the scientific ground out from under Category B altogether. 

In sum, A and C, which already belong to well-established fields, seem to him
best at home in those fields, toward which they are already in the process of
separating. B, he implies, should be abandoned.

Sutherland's critique and Longuet-Higgins' commentary are very insightful.
Sutherland argues for B as research basic to all parts of AI. He argues that
the disparity between wild forecasts and actual achievements, however much
we may be annoyed by that wildness, points to the complexity of what humans
do and how they do it. In effect he makes a negative argument for the
scientific value of forecasting that falls short of actuality: the shortfall
itself is part of finding out with computing how intelligent behaviour is done.
Longuet-Higgins praises Lighthill's searching questions about the
justification for doing AI, but like Sutherland he argues the scientific
case. The problem he fingers in AI is the bluntly technological appeal --
the utilitarian appeal to applicability -- then commonly made, and still
made. Research into something as challenging as human intelligence cannot,
he suggests, be defended by pointing to practical fruits, realisation of
which is too far out of reach. Anyhow those fruits, insofar as they can be
realised, are the business of the fields of application to worry about; they
are fruits for them.

I suggest an analogy between Lighthill's partitioning of AI and the
structure which places humanities computing proper between computer science
on the one side and the increasingly digital humanities on the other. The
problem isn't structural in either case. Rather it's the inability to see
what the mediating activity is all about, indeed its essential role in doing
something genuinely new by tackling our fundamental ignorance about
fundamental things. In our realm that inability is analogously the result of
construing basic research in terms of its deliverables: what humanities
computing can do for digital history, digital literary studies, digital
libraries et al. This suggests (does it not?) that, Realpolitik aside, as
long as the research agenda is set for humanities computing wholly by those
humanities for their own ends, what happens will be not only weak but also
vulnerable to the next Lighthill.

Comments?

Yours, WM
--
Willard McCarty, Professor of Humanities Computing,
King's College London, staff.cch.kcl.ac.uk/~wmccarty/;
Editor, Humanist, www.digitalhumanities.org/humanist;
Interdisciplinary Science Reviews, www.isr-journal.org.
