[Humanist] 23.342 the invisible middle

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Sun Oct 4 10:31:04 CEST 2009


Humanist Discussion Group, Vol. 23, No. 342.
Centre for Computing in the Humanities, King's College London
www.digitalhumanities.org/humanist
Submit to: humanist at lists.digitalhumanities.org

[1]   From:    amsler at cs.utexas.edu                                      (49)
Subject: Re: [Humanist] 23.338 the invisible middle

[2]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (54)
Subject: motivations and reactions

--[1]------------------------------------------------------------------------
Date: Sat, 03 Oct 2009 23:00:12 -0500
From: amsler at cs.utexas.edu
Subject: Re: [Humanist] 23.338 the invisible middle
In-Reply-To: <20091003070856.42E803C0BF at woodward.joyent.us>

"Isn't AI primarily intent on replicating mathematical and scientific genius?"

I think that's an old-fashioned view of what AI is about these days.
AI has taken on emotional communication and the recognition that
computer interfaces will need both to read faces and voices for their
emotional content and to express information back to human beings with
emotional content of its own.

The first rush of expectations that people had for AI was based on a
view that human beings did things using their intellect. I think that
has largely been disproved. Human beings are the culmination of a
billion years of evolutionary development that worked out tasks such
as vision, hearing, movement, reaction to physical stimuli, etc. These
tasks are actually much harder than intellectual tasks: not only did
we not 'invent' them, we don't even understand how we perform them.

I guess it is fair to ask whether the humanities have suffered from
some of the same delusional thinking as AI did in its early days: the
belief that the computer's role in performing intellectual tasks would
yield great advances in the humanities, while ignoring how much of the
humanities depends not on critical thinking but on the biological
aspects of being a living being--none of which we 'understand' at the
level required for machine emulation.

The problem is actually even worse than that. Even if we *did*
understand how we performed many tasks, we lack the computer hardware
to replicate those abilities. I.e., we're talking about hardware at
the level of molecular machines to replicate human sensory
capabilities. It is not surprising that when we do manage to create
a new computer ability that replicates a human one, it typically
doesn't involve the same biological or molecular components that
living organisms use to perform the task. Instead, we invent or
discover a wholly new mechanism to perform the task using physics and
inorganic chemical processes. (I think it's a really deep question to ask why,
if these capabilities exist, nature didn't choose to use them.)

So, the first task for humanities computing is to separate out the
non-critical-thinking components of tasks from the critical-thinking
aspects. Tasks done by critical thinking map directly onto computer
programs; the other components do not. Then comes the hard part: one
has to invent a means of using the hardware that civilization knows
how to build to perform aspects of biological capabilities we *think*
we have an idea how to perform, recognizing that the idea is likely to
be only partially comparable to the real biological process.

I think there is hope here. Some remarkable hardware exists today and
new capabilities come along every year. It does suggest there is a new
discipline to be created... something like 'humanities robotics' in
which humanists working with engineers attempt to create machines that
can express an emotional response to sensory data. A robot with an
aesthetic sense? A robot with moral understanding and the ability to
distinguish right from wrong?

--[2]------------------------------------------------------------------------
Date: Sun, 04 Oct 2009 09:03:38 +0100
From: Willard McCarty <willard.mccarty at mccarty.org.uk>
Subject: motivations and reactions

 
Sterling Fluharty's questions on AI and the humanities provoke some of my
own. In looking across the history of AI it's not surprising to discover
that an impression of a singular goal quickly resolves into a more diverse
and far more interesting range of visions and motivations. There are to be
sure examples of those who would replicate the human, or rather make a
superhuman artificial being. (Hence Seymour Papert's "superhuman human
fallacy", namely, in Pamela McCorduck's version, the idea that "machines
can't be said to 'think' unless they show superhuman skills".) This
dream is so old that it's not surprising to find it dreamt anew, sometimes in
forms we can all be glad have not been realised. And it's not surprising to
find it the darling of the popular press, such as Time Magazine and Life
Magazine, and the nightmare of others, from the earliest days. T.S. Eliot's
commentary on this sort of thing in "The Dry Salvages" comes to mind:

> To communicate with Mars, converse with spirits,
> To report the behaviour of the sea monster,
> Describe the horoscope, haruspicate or scry,
> Observe disease in signatures, evoke
> Biography from the wrinkles of the palm
> And tragedy from fingers; release omens
> By sortilege, or tea leaves, riddle the inevitable
> With playing cards, fiddle with pentagrams
> Or barbituric acids, or dissect
> The recurrent image into pre-conscious terrors—
> To explore the womb, or tomb, or dreams; all these are usual
> Pastimes and drugs, and features of the press....

I suspect that from a distance -- here a strong argument for
interdisciplinary awareness -- what happens is that (with some help) we
humanists select from AI what most pricks us on, and make a monolith of
that. Our fears conjure themselves, and have particular success with AI
because of the basic question it is asking, which I take to be this: can
we understand the human better by attempting to make it? I suspect that
this question hardens via the old doctrine of "verum factum" -- that
which is true is that which can be made, i.e. we discover the truth
about something by modelling it successfully -- into the notion that the
human can only be what can (eventually) be replicated. Then, by the Real
Soon Now Principle -- everything will be to hand *very* soon if only we
get the next large grant -- this becomes the notion that whatever we
cannot make Real Soon Now is only some fuzzy bit of emotional fantasy.

All this matters, especially to us, I think, because the questioning
within AI is deeply interesting from the perspective of the humanities.
And as Robert Amsler suggests, if humanities computing is to get
anywhere as a field of research (as opposed to implementation, with
novel decorations from time to time) it needs to pay attention, indeed
to strike up conversations with the folks in AI. We want intelligent
rather than stupid machines, no? Better tools rather than worse ones?

I keep wondering, why the fear? What does this fear point to? As more
than one wise person has said, show me your monsters and I will show you
who you are. What do the humanities have to say about the dream of the
superhuman?

Comments?

Yours,
WM
--
Willard McCarty, Professor of Humanities Computing,
King's College London, staff.cch.kcl.ac.uk/~wmccarty/;
Editor, Humanist, www.digitalhumanities.org/humanist;
Interdisciplinary Science Reviews, www.isr-journal.org.



