[Humanist] 23.358 angels & demons, or the invisible middle

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Thu Oct 8 07:51:35 CEST 2009


                 Humanist Discussion Group, Vol. 23, No. 358.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (66)
        Subject: angelology

  [2]   From:    "Clark, Stephen" <srlclark at liverpool.ac.uk>                (7)
        Subject: AI demons (was invisible middle)

  [3]   From:    Wendell Piez <wapiez at mulberrytech.com>                    (58)
        Subject: Re: [Humanist] 23.353 the invisible middle


--[1]------------------------------------------------------------------------
        Date: Wed, 07 Oct 2009 08:53:11 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: angelology

Bob Amsler's post in Humanist 23.353, on aiming at the best of the human 
in artificial intelligence research, deserves more than I can give it, 
but I will try. He says,

> What we need to do is create artificial intelligences  
> that embody our aspirations for the best of human qualities. That is,  
> beings capable of greater generosity, greater self-less-ness, greater  
> appreciation, greater empathy, indeed greater humanity than we expect  
> of the average person. We have an opportunity to create intelligent  
> beings who could serve as our own role models of how we would like to  
> behave.
> 
> It is curious. I seem to have specified the creation of artificial  
> intelligences as 'angels' to counteract the prevailing view of the  
> creation of artificial intelligences as some sort of 'demons' that  
> pervades much current science fiction. How would we go about creating  
> an angel?

I have three thoughts, the first on an unstated meta-aim, the second on 
an opportunity for research in the humanities, the third on a very 
difficult related problem I am having, or which is having me.

The kind of AI research I think most attractive to the humanities, and
which indeed shares the common aim of these disciplines, dwells on the
questioning byproduct of attempting to make artificial intelligences --
noticing, for example, that such attempts raise the question of what
intelligence is and provide the means for sharpening it. Sometimes it seems
a pity that an historian or philosopher, say, is not part of the
research project, where he or she could not only help with the questioning but
also benefit from it. As Katherine Hayles shows repeatedly and
brilliantly in How We Became Posthuman (1999), dealing with analogies 
between human and machine can result in a refiguring of both, and so 
itself pose a problem to which we must be alert. But at least the best 
of us is up to that job, I think. We can make analogies, test their 
strength and then move on. The meta-aim I'd like to see more attention 
given to, on both sides of the house, is the questioning, the byproduct, 
that all-important "hem of a quantum garment", as Jerry McGann says.

So to my second thought. Here, reading Amsler's post, a medieval 
intellectual historian should get quite exercised on the question of 
angelology and reach, say, for Henry Mayr-Harting's 1997 inaugural at 
Oxford, published as Perceptions of Angels in History (Clarendon Press, 
Oxford, 1998). What are we doing when we imagine, as now, angels? How 
does our activity now differ from what was going on then? How is it the 
same? How could we do angelology better, profit from it more? I have 
difficulty imagining a collaborative grant proposal that would be funded 
to include a Mayr-Harting alongside a Minsky, but wouldn't that be 
something?

My third thought. Let's say an intellectual historian got interested in 
angelology from Ottonian Europe to Obama's America. How would this 
be written so as to strike a middle course between accident and 
determinism? (This question definitely veers off from artificial 
angelology to a central problem of historiography.) Recently I have read 
two very good books -- Paul Edwards' Closed World and Hayles' How We 
Became Posthuman -- both of which leave me with the sense of stories 
well told, with a prodigious amount of careful scholarship and 
penetrating insight, but told so as to be more coherent than life is, 
leaving me with a haunting sense of claustrophobia. Then there are 
others, such as David Mindell's Between Human and Machine, that do not. 
Even as I feel that claustrophobia I am very grateful indeed 
for the books in question. But I wonder about how to do as good a job 
without the closure.

Comments?

Yours,
WM

-- 
Willard McCarty, Professor of Humanities Computing,
King's College London, staff.cch.kcl.ac.uk/~wmccarty/;
Editor, Humanist, www.digitalhumanities.org/humanist;
Interdisciplinary Science Reviews, www.isr-journal.org.



--[2]------------------------------------------------------------------------
        Date: Wed, 7 Oct 2009 09:41:45 +0100
        From: "Clark, Stephen" <srlclark at liverpool.ac.uk>
        Subject: AI demons (was invisible middle)


First: it's not true that science fiction is dominated by the AI-as-demon myth. There have been plenty of AI angels, from Asimov's Multivac and his robots to John C. Wright's Golden Transcendence. And some that are just other creatures, with a different physical base (see Robert Heinlein's The Moon Is a Harsh Mistress).

Second: the chief fear has been of angels rather than demons! That is, we fear that they will act for our own good - as they conceive it - rather than for what we actually want, and our freedom to try and get it. See Jack Williamson's The Humanoids - and some of Asimov's own later work, especially as it has been amplified by Gregory Benford & co in the second Foundation trilogy.

Third: one serious problem - and it's one that has been an issue in theology as well - is that AIs won't have any experience themselves of pain and helplessness, and will therefore be unable to understand what our experience is like. This is explored a little in Rudy Rucker's Software.

Prof Stephen R.L.Clark
University of Liverpool
http://www.liv.ac.uk/info/staff/A639849
http://pcwww.liv.ac.uk/~srlclark/srlc.htm



--[3]------------------------------------------------------------------------
        Date: Wed, 07 Oct 2009 15:09:43 -0400
        From: Wendell Piez <wapiez at mulberrytech.com>
        Subject: Re: [Humanist] 23.353 the invisible middle
        In-Reply-To: <20091007060737.A63603DA3D at woodward.joyent.us>

Willard and HUMANIST,

At 02:07 AM 10/7/2009, you sent:
>--[3]------------------------------------------------------------------------
>         Date: Tue, 06 Oct 2009 23:47:37 -0500
>         From: amsler at cs.utexas.edu
>         Subject: Re: [Humanist] 23.347 the invisible middle
>
>I would offer this hopeful view of the future creation of artificial
>intelligences. What we need to do is create artificial intelligences
>that embody our aspirations for the best of human qualities. That is,
>beings capable of greater generosity, greater self-less-ness, greater
>appreciation, greater empathy, indeed greater humanity than we expect
>of the average person. We have an opportunity to create intelligent
>beings who could serve as our own role models of how we would like to
>behave.
>
>It is curious. I seem to have specified the creation of artificial
>intelligences as 'angels' to counteract the prevailing view of the
>creation of artificial intelligences as some sort of 'demons' that
>pervades much current science fiction. How would we go about creating
>an angel?

What a great response.

It's especially interesting given the context. As indicated by the 
citations that started this thread, the quest for AI is usually not 
proposed with the intention of creating intelligence only for its own 
sake. At least in part (and apart from any notion of usefulness), it 
is a research project based on the idea that in order to create an 
artificial intelligence, we have to develop our understanding of what 
an intelligence is and how it works.

Making an AI by accident, as in some dystopian fantasies in which we 
wake up to discover the phone system or the orbital defense network 
has awakened while we were sleeping (like SkyNet in Terminator, also 
cited in this thread), doesn't count. Even if it's already happened. 
What are computer viruses if not the network's way of exploiting 
human psychology to immunize itself against breakdown? By enabling us 
to send spam, and viruses, and virus alerts, the Internet causes us 
to devote more energy and resources to itself and its (and our) weaknesses.

So, does creating an angel by accident count? Or do we have to 
understand what it is to be an angel before we can create an angel of 
any real use to us? After all, there are already angels -- at least 
of some variety, or at least I think so -- but somehow they don't 
ultimately manage to rescue us from ourselves.

Maybe that's because in order to be rescued by angels -- in order to 
recognize them or make them or be made angels by them -- we need to 
discover and realize our own angelic capacities.

And if we can accomplish that, we may not need the machines, except 
as instruments upon which we can play praises to our own and their creation.

Cheers,
Wendell

=========================================================
Wendell Piez                            mailto:wapiez at mulberrytech.com
Mulberry Technologies, Inc.                http://www.mulberrytech.com
17 West Jefferson Street                    Direct Phone: 301/315-9635
Suite 207                                          Phone: 301/315-9631
Rockville, MD  20850                                 Fax: 301/315-8285
----------------------------------------------------------------------
   Mulberry Technologies: A Consultancy Specializing in SGML and XML
=========================================================




