[Humanist] 23.726 persistent fear

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Mar 26 09:53:07 CET 2010


                 Humanist Discussion Group, Vol. 23, No. 726.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    Wendell Piez <wapiez at mulberrytech.com>                    (66)
        Subject: Re: [Humanist] 23.720 persistent fear?

  [2]   From:    amsler at cs.utexas.edu                                      (31)
        Subject: Re: [Humanist] 23.721 persistent fear


--[1]------------------------------------------------------------------------
        Date: Thu, 25 Mar 2010 12:30:57 -0400
        From: Wendell Piez <wapiez at mulberrytech.com>
        Subject: Re: [Humanist] 23.720 persistent fear?
        In-Reply-To: <20100324063212.1FBAA5063B at woodward.joyent.us>

Dear Willard,

In my day-to-day, I don't much encounter the fear you describe, but I 
certainly hear echoes of it all around me.

However, the noise I hear isn't specifically from people who fear 
computers and their supposed capabilities, actual or potential. 
Indeed (like you, I suspect), I find this kind of thing rather 
quaint and touching. In my opinion, we have much more to fear from 
our computers than that they will learn to read poems; if that's 
what the computers of the world were doing, I'd feel better. I am 
not naturally a fearful person, but I'm afraid the fear is 
altogether justified.

What people fear isn't the "intelligent" computer, but the stupid 
"system" that uses it.

The metaphorical creature we would not want to share a room with is 
already here. It envelops and sustains us. It does not just share the 
room (invited in with a signal from a remote control); it owns it. It 
is capricious, implacable, and unaccountable, and our lives depend on it.

Whether it be the fear of bankers and financiers, of health-care 
providers or the lawmakers who seek to regulate them, of automobiles 
that literally run away with their drivers, or only of "number 
crunchers" with spreadsheets, who get to decide whether we are being 
"productive" enough, or maybe our jobs should be "downsized" ... in 
very many respects this fear is entirely justified and warranted. We 
place ourselves and our livelihoods in the hands of programmers and 
bureaucrats, sheltered and shut in like ourselves, and demand that 
they act fairly and according to regulations, yet protest when we are 
ruled and overruled. And why shouldn't we? Justice is blind, okay, 
but what is justice without mercy?

I think the metaphorical creature with his foot in the door is simply 
a synecdoche for something else. The poetry-reading computer -- the 
*interpreting* computer -- isn't a threat in itself. It's the use to 
which it can and (we have every reason to expect) will be put, not by 
ourselves (we think) but by faceless others, that is threatening. If 
it can read poetry, we wish to know, what else can it read? The fact 
that computers, as appliances, are so ubiquitous, so familiar and so 
necessary (how many of us would now literally be screaming without 
our mobile devices?) only underlines and defines this problem and how 
we are implicated within it.

Cheers,
Wendell

At 02:32 AM 3/24/2010, you wrote:
>Yesterday, in a class I teach to PhD students from a variety of
>disciplines, the subject of computer science and its ambitions came up.
>I tried to explain in terms I thought would be fully acceptable to
>people in the humanities and social sciences what could now be done,
>e.g. with literary language, and how what could be done raised very
>interesting questions of the sort that such people ordinarily entertain.
>But I was in for a surprise. Perhaps I should not have been surprised by
>the reactions of a mature student, now retired and pursuing his degree
>for the love of the subject, who thought these advances in computing
>represented a "foot in the door" of a metaphorical creature we would not
>want to share a room with. But clearly the younger sorts were bothered
>as well, and one of them volunteered afterward that he thought the heads
>of department at a recent gathering he attended would not be welcoming
>either.
>
>Now fear of computing is one of my favourite subjects....

==========================================================
Wendell Piez                            mailto:wapiez at mulberrytech.com
Mulberry Technologies, Inc.                http://www.mulberrytech.com
17 West Jefferson Street                    Direct Phone: 301/315-9635
Suite 207                                          Phone: 301/315-9631
Rockville, MD  20850                                 Fax: 301/315-8285
----------------------------------------------------------------------
   Mulberry Technologies: A Consultancy Specializing in SGML and XML


--[2]------------------------------------------------------------------------
        Date: Thu, 25 Mar 2010 11:59:16 -0500
        From: amsler at cs.utexas.edu
        Subject: Re: [Humanist] 23.721 persistent fear
        In-Reply-To: <20100325062715.755F74F2C2 at woodward.joyent.us>

One problem I've encountered with humanists who approach computing  
issues is the way in which some pose their questions. They often begin  
with a question of the form "Can the computer do such and such a  
task?" This phrasing reveals a fundamental psychological premise that  
needs to be corrected immediately.

The problem is that computers consist of hardware designed by people  
and run software written by people. The subtle implication of asking  
whether "the computer" can do something quite naturally adds an  
element of fear to the question because it leaves open the issue of  
the computer's intelligence and free will. Until the questioner  
understands that they are asking whether some human beings have  
figured out how to construct and instruct a machine to carry out some  
task, it is best to halt the conversation and explain why the question  
as posed doesn't make sense.

In many other cases it may seem trivial that the questioner doesn't  
assume the presence of intelligence and volition on the part of the  
entity being inquired about ("Can this book tell me about such and  
such?" "Can this bridge take me across the river safely?"), but in the  
case of computers there isn't necessarily the same appreciation of  
inanimateness. At least, I've seen the possibility of an epiphany  
when the distinction is pointed out.

Humanists of all people should appreciate the significance of the  
concept of people being behind computers and computing. Good people,  
bad people, intelligent people, ignorant people, socially adept  
people, socially inept people, all kinds of people.

When a computer doesn't perform as expected I'd much prefer someone  
say, "Who designed this to work this way?" than "The computer doesn't  
understand me".

Remember the scene in The Wizard of Oz in which the Great and Powerful  
Oz answers Dorothy's questions, and the revelation that the man  
behind the curtain was the one controlling it all along.
