[Humanist] 25.339 master or servant

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Sat Oct 1 06:37:48 CEST 2011


                 Humanist Discussion Group, Vol. 25, No. 339.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    Wendell Piez <wapiez at mulberrytech.com>                    (91)
        Subject: Re: [Humanist] 25.336 master or servant

  [2]   From:    amsler at cs.utexas.edu                                      (54)
        Subject: Re: [Humanist] 25.336 master or servant


--[1]------------------------------------------------------------------------
        Date: Fri, 30 Sep 2011 13:13:25 -0400
        From: Wendell Piez <wapiez at mulberrytech.com>
        Subject: Re: [Humanist] 25.336 master or servant
        In-Reply-To: <20110930052052.D3EA01C1DF8 at woodward.joyent.us>

Dear Willard and HUMANIST,

On 9/30/2011 1:20 AM, Humanist Discussion Group wrote:
>> And I wonder: is the fondness for making computers into morally
>> neutral servants so attractive because it turns aside the terror of
>> having an artificially intelligent companion staring us in the
>> face? And so dismisses the opportunity for us to grow by leaps and
>> bounds?
>
> I find that instead of thinking in binary terms of master/slave, we
> can understand our relationship to computers as a form of distributed
> cognition, a notion developed by Edwin Hutchins (and probably others)
> in his book Cognition in the Wild.  Human will and intelligence are
> no longer at the center of human activity:  they are part of a wider
> network of interactive (perhaps even emergent) processes of which
> neither humans nor machines is in complete control.  In this sense,
> humans and machines coexist in a symbiotic relationship.  How long
> has this relationship been around?  At least since the invention of
> clocks.  From what little I know of anthropology, human thought has
> always been determined in part by embodiment and the use of tools and
> other objects for interacting with the environment.  Humans always
> think within an environment that exceeds what they can know.
> Cartesian dualism has led us away from this truth.

Thanks to Mark for this insightful post. I think it's a valuable 
corrective to Willard's question. (Willard, Allen said "drudge", not 
"servant". I think the difference is significant. As you know, drudgery 
is regarded in some traditions as potentially a path to enlightenment; 
in any case, I dare say, it is performed as often out of love as it is 
under duress or in fulfillment of a contract. To say computers embody 
love may be a bit of a stretch, but the imps in the machine do not work 
to fulfill a contract either, or at least none they know of; their 
drudgery is unconscious and thus arguably not drudgery at all. And love 
as much as labor may go into the work of the engineers who deploy and 
programmers who command them.)

As for Mark's comment, I agree that Edwin Hutchins' work is highly 
relevant -- while I think that this kind of distributed intelligence 
goes back far longer than the clock. We see it in packs of wolves and we 
would undoubtedly see it in teams of hunters stalking megafauna, just as 
we see it on the playing field, or among the crews of the ships Hutchins 
studies.

What the clock and then the computer do differently, I think (in degree 
if not in kind), lies in the support they give to the formalization and 
codification of process, thereby allowing a certain kind of intelligence 
(a dumb kind, inasmuch as its responsiveness is limited by its 
specification) to be externalized, rendered impersonal and freed from 
time and space. This is what enabled the assembly lines of the 
nineteenth century to generate, at superhuman scale, their quantities of 
guns and then carriages and cars; and it is what allows a lone scholar 
to have the words in a text counted for him or her -- if only we can 
determine and agree first on what we mean by "text", "word" and "count".
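
To make the point concrete (a minimal sketch, not part of the original 
post; the sentence and the three definitions of "word" are invented for 
illustration), the same text yields three different totals depending on 
what we agree to count:

    import re

    # An invented example sentence; the totals below depend entirely
    # on what we agree a "word" is.
    text = "The drudgery -- if it is drudgery -- doesn't disappear."

    on_whitespace   = text.split()                     # runs of non-space characters
    letters_only    = re.findall(r"[A-Za-z]+", text)   # runs of letters only
    with_apostrophe = re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)*", text)

    print(len(on_whitespace))    # 10 -- each "--" counts as a word
    print(len(letters_only))     # 9  -- "doesn't" splits into "doesn" and "t"
    print(len(with_apostrophe))  # 8  -- "doesn't" stays one word

Whichever total the scholar receives depends on a definition that 
someone, somewhere, has codified on his or her behalf.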

Yet it comes at a price. Because these processes are now external, we 
can be oblivious to and in some way alienated from them even while we 
depend on them. In fact, we must be, if only because to understand them 
requires so many brains. There is no single person anywhere who 
understands, in detail, everything that happens inside the computing 
machinery under my fingers.

For this reason, I am less troubled by the fear of an artificially 
intelligent companion staring back at me, servant or not, than I am by 
the knowledge that there are deeper processes underlying these 
processes, on which they in turn depend, and the fear that those 
processes (for example, the mining of rare metals, or the industrial 
processes by which they are assembled), performed on my behalf, may 
implicate me in ways I don't even imagine. It becomes a mystery whether 
the drudgery actually disappears, or has only been concentrated and 
displaced. Mark suggests that none of us is in control and that we and 
our machines are symbiotic; I think this is true, although if we 
distinguish ourselves from our machines at all (rather than saying we are 
fabric), I'd rather say the machines -- so far -- only provide part of 
the platform and medium for our symbiosis, and participate (again, so 
far) only as proxies. But if individual human will and intelligence are 
no longer at the center (assuming they ever were), where does that leave 
us as individuals? Where does that leave me?

The only answer I can give to that might lie in that idea of 
responsiveness. The machine, so far, can only respond to what it is 
programmed to expect. But at least as long as I exercise will and 
intelligence, I can plan and prognosticate, and recognize and respond to 
the unexpected. (Not only is the mind bigger than it knows, it is bigger 
than it can know.) In such responsiveness, I like to imagine, is the 
possibility of responsibility. And this is the drudgery, I think, that 
really counts.

Cheers,
Wendell

-- 
======================================================================
Wendell Piez                            mailto:wapiez at mulberrytech.com
Mulberry Technologies, Inc.                http://www.mulberrytech.com
17 West Jefferson Street                    Direct Phone: 301/315-9635
Suite 207                                          Phone: 301/315-9631
Rockville, MD  20850                                 Fax: 301/315-8285
----------------------------------------------------------------------
   Mulberry Technologies: A Consultancy Specializing in SGML and XML



--[2]------------------------------------------------------------------------
        Date: Fri, 30 Sep 2011 12:58:36 -0500
        From: amsler at cs.utexas.edu
        Subject: Re: [Humanist] 25.336 master or servant
        In-Reply-To: <20110930052052.D3EA01C1DF8 at woodward.joyent.us>

I think it is important to note that human beings tend to attribute 
human features to everything. Three circles are seen as two eyes and a 
mouth. A Roomba vacuum cleaner is seen as having a personality. Chess 
grandmasters and Jeopardy players see human traits in computer 
opponents. We see automobiles and machinery as being temperamental.

To some degree this is how we cope with the world's complexities. 
Having a model of how we ourselves operate, we attribute the same 
traits to the things we interact with, especially if they exhibit 
non-random behavior (sometimes even if they do exhibit random 
behavior -- like sacrifices to the volcano god).

However, I'd venture to say that computers don't share our  
perspective. They only have their logic to guide them. Thus, when we  
discuss the computer in a master/slave relationship, we are actually  
discussing either someone else's intention in creating an interface to  
present to us--or our own mind's interpretation of a set of logic  
rules operating together. Now, this isn't to say that the human 
designers of computers couldn't intentionally add logic to give the 
illusion of personality, and such efforts will undoubtedly grow more 
sophisticated in the future; but the master/servant paradigm constricts 
the range of relationships that we might have with computers, and it 
probably reveals more about the minds of the humans perceiving that 
relationship than about what is actually happening. 
I.e., if you fear computers, it is a comfort to think of 'good' 
computers as servants and 'bad' computers as masters. If you do not 
fear computers (any more than you rationally fear automobiles, garbage 
disposals, or lawn mowers), you would simply build a mental 
model of them based on inferred capabilities and known inabilities. 
You don't trust your lawnmower's judgment about what it should cut 
and should not cut. But you don't doubt its capability to cut things 
when it runs over them. I am the master of my lawnmower. My lawnmower 
is my servant -- but I wouldn't try to prove that by sticking my foot 
under it and assuming it wouldn't hurt me because it knows that 
relationship. The relationship is entirely in my mind. The lawnmower 
doesn't care.

Computers don't care what you think about them (unless someone  
programs them to act as though they care--and people are doing that  
already). It's a convenience to have machines respond as if they were 
human and had human feelings -- a convenience, added to their logic by 
the people who design them, in service of their intended goals. Maybe 
people will become more gullible if machines act more human-like. Would 
this be a good thing? Anti-lock brakes were a good thing, but they 
required people who had learned to pump their brakes instead of slamming 
them on hard to readjust their behavior: the brakes were now intelligent 
enough to expect people to slam on the brakes (and to ignore that 
behavior), but did not expect people to pump the brakes when faced with 
the imminent fear of the vehicle going out of control.

So it will be with computers. As we make them more intelligent, to 
compensate for the possible lack of training of the people who use 
them, they will not do what their predecessor computers did; they will 
do what their designers thought they should do, given what human beings 
were expected to do without instruction in how the computer works. This 
is an effort at symbiosis. It is an effort at merging human 
expectations with computer responses.




