[Humanist] 28.916 robots; continuities & universals

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Mon Apr 27 07:21:43 CEST 2015


                    Humanist Discussion Group, Vol. 28, No. 916.
           Department of Digital Humanities, King's College London
                        www.digitalhumanities.org/humanist
               Submit to: humanist at lists.digitalhumanities.org

[1]   From:    "Charles M. Ess" <c.m.ess at media.uio.no>                  (145)
       Subject: Re:  28.913 more than robot?

[2]   From:    Paul Fishwick <metaphorz at gmail.com>                      (141)
       Subject: Re:  28.914 comments on continuities and universals

--[1]------------------------------------------------------------------------

      Date: Sun, 26 Apr 2015 10:30:15 +0200
      From: "Charles M. Ess" <c.m.ess at media.uio.no>
      Subject: Re:  28.913 more than robot?
      In-Reply-To: <20150426075120.CFBE76183 at digitalhumanities.org>

Funny you should mention ... I've had a long-standing interest in these
matters since the 1960s, and have pursued them in conjunction with the
"computational turn" as it was called in the States, starting in the
1970s, which conjoined philosophical analysis and reflection with
emerging developments in computer science, computation, AI, etc.  For
the past three years or so I have focused more specifically on social
robots - including co-editing a special issue of the International
Journal of Social Robotics that will come out shortly.

Most recently, I've put together a paper that summarizes a good dose of
the contemporary findings and states of the arts regarding the strengths
and limits of computation, including AI, and social-robotic specific
work on such things as artificial emotion and embodiment.

_Contra_ the strong optimism surrounding what is now called Good Old
Fashioned AI (GOFAI) in the 1970s-1990s, there seems to be consensus in
at least many philosophical quarters that a number of aspects of being
human remain computationally intractable, and may well remain so for
decades, if not centuries to come (Ray Kurzweil is obviously not in this
camp: no comment).

These include: reflective self awareness as a genuinely autonomous
being; specific forms of judgment and reasoning critical to ethical
reflection and decision-making, including _phronesis_ and analogical
reasoning (especially necessary for casuistic approaches to ethics, for
example); and emotional self-awareness, which has emerged over the past
couple of decades as being necessary for ethical decision-making
processes in humans.

Some of these can be "punted" one way or another - approximated in
varying degrees, so that, e.g., John Sullins speaks of "artificial
autonomy," artificial ethical agency, and artificial phronesis to
demarcate what is possible for machines as limited in comparison with
their human counterparts.

A key problem, however, is implementing emotional self-awareness -
"artificial emotion" is rather the project of faking the expression of
emotion with sufficient sophistication as to trigger human belief and
anthropomorphic responses.  This can be quite useful, e.g., in robots
designed for elder care, autism spectrum disorders, etc. But
implementing something close to a human sense of emotional
self-awareness as an embodied being seems so intractable that there is
apparently little effort still focused on it.

In these lights, I and Sullins have argued - e.g., against Levy's
optimism regarding robots as lovers - that specific forms of human love
will likewise remain computationally intractable: what Plato accounted
for in terms of _eros_ - and what the feminist phenomenologist Sara Ruddick
describes as "complete sex".  The latter requires, among other
conditions, the emotional awareness of desire - not simply for the Other
(as an autonomy and as an equal), but rather a mutuality of desire,
including the desire that my desire (for the Other) be desired (by the
Other).

What is also helpful, in my view, with these accounts is that they
further link to virtue ethics - specifically, our having to acquire and
practice virtues requisite to friendship and love, including e.g.
patience, perseverance, empathy, respect for persons as equals, and
loving itself as a virtue.

A first upshot of this analysis is that social robots, whatever other
strengths and capacities they may develop and benefit us with, will
remain incapable of emotional self-awareness as an embodied being that
includes genuine sexual desire within erotic relationships.

But a second upshot of this analysis is that if human beings are to
be/become distinguishable from social robots in general and sexbots in
particular - we in turn are enjoined to acquire and practice the virtues
that make us better friends and lovers.

The paper will appear this year in an anthology from Ashgate, edited by
Marko Nørskov (Aarhus University), titled _Social Robots: Boundaries,
Potential, Challenges_.  If anyone's interested, I can share a
pre-publication version for scholarly / fair use work.

Of course, critical comments and suggestions welcome -

Charles Ess
Professor in Media Studies
Department of Media and Communication
University of Oslo
c.m.ess at media.uio.no

On 26/04/15 09:51, Humanist Discussion Group wrote:
>
>
>
>          Date: Sun, 26 Apr 2015 07:42:29 +0100
>          From: Willard McCarty <willard.mccarty at mccarty.org.uk>
>          Subject: more than a robot?
>
>
> In 1941 the American engineer Harold Hazen (student of Vannevar Bush in
> the 1930s) wrote an influential memo to Warren Weaver, "Theory of
> Servomechanisms", in which he sketched the relation between humans and
> automatic control systems. The significance of this memo for the history
> of control engineering and cybernetics, and so for the computing we have
> inherited, is explained by David Mindell in DesignIssues 29.1 (2013):
> 30-37, which reprints Hazen's text. But here allow me to focus on a point
> Hazen makes at the end of it.
>
> After describing the human-machine link in these systems he writes,
>
>> This whole point of view of course makes the human being in this
>> capacity nothing more nor less than a robot which, as a matter of
>> fact, is exactly what he is or should be.
>
> But he then goes on to point out that unlike the automatic systems then
> in existence, the human operator (whom he, in the midst of World War
> II, is considering as a tracker of a fast-moving aircraft in a fire-control
> system),
>
>> can determine from the orientation of a moving object something of
>> its expected acceleration which may help him in anticipating its
>> future movement. Furthermore, he is endowed with a memory that should
>> permit him to extrapolate from the past history into the future, a
>> feature that is not possessed by any of the simpler mechanisms we
>> have at present.
>
> He then concludes that,
>
>> we can expect to do rational design leading to best over-all
>> performance only if we know the fundamental dynamic characteristics
>> of the human links and incorporate these into our designs as we have
>> found we must include all of the dynamic characteristics of component
>> parts in the successful design of mechanical automatic control
>> systems.
>
> Yes, of course, most of us don't react particularly well when confronted
> with the idea of a human as robot, "which, as a matter of fact, is
> exactly what he is or should be" under the circumstances Hazen
> describes. But note the greater-than-machine and where it fits into the
> process of research design, which is exactly Hazen's point. Are we not
> (changing what needs to be changed) involved in the same process? Should
> not all our energies be focused on that greater-than-machine, attempting
> to implement it so that another greater-than can be illumined?
>
> Comments?
>
> Yours,
> WM
>

--
Professor in Media Studies
Department of Media and Communication
University of Oslo

Director, Centre for Research in Media Innovations (CeRMI)
Editor, The Journal of Media Innovations
<https://www.journals.uio.no/index.php/TJMI/>
President, INSEIT <www.inseit.net>

Postboks 1093
Blindern 0317
Oslo, Norway
c.m.ess at media.uio.no

--[2]------------------------------------------------------------------------

        Date: Sun, 26 Apr 2015 09:04:42 -0500
        From: Paul Fishwick <metaphorz at gmail.com>
        Subject: Re:  28.914 comments on continuities and universals
        In-Reply-To: <20150426075631.0EAB52DDD at digitalhumanities.org>

Willard

You raise many good questions. Here are some thoughts and I would delight in
hearing your responses and the thoughts of others. Permit me to take one of your
points here for a text I have not yet read:

>> What the author has
>> brilliantly set out to do is outline the full dimensions of the vast
>> upheaval around us, clearly defining both its positive and negative
>> aspects and cogently arguing for a response that will make man master
>> rather than slave of his own inventions. 

As someone who dabbles in the disciplines that are inside of the technology
all-in-one buzzword (mathematics, science, and engineering), I find myself
wondering why we try to separate human from machine, when it seems that when
we picked up a stick to beat the bushes, we became one with technology, with
the machine, and became natural parts of it. This is not recent, nor
shocking. We've been an integral part of machines in the abstract information
sense (e.g., clan rituals, government, social organizations) for millennia.
I read recently that evidence of the first ritual (a type of machine) dates
to 70,000 years ago. 

In some recent emails in my organization, I have discovered that some
imagine that there is a thing called technology which is something outside
of them, but nothing could be more mistaken. As soon as you flip a light
switch, or use a computer program, not only does your thinking change; your
epistemology fundamentally changes, whether or not you like it. Perhaps this
is why children are so adept at adopting new modes of thinking as a result
of technology? Their thinking has changed. They think and operate
differently.

Is this a bad thing? This is where we study value systems and how each
discipline deals with the situation. But are there those who labor under the
idea that their thought processes are immune from the stick used to beat the
bushes, or the word processor used to write email responses? How can one
logically externalize "machine" when the biochemical processes that operate
inside of us have most of the characteristics necessary to define a
machine?

If you and others can direct me to books that discuss this issue in depth, I
would appreciate it. Specifically, I am interested in any authors whose
point is that we are naturally machines, and parts of machines. I know I can
return to Descartes but I am seeking more recent humanities-situated
scholarship on our inherently (and naturally) cyborg nature.

-paul

On Apr 26, 2015, at 2:56 AM, Humanist Discussion Group
<willard.mccarty at mccarty.org.uk> wrote:


>
>
>
>        Date: Sat, 25 Apr 2015 21:56:34 +0100
>        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
>        Subject: continuities & universalities
>
>
> It is, it seems, inevitable that we approach the subjects we study with
> pendulum-like variation from an exclusive focus on discontinuities and
> local variations to the opposite, on continua and universals. A
> concentration on one brings the other eventually to the fore. The
> history of the history and philosophy of science in the last century,
> for example, swung from the universalism of The Scientific Method and
> efforts toward a unified science on the one hand to the diversity of
> methods, or no method at all, disunity of the emphatically pluralized
> sciences and "local theory" (Peter Galison's phrase). But at its height
> seeing diversity everywhere has provoked protest. Jed Buchwald and Allan
> Franklin have taken specific aim at "the tendency to regard science as
> purely local and contextual" in their introduction to Wrong for the
> Right Reasons (Springer, 2005). Closer to home Evelyn Fox Keller has
> taken issue with disunity in Peter Galison's and David Stump's The
> Disunity of Science (1996), where she begins with objections to "much in
> contemporary postmodern discourse, with its insistent emphasis on
> ruptures, fractures, and disunities" in order to discuss illuminating
> continuities that, among other things, remind us "of the power of
> cumulative amnesia" to erase history.
>
> This amnesia is greatly fuelled by the dominant rhetoric of
> technological progress and its constant companion, technological
> determinism, which hand in hand obliterate history and have us forever
> thinking we're just now on the verge or on the brink, depending on the
> mood of the moment, for the very first time. I've been reminded again of
> the need for continuities to counter this amnesia by Emmanuel G.
> Mesthene's little book, Technological Change: Its Impact on Man and
> Society (New York: New American Library, 1970). Mesthene, by
> the way, was Director of the Harvard Program on Technology and
> Society, established in 1964 with a grant from IBM "to undertake an
> inquiry in depth into the effects of technological change" (p. 91).
> Mesthene and his Program were responsible for many publications,
> e.g. "How technology will shape the future", Science NS 161.3837
> (1968): 135-43.
>
> There are signs of age throughout his book, as the title already signals
> by the use of "Man". But there's much that has not changed. The back
> cover of this deteriorating paperback, printed on paper made from
> wood-pulp, informs us of the then popular reaction to technological
> change:
>
>> TECHNOLOGICAL CHANGE IS:
>> (1) GOOD
>> (2) EVIL
>> (3) OVERRATED
>
> (Note how easily these three alternatives are translated into 21st
> Century terms.) "The truth, however, is far more complex than any of
> these points of view suggests", Mesthene goes on to inform us. And we
> still need to be told that.
>
>> As this penetrating study points out, the accelerating rate of
>> technological change is outstripping traditional categories of
>> thought in the same way that it is rendering obsolete many
>> established institutions and values of society. What the author has
>> brilliantly set out to do is outline the full dimensions of the vast
>> upheaval around us, clearly defining both its positive and negative
>> aspects and cogently arguing for a response that will make man master
>> rather than slave of his own inventions. The result is a volume that
>> offers us remarkably stimulating new perspectives on a problem that
>> can be ignored only at our gravest peril.
>
> This was written in the U.S. at the height of the Cold War, so perhaps
> the sense of urgency dates and localizes it as well. But again, what
> impresses me are the continuities. It is for this reason that I am
> deeply suspicious of all the post-isms by which we are currently afflicted.
>
> Comments?
>
> Yours,
> WM
>
> --
> Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
> Humanities, King's College London, and Digital Humanities Research
> Group, University of Western Sydney

Paul Fishwick, PhD
Chair, ACM SIGSIM
Distinguished University Chair of Arts & Technology
and Professor of Computer Science
Director, Creative Automata Laboratory
The University of Texas at Dallas
Arts & Technology
800 West Campbell Road, AT10
Richardson, TX 75080-3021
Home: utdallas.edu/atec/fishwick
Lab Blog: creative-automata.com
SIGSIM Blog: modelingforeveryone.com
