[Humanist] 26.60 between the idea and the reality?
Humanist Discussion Group
willard.mccarty at mccarty.org.uk
Sat Jun 2 00:03:20 CEST 2012
Humanist Discussion Group, Vol. 26, No. 60.
Department of Digital Humanities, King's College London
Submit to: humanist at lists.digitalhumanities.org
Date: Sat, 02 Jun 2012 08:00:30 +1000
From: Willard McCarty <willard.mccarty at mccarty.org.uk>
Subject: between the idea and the reality?
Some here may know that in 1970 Masahiro Mori, in "The Uncanny Valley"
(Energy 7.4: 33-5), argued that in the progress of robotics toward
increasingly humanoid appearance and action (where that is the goal of
the designer), human psychological response to the robot is increasingly
positive until a certain point of near resemblance. At that point, quite
suddenly, this response becomes strongly negative. In other words, we
freak out (as one used to say). Response stays negative until the
resemblance has become considerably closer and then becomes positive
again. Some will claim that James Cameron's movie Avatar marks the first
popular VR creation to have made it to the other side of this uncanny
valley.

Clearly if your goal in the design of computational devices is for the
artificiality of the device to pass unnoticed, then you want to get to
the other side of the valley as soon as possible. It might be argued,
however, that in doing so you lose big time: you lose the challenge to
our conception of ourselves. You design for a mirror, or we could say an
artificial companion or collaborator, for which a human could be
substituted. Perhaps you design for a particular kind of collaborator,
say someone who talks like a behaviourist of the Skinnerian kind, or a
perfect Chomsky linguist, or a Bakhtin. Certainly in the near- to
medium-term future, the companion will be a dogmatic, stereotypical
sort. This might be rather interesting. But still I cannot help but
think that the goal needs some serious questioning.
I think that an AI person (anyone here correct me if I am wrong) would
argue that it's only a matter of time until this goal is close enough
that our not having thought it through would become a serious error.
Let's assume that to be the case. We are closer to the goal than many of
us may suppose. Talk to an automobile designer, for example.
What do you think? In the design of research environments what exactly
do we want to have? I would argue that what we do not want, or should
not want, is the perfect (amoral) slave. But if not that, then what?
Willard McCarty, FRAI / Professor of Humanities Computing & Director of
the Doctoral Programme, Department of Digital Humanities, King's College
London; Professor, School of Computing, Engineering and Mathematics,
University of Western Sydney; Editor, Interdisciplinary Science Reviews
(www.isr-journal.org); Editor, Humanist