[Humanist] 23.347 the invisible middle

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Mon Oct 5 07:20:25 CEST 2009


                 Humanist Discussion Group, Vol. 23, No. 347.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    James Rovira <jamesrovira at gmail.com>                      (58)
        Subject: Re: [Humanist] 23.342 the invisible middle

  [2]   From:    Sterling Fluharty <phdinhistory at gmail.com>               (198)
        Subject: Re: [Humanist] 23.342 the invisible middle


--[1]------------------------------------------------------------------------
        Date: Sun, 4 Oct 2009 09:24:00 -0400
        From: James Rovira <jamesrovira at gmail.com>
        Subject: Re: [Humanist] 23.342 the invisible middle
        In-Reply-To: <20091004083104.5034B382FC at woodward.joyent.us>

A moment of shameless self-promotion: my forthcoming book, Blake and
Kierkegaard: Creation and Anxiety (Continuum, June 2010), attempts to
answer this very question, Willard.  It doesn't directly address AI,
but it asks, "Why do we persistently imagine that if human beings were
to create an independent intelligence, it would rebel against us,
usually with apocalyptic results?"  What first made me ask this
question was watching the first Matrix film (AI is related, after all)
and realizing it is one of many retellings of the Frankenstein story.
I then looked a little further back from
Frankenstein and realized Blake's Urizen is the ur-fallen Creator --
the embodiment of Victor Frankenstein's phenomenology, not just a
person who possesses it.  I use Kierkegaard to provide a definition of
anxiety which I then apply to Blake's creation narratives, and justify
the application of Kierkegaard to Blake on the basis of a shared
intellectual history and similar social and political histories.

Socially and historically, both Blake's England and Kierkegaard's
Denmark were in the process of transitioning from monarchy to
democracy, with simultaneous affinity for both, and were experiencing
deep and interrelated tensions between science and religion and
between nature and artifice.  Simultaneous attraction to and repulsion
from the same object is Kierkegaard's fundamental idea behind anxiety.

My tentative answer to this question lies in the nature of the
cultural tensions themselves.  Each of them posits a very different
conception of the human proceeding from the Enlightenment and
Enlightenment science.  So there's a very real sense that we human
beings are ourselves being recreated.  Therefore, these accounts in
which a human being creates a highly intelligent life form and has it
turn against him (or against the entire human race) are really not
about fears of the technology itself.  Rather, they are a form of
mythological expression, in which the internal is made external, so
that these creation anxiety narratives represent our fears of how we
ourselves are being recreated by technology and other social changes.
We don't really fear the external thing we are creating; we fear how
we are recreating ourselves, and we externalize those fears in the
form of creation anxiety narratives.

As we get closer to actually achieving some of these technologies,
fears heighten.  But I think the whole debate is misconceived.  We
don't understand human intelligence well enough to know whether we've
replicated it, and many people who study human intelligence believe
that even human intelligence is fully determined by social and other
environmental factors.  All we can do is create a machine that
resembles the human brain in some of its functions and see what it
does, and hope that sheds some light upon how our own minds work.  I
don't know that it will until we can grow an organic brain like our
own.  It seems to me that we would be comparing two fundamentally
unlike objects.  But we can try.

Jim R

>
> I keep wondering, why the fear? What does this fear point to? As more
> than one wise person has said, show me your monsters and I will show you
> who you are. What do the humanities have to say about the dream of the
> superhuman?
>
> Comments?
>
> Yours,
> WM
> --



--[2]------------------------------------------------------------------------
        Date: Sun, 4 Oct 2009 10:38:34 -0600
        From: Sterling Fluharty <phdinhistory at gmail.com>
        Subject: Re: [Humanist] 23.342 the invisible middle
        In-Reply-To: <20091004083104.5034B382FC at woodward.joyent.us>


Dear Willard and Amsler,

Thanks for the thoughtful and intriguing responses.  A few more questions:

Now that computers can "read" the emotion in our faces and voices, does this
mean that many humanists have mistakenly assumed that the expression and
content of human emotions are social constructions?

How many humanists would be willing to alter their models of imagination and
freedom of expression once they learn that it is impossible to use
grammatical language without obeying Zipf's law?
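
An aside on the Zipf's law claim above: the law predicts that a word's
frequency in natural text is roughly proportional to 1/rank, so the
product frequency x rank should stay approximately constant across the
top-ranked words.  The following is a minimal Python sketch of how one
might check this on any plain-text file (the filename sample.txt is a
placeholder, not something from the original question):

    # Sketch: test whether a text approximately obeys Zipf's law,
    # i.e. frequency ~ C / rank, so frequency * rank is near-constant.
    from collections import Counter
    import re

    with open("sample.txt", encoding="utf-8") as f:       # placeholder name
        words = re.findall(r"[a-z']+", f.read().lower())  # crude tokenizer

    ranked = Counter(words).most_common(10)               # top ten words
    top = ranked[0][1]                                    # highest frequency
    for rank, (word, freq) in enumerate(ranked, start=1):
        # Under Zipf's law the last column should hover near 1.0.
        print(f"{rank:2d}  {word:<12s} {freq:6d}  {freq * rank / top:5.2f}")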

If humanists have a tradition of studying environmental forces and
influences, beyond the control of individuals, that shape the unfolding of
history, does this mean DH/HC will ultimately find AI unsatisfactory because
it attempts to model humans in isolation?

Is HC/DH developing theories of how and why humanist practice requires more
than just "critical thinking skills," so that we will be able to partner
with AI and point out where its modeling of human thought and behavior has
fallen short?

Will DH/HC attempt to emulate the biological and psychological processes at
work during humanist practice, or will we find it unnecessary to
incorporate things like positive emotions, curiosity, sleep, mental images,
or even neurotransmitters like dopamine when extending our computational
models of the humanities?

Can the humanities find common ground with AI in the pursuit of
consciousness, since the former is interested in critical self-reflection
and the latter seeks a computational mind that is aware of itself?

Will HC/DH follow the lead of AI and complicate theories of subjective
aesthetics, long valued within the humanities, by developing algorithms that
find underlying patterns in judgments of beauty such as symmetry?

How much difference is there between the penchant for useful applications in
AI and the humanist goal of fostering a sense of civic duty?

If the superhuman in AI so far means developing computers that always beat
humans at logical games like chess, how likely is it that the superhuman
goal in DH/HC could be creating programs or bots that find solutions to
debates over truth and meaning that no unaided human could ever produce?

Best wishes,
Sterling Fluharty
University of New Mexico

