[Humanist] 29.178 machines: reading, thinking, creating

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Tue Jul 21 01:11:43 CEST 2015

                 Humanist Discussion Group, Vol. 29, No. 178.
            Department of Digital Humanities, King's College London
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    James Rovira <jamesrovira at gmail.com>                      (58)
        Subject: Re:  29.174 machines: reading, thinking, creating

  [2]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (28)
        Subject: disenchantments

  [3]   From:    Tim Smithers <tim.smithers at cantab.net>                    (84)
        Subject: Re:  29.174 machines: reading, thinking, creating

        Date: Sun, 19 Jul 2015 16:48:18 -0400
        From: James Rovira <jamesrovira at gmail.com>
        Subject: Re:  29.174 machines: reading, thinking, creating
        In-Reply-To: <20150719202414.1610E65CE at digitalhumanities.org>


> To be sure, machines cannot yet ascend to the level of emergent complexity
> that we rather vaguely call
> creativity . . .

That was the full extent of my first point. Of course I can't predict what
machines will be able to do in the future.

My other point was about, in fact, the vagueness of the word "creativity."

But I'm curious why you think this is important:

"Indeed, are not our text-embedding initiatives, our cross-referenced
databases, our multi-agent simulations of key historical events, our 
digital collections of music, graphic and visual art, add whatever field 
of humanities you like, are we now paving the way for the 
disenchantment of the human?"

If we're the authors of our own disenchantment, our place in the cosmos
hasn't budged an inch. Why are we so important that we need to be knocked
down? And we must be very important indeed if only we can do it to
ourselves.

What I think we need to be is better curators, and more willing to enchant
everything else, not just ourselves. We can level things by cutting
everything down or by building everything up. Why go the former way?


Thanks for asking. I realized after sending my post that I shouldn't have
used the "f-word" without explanation. The premise of the Turing test is
the ability of machine communication to be indistinguishable from human
communication by a human evaluator. I would say that the Turing test
becomes "fascist" when it is used to determine if the machine possesses
intelligence comparable to a human being because it assigns "intelligence"
according to the judgment of an outside evaluator. Whatever privileges,
rights, or recognition go along with having human intelligence need to
be assigned not by external recognition but by the thing's very nature: in
other words, it doesn't have that recognition because we assign it, but
because we owe it that recognition due to the nature of its being.

This distinction may seem hair-splitting in practice, but I think not.
Thinking that we have the right to bestow recognition of intelligence upon
a thing -- and that it doesn't have it until we do -- is very different
from thinking we have the obligation to recognize intelligence anywhere
that it appears. I think this is the failure of the programmer in Ex
Machina, who, like Victor Frankenstein, primarily failed to recognize
his own responsibility to this new life. I would think that if the thing
doesn't *want* to be imprisoned we immediately have to ask if we have the
right to imprison it.

Don's mention of religion seems particularly appropriate at this point. One
European sect believed that cruelty to animals was acceptable because
animals don't have souls. But another way of thinking might be that we
shouldn't be cruel to animals because cruelty itself is bad regardless of
the object.

Jim R

Dr. James Rovira
Associate Professor of English
Tiffin University
Blake and Kierkegaard: Creation and Anxiety
Continuum 2010
Text, Identity, Subjectivity

        Date: Mon, 20 Jul 2015 06:54:43 +1000
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: disenchantments
        In-Reply-To: <20150719202414.1610E65CE at digitalhumanities.org>

The parallel Don Braxton has drawn between enchantment with religious 
experience and enchantment with the human as categories beyond 
scientific reasoning is very helpful in advancing our discussion. I have 
no problem with the ongoing process of being dis-illusioned -- as he 
says, the removal of an illusion seems a good thing, though often a 
traumatic one. I don't question the value of entertaining the 
possibility that we will figure out how to build machines that will 
dis-illusion us about certain characteristics of the human we currently 
hold to be inviolate. But I do question the apocalyptic conclusion that 
it's all over for us or is certain to be sometime soon, that it's really 
only a matter of time. Revising the idea of the human following on great 
shocks has been going on for a very long time -- much longer than 
Freud's "grosse Kränkungen", "great outrages", which he attributes to 
Copernicus, Darwin and himself. I like to think of these as punctuations 
in the history of punctuated equilibria -- Gould's and Eldredge's term 
-- going back to and including the evolutionary, and going forward 
beyond Freud into the future.

The point for us, I would argue, is what we do step by step as digital 
tools bite or appear to bite ever deeper into illusions about our 
cultural artefacts. What then, at each point, do we say 'creativity' 
is, for example, or intelligence? Involvement here makes us one 
of the humanities, I'd think.

Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London, and Digital Humanities Research
Group, University of Western Sydney

        Date: Mon, 20 Jul 2015 10:28:44 +0200
        From: Tim Smithers <tim.smithers at cantab.net>
        Subject: Re:  29.174 machines: reading, thinking, creating
        In-Reply-To: <20150719202414.1610E65CE at digitalhumanities.org>

Dear Pat, 

May I butt in here, and mostly reply to others. 

The Turing Test (TT) is not a definition of life or
intelligence; not a fascist one or any other kind.

There is, as you probably know, a long-running competition for
what are now called Chat-Bots (yuk!) that uses a TT setup to
evaluate the performance of these things.

The TT, and competitions that use it, have not figured in
mainstream AI, now or in the past.  It's not useful, either as a
way to evaluate some artificial intelligence performance, or
as a central component of a productive AI research programme.
It's not clear how it provides a fair basis for evaluating
some (human-like) artificially intelligent behaviour.  Put
simply, it fails to operationalise what I call the artificial
flower/light distinction: artificial flowers can look very
like real flowers, but, no matter how real flower-like they
look, they will never be real flowers; artificial light can
look very like real (natural) light, and it is real light,
albeit produced by artificial means, and it can look quite
different from natural light, but is still real light.

An "artificial flower or light" test can usefully be applied to
computational creativity--is it look-alike creativity, or real
creativity artificially produced--but I think Willard's
questions are more important.  In particular, his final
question: Where are the artists in the discussion?  The
musicians?  To which I would add, why is it that these people
seldom speak of being creative, or seeking to be creative when
they make their art?

I only speak from (limited) personal encounters here, and
mostly with designers, not so many artists and musicians, but
when asked about being creative they mostly have little if
anything to say, except that this is not what they are trying
to be when designing things: being creative is not part of
their doing when they are designing.

What would help, I think, in this conversation about
computational creativity, would be some explanation and
justification for why "creativity" is taken to be the
important thing here, apparently above all else, rather than
the actual human doing involved: the painting, composing,
designing, etc.  There's plenty here that needs more and
better knowing and understanding, especially when this doing
involves using computational tools and machines, as it more
often does these days.  Artists have always been leaders in
the devising, making, and taking up of tools in their doing,
and in the development of technologies needed to render these
tools and machines.  Why would we want to leave these people
out of our investigations?

Best regards,
Tim


> On 19 Jul 2015, at 22:24, Humanist Discussion Group <willard.mccarty at mccarty.org.uk> wrote:
>                 Humanist Discussion Group, Vol. 29, No. 174.
>            Department of Digital Humanities, King's College London
>                       www.digitalhumanities.org/humanist
>                Submit to: humanist at lists.digitalhumanities.org
>  [1]   From:    Don Braxton <don.braxton at gmail.com>                       (67)
>        Subject: Re: computational creativity
>  [2]   From:    "Patricia O'Neill" <poneill at hamilton.edu>                 (10)
>        Subject: Machines: reading, thinking, creating
> --[2]------------------------------------------------------------------------
>        Date: Sat, 18 Jul 2015 13:53:02 -0400
>        From: "Patricia O'Neill" <poneill at hamilton.edu>
>        Subject: Machines: reading, thinking, creating
> Dear Willard and James
> Why is The Turing Test a "particularly fascist" way of defining life or
> intelligence?
> I've been thinking about a new movie "Ex-Machina" which could be about a
> totalitarian scientist who lures a young computer geek to test the
> "humanity" of his latest robot. But maybe the movie is also a critique of
> the Turing Test and its particularly "sexist" approach to defining life or
> intelligence?
> How useful is the Turing Test these days in AI studies?
> Pat

More information about the Humanist mailing list