[Humanist] 30.913 going along with the seductive semantics

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Sat Apr 22 08:50:48 CEST 2017


                 Humanist Discussion Group, Vol. 30, No. 913.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Fri, 21 Apr 2017 13:18:44 +0200
        From: Tim Smithers <tim.smithers at cantab.net>
        Subject: Re:  30.907 going along with the seductive semantics?
        In-Reply-To: <20170420055013.9AD788DD6 at digitalhumanities.org>


Dear Willard,

Several things.

C S Lewis

I would dispute Lewis's description of what happens whenever
we meet a word, and I wonder whether others here would dispute
it too.

We don't meet each word as it arrives, as Lewis seems to
suggest.  We accumulate the words as they arrive.  Though they
may arrive in a serial fashion, via some listening or reading,
we quickly start working (unconsciously) with all we have so
far, so that, as we might describe it, context and expression
and other things influence how particular multiple-meaning
words might be taken up in the understanding of what is being
said (or written).  This does not, of course, avoid mistaken
meanings of particular individual words, but, to me at least,
it feels more like what happens, unlike ideas of having
ordered stacks of meanings for each word, from primary down to
less primary, from which we must select the most appropriate
on a word-by-word basis as they arrive, which feels very
strange.  How well the selected meaning of a word works is not
like how well each piece individually fits into the current
state of a jigsaw puzzle of understanding.  It is, I would
suggest, more like how well it makes the whole image of
understanding (consciously) look as we (unconsciously)
build it.
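
To make plain what I am disputing, here is a toy sketch, in
Python.  It is entirely my own invention, not Lewis's; the
sense stacks and the nonsense test are made up for
illustration, using Lewis's own 'station' and 'estate'
examples.  It renders the word-by-word, stacked-senses model:

  # Toy model, NOT how I think reading works: each word carries
  # an ordered stack of senses, dominant sense on top, and the
  # reader greedily takes the first sense that does not make
  # nonsense of the sentence so far, retrying only on nonsense.

  SENSES = {  # hypothetical sense stacks, dominant sense first
      "station": ["railway station", "standing, rank"],
      "estate": ["land covered with small houses",
                 "land belonging to a large landowner"],
  }

  def is_nonsense(reading):
      # Placeholder oracle: in the model, something flags
      # readings that "result in nonsense" and forces a retry
      # with the next sense down the stack.  Always False here.
      return False

  def read(words):
      understanding = []
      for word in words:  # one word at a time, as it arrives
          for sense in SENSES.get(word, [word]):
              candidate = understanding + [sense]
              if not is_nonsense(candidate):
                  understanding = candidate  # "go merrily on"
                  break
      return understanding

My complaint, above, is exactly that this word-at-a-time
greedy selection cannot let the accumulating context reshape
how earlier words are taken up, which is what reading feels
like to me.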

And, while we're here, nor, I would say, is the production
side like a Lewisian idea of word selection indexed on
individual word-meaning stacks.

We don't pick words for what they mean, to say what we want to
say.  We use words that come to hand (or come to mouth) as we
try to write (or say) something we want to write (or say).  We
see (or hear) how these thrown-together words work, to
ourselves, and others who also hear or see them, then often
change them, so as to write (or say) better what we now
realise we want to try to say, and discover how we might say
it.  And so it goes on, round again, or on to the next thing
we want to say, all the time trying to be careful to say to
the other what we discover we want to say from trying to say
it.  If you see what I mean.

Words are, I think, (collectively) better thought of as a kind
of stuff from which we try to render things we want to say.
It's more like forming the clay on the wheel into the shape we
discover we can form, and discover we'd like to form.  Writing
(or speaking) is not, I think, like stringing together words
from a dictionary to implement some pre-formed specification
of what we want to say.  At least, this is how it does and
doesn't feel to me.  To others, how does it seem?

James Bezdek 

The Bezdek (1992) paper [1] is, I think, much more
interesting, and identifies well some important issues often
neglected in AI (and similarly, I would say, in other fields
too).  Given the "feeding frenzy" (Bezdek's nice term)
surrounding (so called) Deep Learning these days, this paper
is worth re-reading, and not just by AI people; by everybody.
(As is usual with interesting things, you don't need to
understand it all to get plenty of good things from it.  It's
OK for words to go by without meanings sometimes.)

Another, even older, AI paper about what we call things, and
very much worth re-reading, is, I think:

 Artificial intelligence meets natural stupidity by Drew
 McDermott, in ACM SIGART Bulletin, Issue 57, April 1976,
 Pages 4--9.  [2]

McDermott starts this off saying

 "As a field, artificial intelligence has always been on the
  border of respectability, and therefore on the border of
  crackpottery."

That borderline crackpottery seems to have oozed somewhat out
of AI and into other fields these days.

Like here, for example: "artificial companions."  Really?
Isn't this more dangerously seductive semantics?

With film, as with other fiction-making media, we can depict
almost anything we can imagine, even real human companions
made of non-human stuff.  But in the films I think you refer
to (but don't cite), this term works only as a poor
description of what is really portrayed.  Take an example,
still one of the best, I suggest: HAL, in 2001: A Space
Odyssey (Kubrick and Clarke, 1968).  HAL is not a companion
to the humans
because it doesn't, and can't, engage in human forms of
conversation, in which the typical working out of how to say
what each conversing human discovers they want to say takes
place.  The dialogues with HAL are all the (now) typical
"computer-like" dialogues.  And, of course, they were made to
be like this by Kubrick and Clarke because, for them, the
kind of ordinary dialogue between human companions was beyond
even HAL.  For something more like human
companionship you need to go for cyborg-type "artificial
companions," such as in The Terminator (James Cameron, 1984),
or the Replicants in Blade Runner (Ridley Scott, 1982).  These
aren't just AIs; they are depictions of (full or patched-up)
human replications.  Or are you happy to call Samantha, whom
Theodore Twombly "falls for" in Her (Spike Jonze, 2013),
an artificial companion?  To sustain the depiction of Samantha
as an AI in this film, all the dialogue between it and Twombly
has to have a sufficient degree of the stilted "computer-like"
feel we now know as "computer talking."  Companionship for
humans, I submit, is a whole lot more than just being able to
talk in a human language.  Have we humans really got as far
as having companionable conversations with our Siris et al.?
Or does it really look like we are on the way there?  It
doesn't to me.  But maybe I've been blinded by my AI
crackpottery.

Best regards,

Tim ...  Hey, Siri!  What's an artificial companion? ...

References

[1] James C Bezdek, 1992: On the Relationship Between
    Neural Networks, Pattern Recognition and Intelligence,
    International Journal of Approximate Reasoning, Vol 6, pp
    85--107, Elsevier Science Publishing Co, Inc.
    PDF available here
    <http://ac.els-cdn.com/0888613X9290013P/1-s2.0-0888613X9290013P-main.pdf?_tid=06692f64-266b-11e7-b612-00000aacb35d&acdnat=1492762857_861ba5c4d69accd94d24c95939433a3b>

[2] Free to download here
    <http://homepage.univie.ac.at/nicole.rossmanith/concepts/papers/mcdermott1976artificial.pdf>

> On 20 Apr 2017, at 07:50, Humanist Discussion Group <willard.mccarty at mccarty.org.uk> wrote:
> 
>                 Humanist Discussion Group, Vol. 30, No. 907.
>            Department of Digital Humanities, King's College London
>                       www.digitalhumanities.org/humanist
>                Submit to: humanist at lists.digitalhumanities.org
> 
> 
> 
>        Date: Wed, 19 Apr 2017 11:02:42 +0100
>        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
>        Subject: seductive semantics
> 
> 
> Some here will know of C. S. Lewis's definition of the 'dangerous sense' 
> of a word, in his fine book, Studies in Words (CUP, 1967). For those who 
> don't have the book to hand, here's the relevant passage:
> 
>> When a word has several meanings historical circumstances often make
>> one of them dominant during a particular period. Thus 'station' is
>> now more likely to mean a railway-station than anything else;
>> 'evolution', more likely to bear its biological sense than any other.
>> When I was a boy 'estate' had as its dominant meaning 'land belonging
>> to a large landowner', but the meaning 'land covered with small
>> houses' is dominant now.
>> 
>> The dominant sense of any word lies uppermost in our minds. Wherever
>> we meet the word, our natural impulse will be to give it that sense.
>> When this operation results in nonsense, of course, we see our
>> mistake and try over again. But if it makes tolerable sense our
>> tendency is to go merrily on. We are often deceived. In an old author
>> the word may mean something different. I call such senses dangerous
>> senses because they lure us into misreadings.  (pp. 12-13)
> 
> I've often bent his term to fit circumstances in which we carry over 
> words from human behaviour to computers, sometimes qualifying these 
> words (as in 'artificial intelligence') but sometimes not. Often what 
> I've been getting at is what James C. Bezdek, in "On the relationship 
> between neural networks, pattern recognition and intelligence", calls 
> "seductive semantics":
> 
>> words or phrases that convey, by being interpreted in their ordinary
>> (nonscientific) use, a far more profound and substantial meaning
>> about the performance of an algorithm or computational architecture
>> than can be easily ascertained from the available theoretical and/or
>> empirical evidence. Examples of seductive phrases include words such
>> as neural, self-organizing, machine learning, adaptive, and
>> cognitive. (p. 87)
> 
> Bezdek advocates rigour of definition, subject to verification, so that 
> terms can be directly compared to the properties and characteristics of 
> computational models. His call for alertness to such slippage we must 
> heed, of course, terribly difficult though it may be not to be taken in 
> by some high-octane-for-breakfast claimant and, self-seduced, 
> immediately feel stupid about not knowing that e.g. the latest 'machine 
> learning' techniques have finally made one's scepticism obsolete. But
> this slippage also gives us something quite revealing, namely the
> expressions of the desire that fuels so much technological research.
> 
> We remain sceptical and alert. But what if, as designers or advisers to
> them, we play along, play a what-if game, assuming that these desires
> are satisfied in the sort of artificial companions depicted in so many
> films these days? How, then, would one get to know them? What might we
> have to learn from them? What disciplines would be the most helpful?
> 
> Comments?
> 
> Yours,
> WM
> 
> -- 
> Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
> Humanities, King's College London; Adjunct Professor, Western Sydney
> University and North Carolina State University; Editor,
> Interdisciplinary Science Reviews (www.tandfonline.com/loi/yisr20)




