[Humanist] 30.907 going along with the seductive semantics?
Humanist Discussion Group
willard.mccarty at mccarty.org.uk
Thu Apr 20 07:50:13 CEST 2017
Humanist Discussion Group, Vol. 30, No. 907.
Department of Digital Humanities, King's College London
Submit to: humanist at lists.digitalhumanities.org
Date: Wed, 19 Apr 2017 11:02:42 +0100
From: Willard McCarty <willard.mccarty at mccarty.org.uk>
Subject: seductive semantics
Some here will know of C. S. Lewis's definition of the 'dangerous sense'
of a word, in his fine book, Studies in Words (CUP, 1967). For those who
don't have the book to hand, here's the relevant passage:
> When a word has several meanings historical circumstances often make
> one of them dominant during a particular period. Thus 'station' is
> now more likely to mean a railway-station than anything else;
> 'evolution', more likely to bear its biological sense than any other.
> When I was a boy 'estate' had as its dominant meaning 'land belonging
> to a large landowner', but the meaning 'land covered with small
> houses' is dominant now.
> The dominant sense of any word lies uppermost in our minds. Wherever
> we meet the word, our natural impulse will be to give it that sense.
> When this operation results in nonsense, of course, we see our
> mistake and try over again. But if it makes tolerable sense our
> tendency is to go merrily on. We are often deceived. In an old author
> the word may mean something different. I call such senses dangerous
> senses because they lure us into misreadings. (pp. 12-13)
I've often bent his term to fit circumstances in which we carry over
words from human behaviour to computers, sometimes qualifying these
words (as in 'artificial intelligence') but sometimes not. Often what
I've been getting at is what James C. Bezdek, in "On the relationship
between neural networks, pattern recognition and intelligence", calls
> words or phrases that convey, by being interpreted in their ordinary
> (nonscientific) use, a far more profound and substantial meaning
> about the performance of an algorithm or computational architecture
> than can be easily ascertained from the available theoretical and/or
> empirical evidence. Examples of seductive phrases include words such
> as neural, self-organizing, machine learning, adaptive, and
> cognitive. (p. 87)
Bezdek advocates rigour of definition, subject to verification, so that
terms can be directly compared to the properties and characteristics of
computational models. His call for alertness to such slippage we must
heed, of course, terribly difficult though it may be not to be taken in
by some high-octane-for-breakfast claimant and, self-seduced,
immediately feel stupid about not knowing that e.g. the latest 'machine
learning' techniques have finally made one's scepticism obsolete. But
this slippage also gives us something quite revealing, namely
expressions of the desire that fuels so much technological research.
We remain sceptical and alert. But what if, as designers or advisers
to them, we play along, playing a what-if game in which these desires
are satisfied in the sort of artificial companions depicted in so many
films these days? How, then, would one get to know them? What might we
have to learn from them? What disciplines would be the most helpful?
Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London; Adjunct Professor, Western Sydney
University and North Carolina State University; Editor,
Interdisciplinary Science Reviews (www.tandfonline.com/loi/yisr20)