[Humanist] 30.910 going along with the seductive semantics

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Apr 21 07:58:00 CEST 2017


                 Humanist Discussion Group, Vol. 30, No. 910.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Thu, 20 Apr 2017 08:20:10 +0000
        From: Bill Pascoe <bill.pascoe at newcastle.edu.au>
        Subject: Re:  30.907 going along with the seductive semantics?
        In-Reply-To: <20170420055013.9AD788DD6 at digitalhumanities.org>


WM,

Thank you for your questions. They always spark a lot of thoughts and long spiels that I have to restrain myself from posting.

This casual use of words in relation to a discipline where they have a specific meaning is, as you say, a problem for IT. A philosopher may get excited on hearing the word 'ontology', for example, but is soon disappointed to learn that it is just a certain way of organising information for a certain purpose and has little to do with a theory of being in silico.

The particular words you mention seem to have most to do with the tantalising prospect of computers one day becoming in some way 'human' (learning, intelligent, cognitive, etc.). As you say, these words have specific meanings in IT, and after much debate about 'strong' versus 'weak' AI, which we might think of as the difference between 'real' and 'simulated' intelligence, cognition and so on, the field seems to have settled on working on 'weak' AI. This owes much to the popularity of the Turing Test, which can only ever be a test for 'weak' AI (and is an unsatisfactory test: just because a street performer once fooled me, as a 12-year-old child, into thinking he was a robotic wax dummy, it doesn't mean he really was), and to the remarkable success and usefulness of weak AI, such as using neural networks, modelled on the functioning of real brains, to do pattern recognition, to categorise images and much else. But for me, the prospect of 'real' AI/cognition/semiosis/humanness is what is really interesting.
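
For instance, a toy example of such 'weak' AI, as a minimal sketch in Python (illustrative only; the number of hidden units and training steps are arbitrary choices, not anything canonical): a tiny neural network that learns the XOR pattern by gradient descent. It categorises its inputs correctly, but nothing in it resembles understanding.

    import numpy as np

    # A toy 'weak' AI: learn the XOR pattern with a two-layer neural network.
    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # inputs
    y = np.array([[0.], [1.], [1.], [0.]])                   # XOR targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))       # input -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))       # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # backward pass (squared error)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= h.T @ d_out                        # plain gradient-descent updates
        b2 -= d_out.sum(axis=0, keepdims=True)
        W1 -= X.T @ d_h
        b1 -= d_h.sum(axis=0, keepdims=True)

    print(out.round(2).ravel())                  # typically close to [0 1 1 0]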

There are a few intractable problems and recurrent fears in IT that are repeated over and over in newspaper articles across the century, always expected to be realised in just a few years but never quite arriving. There has been progress over the decades, though, and more often than not the attempt to achieve these goals spins off very useful, commercialisable by-products. The tantalising prospect that a computer may one day become a mind is present from the very first in Ada Lovelace's famous notes, and like any software developer speaking to their client (a few sobering connotations on 'client' there), she is compelled to point out quickly that although the computer can do things resembling what humans do (such as mathematical operations), it is a machine limited to what we humans program it to do, and in that sense limited to very routine tasks; it is nonetheless extraordinary in that it can do them faster and in much greater volume, making possible things that human minds cannot do alone (see the quotation below).

Since then, the prospect has led to many impressive simulations of human abilities: playing chess, catching oranges, and so on. As we go, it is not clear what exactly we mean by 'intelligence' or 'cognition', or more generally what this 'humanness' is that we would like to manifest, but as we try and fail it becomes clearer what we do or don't mean (e.g. being able to play chess is not equivalent to being intelligent). Also, if we do achieve 'intelligence' it might be, as Spock would say, "It's intelligence, Jim, but not as we know it." What is missing from the Turing Test is a convincing theory of what intelligence/cognition/humanness is, and a demonstration that it can be or has been implemented in this material. Flying provides many good analogies for artificial intelligence: flight is a real thing but it is not an object; it must be instantiated in material, and to do it you need a good theory of how it happens. That theory is not flight itself, and if a thing appears to fly because it is held up by invisible string, it is not enough to say it really flies.

Quite a few years ago I developed a parsimonious theory of 'real' or 'strong' intelligence/cognition/learning/semiosis. This was very much a matter of 'standing on the shoulders of giants', since all the theoretical components are there; it's just that nobody seems to be putting two and two together and looking at it the right way round, at least the last time I checked, which was some time ago. Usually I find that any thought I think original has already been claimed by another, and with so many people in the world, that must happen more and more. In any case, despite one day walking through the university car park and realising that I finally understood the connections between thermodynamics, evolution, learning, cognition and semiosis, I have never found time to write it down: at the time I thought I must be manic, and if I wasn't, anyone I made such a claim to would think I was. Since then I've always been too busy working. Nonetheless, having conceived such a theory, I can now recommend, at your request, the disciplines that would be most helpful:

- A good education in philosophy. Many people from the sciences, engineering or software make assumptions and claims, or miss insights, that would immediately occur to a philosophy graduate. What even is intelligence, and how do you know? What would Aristotle or Derrida say about the robot arm?
- A good level of practical skill in software development. Many philosophers would get bogged down in trying to prove whether it is possible or not, again making assumptions about software that are false or naive. Rather, the philosopher should realise that many arguments that it can't be done will be instantly refuted by actually doing it. Even if it turns out to be impossible, by thinking through the attempt we learn a great deal about philosophy. The approach should be to ask, "If it were possible, how?"
- The theory, practical attempts and learning need to be combined in an ongoing feedback loop. (This process is among the things that an intelligent thing would do.)
- Thorough reading in AI, and not just the latest thinking: a good understanding of its historical development, its successes and failures, and the changes in its theoretical assumptions.
- A passing knowledge of theoretical and evolutionary biology.
- Basic thermodynamics, to understand the emergence of order, complexity and dynamic systems.
- A good understanding of Romanticism, in particular the Shelleys' interest in voltaism, Frankenstein, and the concept of genius. Why does it even occur to us to make artificial intelligence? The importance of this should not be underestimated.

In short, to build an artificial human, study philosophy, software development, artificial intelligence and its history, theoretical biology, evolution, thermodynamics and Romantic poetry. Not a surprising list, really, is it?

"It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with." - Ada Lovelace,
Sketch of The Analytical Engine Invented by Charles Babbage, 1842, http://www.fourmilab.ch/babbage/sketch.html

I might add that Lovelace anticipated many other points that still hold in software development, such as: "To this it may be replied, that an analysing process must equally have been performed in order to furnish the Analytical Engine with the necessary operative data; and that herein may also lie a possible source of error." We now say, 'Garbage in, garbage out.'

Kind regards,

Dr Bill Pascoe
eResearch Consultant
Digital Humanities Lab
http://hri.newcastle.edu.au/
Centre for 21st Century Humanities: http://www.newcastle.edu.au/research-and-innovation/centre/centre-for-21st-century-humanities/about-us

T: 0435 374 677
E: bill.pascoe at newcastle.edu.au

The University of Newcastle (UON)
University Drive
Callaghan NSW 2308
Australia

________________________________
> From: humanist-bounces at lists.digitalhumanities.org <humanist-bounces at lists.digitalhumanities.org> on behalf of Humanist Discussion Group <willard.mccarty at mccarty.org.uk>
> Sent: Thursday, 20 April 2017 3:50 PM
> To: humanist at lists.digitalhumanities.org
> Subject: [Humanist] 30.907 going along with the seductive semantics?

                 Humanist Discussion Group, Vol. 30, No. 907.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

        Date: Wed, 19 Apr 2017 11:02:42 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: seductive semantics

Some here will know of C. S. Lewis's definition of the 'dangerous sense'
of a word, in his fine book, Studies in Words (CUP, 1967). For those who
don't have the book to hand, here's the relevant passage:

> When a word has several meanings historical circumstances often make
> one of them dominant during a particular period. Thus 'station' is
> now more likely to mean a railway-station than anything else;
> 'evolution', more likely to bear its biological sense than any other.
> When I was a boy 'estate' had as its dominant meaning 'land belonging
> to a large landowner', but the meaning 'land covered with small
> houses' is dominant now.
>
> The dominant sense of any word lies uppermost in our minds. Wherever
> we meet the word, our natural impulse will be to give it that sense.
> When this operation results in nonsense, of course, we see our
> mistake and try over again. But if it makes tolerable sense our
> tendency is to go merrily on. We are often deceived. In an old author
> the word may mean something different. I call such senses dangerous
> senses because they lure us into misreadings.  (pp. 12-13)

I've often bent his term to fit circumstances in which we carry over
words from human behaviour to computers, sometimes qualifying these
words (as in 'artificial intelligence') but sometimes not. Often what
I've been getting at is what James C. Bezdek, in "On the relationship
between neural networks, pattern recognition and intelligence", calls
"seductive semantics":

> words or phrases that convey, by being interpreted in their ordinary
> (nonscientific) use, a far more profound and substantial meaning
> about the performance of an algorithm or computational architecture
> than can be easily ascertained from the available theoretical and/or
> empirical evidence. Examples of seductive phrases include words such
> as neural, self-organizing, machine learning, adaptive, and
> cognitive. (p. 87)

Bezdek advocates rigour of definition, subject to verification, so that
terms can be directly compared to the properties and characteristics of
computational models. His call for alertness to such slippage we must
heed, of course, terribly difficult though it may be not to be taken in
by some high-octane-for-breakfast claimant and, self-seduced,
immediately feel stupid about not knowing that e.g. the latest 'machine
learning' techniques have finally made one's scepticism obsolete. But
this slippage also gives us something quite revealing, namely the
expressions of the desire that fuels so much technological research.

We remain sceptical and alert. But what if, as designers or advisers to
them, we play along, play a what-if game, assuming that these desires
are satisfied in the sort of artificial companions depicted in so many
films these days? How, then, would one get to know them? What might we
have to learn from them? What disciplines would be the most helpful?

Comments?

Yours,
WM

--
Willard McCarty (http://www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London; Adjunct Professor, Western Sydney
University and North Carolina State University; Editor,
Interdisciplinary Science Reviews (www.tandfonline.com/loi/yisr20)



