[Humanist] 31.200 'computational' &c

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Tue Jul 25 08:21:52 CEST 2017


                 Humanist Discussion Group, Vol. 31, No. 200.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Sun, 23 Jul 2017 09:06:39 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: 'computational' &c


This is in response to Bill Benzon's latest.

My point about 'computational' is its metaphorical force, which, I'd argue,
persists even after the metaphor has been naturalised and forgotten as such.
(We get language wrong when we say that a metaphor is 'fossilised' -- it can
live again, whereas a fossil cannot.) I worry about the subtle effects of
thinking and speaking as if 'computational' were an entirely neutral, i.e.
meaningless, term, with no whiff of the digital machine about it. What does
the word mean when we refer to human reasoning as 'computational'?

We have been taking mathesis as a kind of cognition for a long time, but to
take it as the whole of mental life seems to me a dangerously reductive way
to go. That's what I am objecting to. Bless Minsky for his mischievous
provocation in saying that "the brain is a machine made of meat" (or
something like that), but do we really want to take it as given without
asking what sort of a 'machine' we're talking about? Why 'machine' and 
not some other term?

I find Neisser's passage interesting because he says "in some ways akin".
His qualification of 'memory' suggests that he's alert enough to the history
of this word (see Danziger, Marking the Mind; Smith, Between Mind and
Nature) to detect the very old ghost of the storehouse metaphor and know how
limited an analogy it is, despite its rather amazing persistence. I hear in 
Neisser's passage an echo of Kelvin's repeated celebrations of his 
dependence on models for understanding the world. All I'm saying is, 
let's not forget that models are models -- and that the point is modelling.

I certainly agree with Bill that Sydney Lamb's objection to much professional 
theorising about cognitive activity is well taken -- and that "the alternative 
he proposes is not easily conveyed". This is indeed a worthy problem. 
I think Bill gets to the core when he says,

> I'm not even sure you can grasp it simply by reading his book and
> looking at the many diagrams. I think you have to actively work with
> these or similar ideas before you really understand what's going on.
> And that takes us some distance from general audience discussions of
> computation and the mind and, for that matter, it likely takes us
> away from many/most professional philosophical discussions as well.

I suspect that where 'actively work with' takes us is to David Gooding's
idea of 'construal' in his very fine study, Experiment and the Making of
Meaning: Human Agency in Scientific Observation and Experiment (Kluwer
1990), where he takes issue with "many/most professional philosophical
discussions" of experiment in relation to theory. Not easily conveyed
indeed, which is to say, worthy of our best attention.

I suggested that our digital machines, with their "deeply buried digital
logic", nudge us to think more in some ways than in others. For clues, I'd
suggest we look to that logic on the one hand and to how we talk about the
world on the other. My own reason for suspecting we'd find the
relationship for which Aden Evens argues in Logic of the Digital is,
however, historical and has to do with the 'looping effects' (as Hacking
calls them in a different context) between inventor and invention. Consider
the history of computing from Turing 1936 (human computer as model for his
abstract machine) to McCulloch and Pitts 1943 (Turing machine as model for
the brain) to von Neumann 1945 (their neurophysiological vocabulary
identifying components of digital architecture) to that computational model
of mind. Some, with I think good reason, would call this a co-evolutionary
loop. Getting different all the time.

All-or-nothing is not new, indeed. But do we think more in those terms now,
or rebel against them more, than we once did? I mean this as a genuine
question for someone with Big Data to explore.

Yours,
WM

--
Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London; Adjunct Professor, Western Sydney
University and North Carolina State University; Editor,
Interdisciplinary Science Reviews (www.tandfonline.com/loi/yisr20)




