[Humanist] 31.198 'computational' and the all-or-nothing

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Mon Jul 24 07:19:28 CEST 2017


                 Humanist Discussion Group, Vol. 31, No. 198.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    William L. Benzon <bbenzon at mindspring.com>                (38)
        Subject: Re:  31.195 'computational' and the all-or-nothing

  [2]   From:    lachance at chass.utoronto.ca                                (24)
        Subject: Re:  31.195 'computational' and the all-or-nothing


--[1]------------------------------------------------------------------------
        Date: Sat, 22 Jul 2017 15:02:03 -0400
        From: William L. Benzon <bbenzon at mindspring.com>
        Subject: Re:  31.195 'computational' and the all-or-nothing
        In-Reply-To: <20170722071641.69FD22ED2 at digitalhumanities.org>


Hi Willard,

I’m still not sure what your problem is with “computational.” I have no particular investment in the Cosmides & Tooby passage; it’s not something I’d have said. As for the Neisser:

> In other words, I am arguing that the words we use matter. Indeed I agree that Ulrich Neisser's subtler language does indeed have the matter "about right" -- though I'd underscore the 'about' and demand that it not be taken as synonymous with 'just' -- but still think that Cosmides' and Tooby's does not. Except in quite special circumstances we do not speak in propositional sentences, and unless we become dogmatic to the core we certainly do not think we think in them except provisionally, to see how close they come.

I’m glad you find it better. From my POV, it’s pretty neutral, but you still seem to have problems with it. It’s almost as though you want to allow cognitive science, but want to bracket out the idea that computing has anything to do with it.

Now, the fact is, I do have problems with the Neisser, so let me first repeat the passage:

> ... the activities of the computer itself seemed in some ways akin to cognitive processes. Computers accept information, manipulate symbols, store items in “memory” and retrieve them again, classify inputs, recognize patterns, and so on. Whether they do these things just like people was less important than that they do them at all. The coming of the computer provided a much-needed reassurance that cognitive processes were real; that they could be studied and perhaps understood.

It’s the talk of storing and retrieving that I find a bit problematic, though note that Neisser did place “memory” in scare quotes. I’ll let Sydney Lamb voice my objections. As you may know, Lamb was at Berkeley back in the 50s and was among the first generation of researchers on machine translation. In 1999 he published Pathways of the Brain: The Neurocognitive Basis of Language (Amsterdam & Philadelphia: John Benjamins), which stands as his main statement of his views on language. On page 3 of his introduction he says this:

> Some years ago I asked one of my daughters, as she sat at the piano, "When you hit that piano key with your finger, how does your mind tell your finger what to do?" She thought for a moment, her face brightening with the intellectual challenge, and said, "Well, my brain writes a little note and sends it down my arm to my hand, then my hand reads the note and knows what to do." Not too bad for a five-year-old.

He then goes on to suggest that an awful lot of professional thinking about the brain takes place in such terms (p. 2):

> This mode of theorizing is seen in ... statements about such things as lexical semantic retrieval, and in descriptions of mental processes like that of naming what is in a picture, to the effect that the visual information is transmitted from the visual area to a language area where it gets transformed into a phonological representation so that a spoken description of the picture may be produced.... It is the theory of the five-year-old expressed in only slightly more sophisticated terms. This mode of talking about operations in the brain is obscuring just those operations we are most intent in understanding, the fundamental processes of the mind.

So here we’ve got those italicized words and phrases: “lexical semantic retrieval”, “information is transmitted”, “transformed”, and “phonological representation”. Neisser talks like that; a lot of cognitivists do. I think Lamb’s objections are well-taken, but the alternative he proposes is not easily conveyed. I’m not even sure you can grasp it simply by reading his book and looking at the many diagrams. I think you have to actively work with these or similar ideas before you really understand what’s going on. And that takes us some distance from general-audience discussions of computation and the mind and, for that matter, it likely takes us away from many or most professional philosophical discussions as well.

Here’s another, somewhat different passage. This is by Peter Gärdenfors (Conceptual Spaces, 2000, p. 253):

> On the symbolic level, searching, matching of symbol strings, and rule following are central. On the subconceptual level, pattern recognition, pattern transformation, and dynamic adaptation of values are some examples of typical computational processes. And on the intermediate conceptual level, vector calculations, coordinate transformations, as well as other geometrical operations are in focus. Of course, one type of calculation can be simulated by one of the others (for example, by symbolic methods on a Turing machine). A point that is often forgotten, however, is that the simulations will, in general, be computationally more complex than the process that is simulated.

If you read it closely, and compare it with Lamb, you’ll see that there is some inconsistency between the two (Lamb wouldn’t talk of searching and matching). I can live with that. What I like is that Gärdenfors talks of three “levels” – symbolic, conceptual, subconceptual – with different kinds of processing for each. I’m not committed to those three levels but, yes, different kinds of things are going on. Gärdenfors’ last sentence is the most interesting one.
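
To make that last sentence concrete, here is a toy sketch in Python – my own illustration, not Gärdenfors’ – of why a simulation tends to cost more than the process it simulates: multiplying two numbers is a single operation at the machine’s native arithmetic level, but simulating the same product by unary counting, in the spirit of a Turing-machine rewriting process, takes a number of elementary steps proportional to the product itself.

    def multiply_native(a, b):
        # One arithmetic operation at the machine's native level.
        return a * b

    def multiply_by_unary_counting(a, b):
        # Simulate the same product by repeated unary increments,
        # counting the elementary steps a Turing-style rewriting
        # machine would need.
        total, steps = 0, 0
        for _ in range(a):
            for _ in range(b):
                total += 1   # one elementary step per increment
                steps += 1
        return total, steps

    print(multiply_native(200, 300))             # 60000, in one native operation
    print(multiply_by_unary_counting(200, 300))  # (60000, 60000): sixty thousand steps

Same result in both cases; the simulation just pays a far higher computational price.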

I don’t see how you can come to appreciate what Lamb or Gärdenfors are up to if you’re always putting “as-if” in front of “computing”. 

Over to all-or-nothing:

[snip]

> Similarly, about the effects of deeply buried digital logic, Bill's word 'responsible' (as in causative?) I think is far too strong. I'm not wanting to suggest uni-directional causation. I've not yet figured out how to talk about the relationship between co-occurrent historical phenomena that somehow have something to do with each other.

It seems to me, Willard, that perhaps you’re jumping the gun here by simply assuming a “deeply buried digital logic” when that logic might not be there. Maybe it’s just coincidence. I think we need to look at these things case by case.

Let me offer a different case. A few years ago Tara McPherson published an argument about race and the UNIX operating system during the 1960s [1]. Invoking the idea of modularity, she argued for a deep connection between the two, but denied that the connection was causal in either direction. She talked of race as a kind of “operating system” and, more generally, of resonance. But all she actually has is a resemblance established through her own argument.

I think the argument is ingenious, even brilliant, but also empty. As I remarked at the end of a long blog post [2]:

> McPherson’s argument reads best as a critical analysis of a sprawling Pynchonesque novel. In that case everything in the novel is the product of the author’s mind, though not necessarily the result of conscious deliberation. Why, in this particular novel, do certain computational processes and certain social processes resemble one another? Because that’s what the author wrote. The fact that these two things resemble one another in the novel is thus “insulated” from, at a distance from, the dynamics of the real world.

You say:

> And I certainly don't want to be understood as arguing that all-or-nothing logic is new. Thinking like that is surely as old as life. Rather I am suggesting that the binary logic of our machines nudges us in certain directions, reinforces something else that's doing it, co-manifesting a tendency of our time. 

But what reason do you have to think that, other than the widespread existence of all-or-nothing phenomena?

Yours,

BB

[1] Tara McPherson, “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX”, in Race After the Internet, Lisa Nakamura, Peter Chow-White, and Alondra Nelson, eds., New York: Routledge (2011), pp. 21-37. There’s a PDF online here: http://history.msu.edu/hst830/files/2014/01/McPherson_2012.pdf
For a brief informal statement, see her Henry Jenkins interview, March 20, 2015, “Bringing Critical Perspectives to the Digital Humanities: An Interview with Tara McPherson (Part Three)”: http://henryjenkins.org/2015/03/bringing-critical-perspectives-to-the-digital-humanities-an-interview-with-tara-mcpherson-part-three.html

[2] William Benzon, “Transcendental Critique? The peculiar case of UNIX and race in America in the 1960s”, New Savanna (blog), December 30, 2016: https://new-savanna.blogspot.com/2016/12/transcendental-critique-peculiar-case.html
Bill Benzon
bbenzon at mindspring.com

646-599-3232

http://new-savanna.blogspot.com/
http://www.facebook.com/bill.benzon
http://www.flickr.com/photos/stc4blues/
https://independent.academia.edu/BillBenzon
http://www.bergenarches.com/#image1



--[2]------------------------------------------------------------------------
        Date: Sun, 23 Jul 2017 19:39:37 -0400 (EDT)
        From: lachance at chass.utoronto.ca
        Subject: Re:  31.195 'computational' and the all-or-nothing
        In-Reply-To: <20170722071641.69FD22ED2 at digitalhumanities.org>

Willard

It was the combination of "buried digital logic" and "co-occurrent
historical phenomena" that put me in mind of N. Katherine Hayles. She
often evokes feedback loops to explain the relationships between what
she terms metaphors and means. In <i>My Mother Was a Computer</i> she
has a fine formulation about these deeply intertwined planes. She links
(the quotation marks are hers) "what we make" and "what (we think) we
are". Does this begin to help?
>
> Similarly, about the effects of deeply buried digital logic, Bill's word
> 'responsible' (as in causative?) I think is far too strong. I'm not
> wanting to suggest uni-directional causation. I've not yet figured out
> how to talk about the relationship between co-occurrent historical
> phenomena that somehow have something to do with each other.
> I try to avoid using the term 'Zeitgeist'. And I certainly don't want to
> be understood as arguing that all-or-nothing logic is new. Thinking
> like that is surely as old as life. Rather I am suggesting that the binary
> logic of our machines nudges us in certain directions, reinforces
> something else that's doing it, co-manifesting a tendency of our time.
> You see I really haven't figured out how to say this. Help, anyone?

-- 
Francois Lachance
Scholar-at-large
http://www.chass.utoronto.ca/~lachance




