[Humanist] 23.255 saying everything

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Sat Aug 29 09:11:49 CEST 2009

                 Humanist Discussion Group, Vol. 23, No. 255.
         Centre for Computing in the Humanities, King's College London
                Submit to: humanist at lists.digitalhumanities.org

        Date: Thu, 27 Aug 2009 12:06:07 -0400
        From: David Golumbia <dgolumbia at gmail.com>
        Subject: Re: saying everything

my favorite topics again, combined in a new way i think by two especially
thoughtful commentators...i can't resist.

first, Willard quotes some really fascinating statements of von Neumann,
whose subtlety on these topics has rarely been matched even today:

> > The first task that arises in dealing with any problem -- more
> > specifically, with any function of the central nervous system -- is
> > to formulate it unambiguously, to put it into words, in a rigorous
> > sense. If a very complicated system -- like the central nervous
> > system -- is involved, there arises the additional task of doing this
> > "formulating," this "putting into words," with a number of words
> > within reasonable limits -- for example, that can be read in a
> > lifetime. This is the place where the real difficulty lies.
> >
> > In other words, I think that it is quite likely that one may give a
> > purely descriptive account of the outwardly visible functions of the
> > central nervous system in a humanly possible time.

Unless I'm misreading, von Neumann's point concerns not what is happening IN
the nervous system but the task of providing a rigorous model of the
operations of the nervous system, presumably based on the neural-network
results of McCulloch & Pitts (see the word "imitating" below).

> > I suspect, however, that it will turn
> > out to be much larger than the one that we actually possess. It is
> > possible that it will prove to be too large to fit into the physical
> > universe. What then? Haven't we lost the true problem in the process?

Here is vN's first remarkably prescient comment, in my opinion. The rough
computational approach to cognitive problems--breaking each problem down into
its 'most atomic' ingredients and then using combinatorial rules to implement
a system of 'grammar'--entails an *explosion* of description, because
everything is not merely a pointer but a "labeled" pointer--in Saussurean
terms, a signified plus a signifier plus a 2nd-level signifier pointing at
the first. And as you start to process these entities you end up adding
3rd-order labels, etc...
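a toy sketch of that explosion (my own illustration, not vN's or Saussure's actual formalism): model each sign as a labeled pointer, and require each new order of processing to label every description produced so far. the total description then doubles at every level.

```python
# Toy model of the labeled-pointer explosion: a hypothetical illustration,
# not von Neumann's (or Saussure's) actual formalism.

def descriptions(sign, orders):
    """All labels needed to describe `sign` through `orders` levels
    of higher-order labeling (each level labels everything so far)."""
    labels = [sign]
    for n in range(1, orders + 1):
        # an nth-order signifier must point at every existing description
        labels += [f"label{n}({prior})" for prior in list(labels)]
    return labels

print(len(descriptions("tree", 3)))   # 8: doubles at each order
print(len(descriptions("tree", 10)))  # 1024: 2**10
```

one sign already needs 2**n descriptions after n orders of labeling; a lifetime of words runs out quickly.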

Clearly the brain is not doing this--and in this rough sense, but one I find
consonant with others of vN's writings, the brain is not a computer, at least
not in the typical sense of the word "computer."

> > Thus the problem might better be viewed, not as one of imitating the
> > functions of the central nervous system with just any kind of
> > network, but rather as one of doing this with a network that will fit
> > into the actual volume of the human brain. Or, better still, with one
> > that can be kept going with our actual metabolistic "power supply"
> > facilities, and that can be set up and organized by our actual
> > genetic control facilities....
> >
> > The problem, then, is: How does the brain do all
> > the things that it can do, in their full complexity? What are the
> > principles of its organization? How does it avoid really serious,
> > that is, lethal, malfunctions over periods that seem to average many
> > decades?
> When you drive the problem of markup to its breaking point, isn't a
> similar realisation to be had -- that the real question is being missed?
> What do you suppose that question is?

I'm not sure I know what you mean by "the problem of markup" here, Willard,
but I'm inferring it's something like this: when approached with any sort of
digital object, but particularly a textual object, what is our "ultimate"
goal in marking it up? what are "all" the elements we need to mark up--what
is the general schema according to which we do it?

i have been posed this problem by people who should know better in the form:
here is a poem on a page, now "mark it up."

there is no comprehensive answer to this question. markup is not doing what
language is doing, even if we believe cognition is using language altogether
(which i find doubtful). the brain does not need to mark up everything in the
linguistic environment to "use" it, because that would cause exactly the
computational explosion described above.
So then Wendell replies:

> Dear Willard,
> I am afraid that von Neumann's generalization is based on a false
> assumption that problem-solving in the world must be like the
> application of logical algorithms to abstract information processing.

exactly--except that I hope vN's point is exactly yours, and goes even
further, as is evident from others of vN's writings: that the brain is
largely analog--that the abstract info-processing model doesn't look like
what the brain is doing "in reality," even if it can mimic many cognitive
functions.
> As a counter, I would offer the work of Andy Clark, whose amazing
> book Being There: Putting Brain, Body and World Together Again
> (1997), effectively refutes this entire premise and suggests many
> other ways that minds (both "natural" and "artificial") might proceed
> to work with the world, which do not entail such impossible notions
> as complete and unambiguous problem specification. I can't paraphrase
> the work adequately here. But the Wikipedia page on Clark does offer
> a suggestive account.

i could not agree more. note that Clark is a connectionist, deeply aware of
and invested in research about and using technology, and in fact has a very
recent and wonderful (maybe his best) book, Supersizing the Mind (Oxford
2008) which goes into an amazing amount of detail about technical
augmentation of the mind, the very real and material ways in which "the
mind" continues to reach outside of our heads, while completely resisting
the Kurzweilian (and imo almost entirely inchoate) belief in the
brain-as-computer, with its bizarre and (imo self-contradictory) consequence
of "uploading ourselves."

> I think the question is not how do we fully specify a problem and
> avoid ambiguity, but rather, how do we take advantage of the
> ambiguity that is inevitable and cultivate the ambiguity that is
> useful? The answer lies somewhere in the neighborhood of "feedback"
> and in the recognition that while markup has logical dimensions
> especially in its application, the correct locus for a comprehensive
> account of markup is not in logic but in rhetoric.

so back to the "markup problem." i'll suggest that there is no problem, in a
Wittgensteinian sense. markup is a kind of language game--a game with a set
of rules, meaningful only with regard to the set of rules being invoked, and
in that sense domain-, context- and use-specific.

"mark up this poem to go in our archive of 18c English lyric": done and
done--and relevant to the way we use language, but probably not to the way
the actual neurons work.

"mark up this poem": give me a break
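to make the contrast concrete, here is a sketch of one line of 18c English lyric (Gray's) marked up under two different games. the element and attribute names are hypothetical, loosely TEI-flavored, not any real schema; each rendering is an answer under one rule set, and neither is *the* markup of the line.

```python
# Two "language games" applied to the same line of 18c English lyric.
# Element/attribute names are hypothetical, loosely TEI-flavored.

line = "The curfew tolls the knell of parting day"

# game 1: a prosody archive cares about metrical form
metrical = f'<l met="iambic-pentameter">{line}</l>'

# game 2: a linguistic corpus cares about part-of-speech tags
tagged = " ".join(
    f'<w pos="{pos}">{word}</w>'
    for word, pos in [("The", "DET"), ("curfew", "NOUN"), ("tolls", "VERB"),
                      ("the", "DET"), ("knell", "NOUN"), ("of", "ADP"),
                      ("parting", "VERB"), ("day", "NOUN")]
)

# each is "done and done" under its own rules; neither is comprehensive
print(metrical)
print(tagged)
```

asked without a rule set -- "mark up this poem" -- there is nothing to be done and done *with*.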

