[Humanist] 23.249 saying everything that needs to be said?

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Tue Aug 25 09:22:28 CEST 2009


                 Humanist Discussion Group, Vol. 23, No. 249.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Mon, 24 Aug 2009 15:16:58 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: saying everything that needs to be said

The following is from John von Neumann, "The General and Logical Theory 
of Automata", in Cerebral Mechanisms in Behavior. The Hixon Symposium, 
ed. Lloyd A. Jeffress (1951), pp. 33-4, more specifically in the 
discussion that followed his paper. Possibly very few here are 
interested in the theory von Neumann argued we don't have, and not at 
all in neurophysiology. I quote von Neumann's comment, rather, for its 
much broader suggestiveness toward a theory of textual computing. Think, 
in particular, about markup.

> The first task that arises in dealing with any problem -- more
> specifically, with any function of the central nervous system -- is
> to formulate it unambiguously, to put it into words, in a rigorous
> sense. If a very complicated system -- like the central nervous
> system -- is involved, there arises the additional task of doing this
> "formulating," this "putting into words," with a number of words
> within reasonable limits -- for example, that can be read in a
> lifetime. This is the place where the real difficulty lies.
> 
> In other words, I think that it is quite likely that one may give a
> purely descriptive account of the outwardly visible functions of the
> central nervous system in a humanly possible time. This may be 10 or
> 20 years -- which is long, but not prohibitively long. Then, on the
> basis of the results of McCulloch and Pitts, one could draw within
> plausible time limitations a fictitious "nervous network" that can
> carry out all these functions. I suspect, however, that it will turn
> out to be much larger than the one that we actually possess. It is
> possible that it will prove to be too large to fit into the physical
> universe. What then? Haven't we lost the true problem in the process?
> 
> Thus the problem might better be viewed, not as one of imitating the
> functions of the central nervous system with just any kind of
> network, but rather as one of doing this with a network that will fit
> into the actual volume of the human brain. Or, better still, with one
> that can be kept going with our actual metabolistic "power supply"
> facilities, and that can be set up and organized by our actual
> genetic control facilities....
> 
> The problem, then, is not this: How does the central nervous system
> effect any one, particular thing? It is rather: How does it do all
> the things that it can do, in their full complexity? What are the
> principles of its organization? How does it avoid really serious,
> that is, lethal, malfunctions over periods that seem to average many
> decades?

When you drive the problem of markup to its breaking point, isn't a 
similar realisation to be had -- that the real question is being missed? 
What do you suppose that question is?

Yours,
WM
-- 
Willard McCarty, Professor of Humanities Computing,
King's College London, staff.cch.kcl.ac.uk/~wmccarty/;
Editor, Humanist, www.digitalhumanities.org/humanist;
Interdisciplinary Science Reviews, www.isr-journal.org.




