[Humanist] 29.155 machines, machines everywhere

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Jul 10 23:33:27 CEST 2015


                 Humanist Discussion Group, Vol. 29, No. 155.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Fri, 10 Jul 2015 09:26:57 +1000
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: gears


John Naughton has reminded me of the Foreword to Seymour Papert's 
Mindstorms: Children, Computers and Powerful Ideas (1980). For thinking 
about machines and the meaning they have for us it is, I think, 
essential reading. Thanks to the Lifelong Kindergarten at MIT, the 
Foreword itself may be found at 
https://llk.media.mit.edu/courses/readings/gears-v1.pdf. (The seasoned 
URL-choppers among us will likely discover several other items of 
interest at https://llk.media.mit.edu/courses/readings/.) Papert's whole 
book, though poorly formatted, may be found at 
http://www.arvindguptatoys.com/arvindgupta/mindstorms.pdf.

Asking how children learn, rather than what scholars want (i.e. what 
they know they want), seems to me a far superior starting point. 
Neurological plasticity and the odd extraordinary experience 
along the line give me hope that it is not only children and the 
professional lives of developmental psychologists that will benefit.

The phenomenon of "reflected analogy" that Tim Smithers has 
pointed to I prefer to think of as co-evolutionary development, 
with no implication of progress, just a 'rolling out'. Still, a 
problem remains with that term, since it suggests a rolling 
out of what is already there to be rolled out. Is it? Perhaps. 
Ian Hacking's term "looping effects" (in Rewriting the Soul) 
avoids that problem but carries the implication of closed 
circularity. In any case, technological history provides 
abundant evidence that we refashion ourselves from our 
inventions, as McLuhan also noted somewhere. I prefer 
to think of this as just what happens, with no moral 
judgment of the process -- as long as we remain 
self-aware, and so able to step back and look critically, 
speculatively, at what we're doing. Let us say we are 
*as if* composed of a gazillion biological nanobots. 
What follows from that? Why are we thinking in this way?
In the moment before as-if becomes is, there's a chance 
for some insight.

Comments?

Yours,
WM
-- 
Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London, and Digital Humanities Research
Group, University of Western Sydney



