[Humanist] 29.185 machines: reading, thinking, creating
Humanist Discussion Group
willard.mccarty at mccarty.org.uk
Thu Jul 23 03:05:40 CEST 2015
Humanist Discussion Group, Vol. 29, No. 185.
Department of Digital Humanities, King's College London
Submit to: humanist at lists.digitalhumanities.org
Date: Wed, 22 Jul 2015 12:35:39 +1000
From: Willard McCarty <willard.mccarty at mccarty.org.uk>
Subject: open and closed
Don Braxton's continuing provocation to think about the shocks present
and in store for us from machines I find most welcome. The hedge-talk of
political guardianship among the humanities that he speaks of is a huge
problem, though hardly a surprise. It's a problem for us because we
work across disciplines. And it is, I think, a rear-guard action in a time
when online publication and distribution mechanisms more than allow
anyone from any discipline to listen in to the conversations going on
elsewhere. True, that listening in can be exceedingly demanding if
you actually try to understand the context, but still it is easily begun.
The form of this problem that I find most interesting is brought on by
convergences between the humanities and the sciences thanks
among other things to the technoscientific machine we've adopted.
These convergences spook a number of us in digital humanities and
beyond. Thus unsettled, we reassure ourselves with clubby talk
of how superior we are, as Don says. Some of this clubby talk is
very clever, very sophisticated, but it does anything but help.
The insecurity just beneath the surface of these defensive reactions
is obvious. But here things get especially interesting. I like to ask:
what is at the bottom of this insecurity? What tender spot is
technoscience poking, increasingly vigorously these days? Why
do some of us try to denigrate mathesis (Foucault's science of
calculable order) as if this were necessary to honour poiesis
(the bringing forth of things)? As an old friend of mine used to say,
you don't get taller by cutting off the legs of others.
When computing was new, signs of insecurity were more visible
than they are now, but the problem has only gone underground,
and not far under. In my experience it comes out when you talk to
colleagues not about marvellous new tools but about what happens
to research in their areas when these tools are applied critically,
not just to make the work they already do faster and more convenient
but to question it fundamentally. Again disciplinary invasion warnings
are triggered, fears awakened and immune systems engaged.
What *is* the nature of what we're playing at with our new toys? What
*is* this sandbox (of the human) that we're in? Has anyone noticed
that its limits are very blurry indeed? I recommend Evelyn Fox Keller's
many writings about the effects of computational biology and biological
computing on our idea of the human. We are, as Don says, in a
position of power -- but mostly don't know that.
Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London, and Digital Humanities Research
Group, University of Western Sydney