[Humanist] 24.82 reviewing digital scholarship

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Thu Jun 3 07:05:56 CEST 2010


                  Humanist Discussion Group, Vol. 24, No. 82.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Wed, 02 Jun 2010 09:13:28 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: reviewing digital scholarship


In his Brainstorm article, "Reviewing Digital Scholarship", Stan Katz
notes the problem of reviewing born-digital material that is formally
distinguishable from books and articles, i.e., primarily digital
resources that take the form of databases. He identifies "the more
urgent need... for all of the major humanities journals to develop the
interest and capacity for reviewing digital scholarship as a matter of
course". Let me propose an analogy with which to think about this
recommendation. (I must declare at the outset that in this, as in so many
other areas of scholarship, I only observe what others do.)

When, for example, a manuscript scholar publishes an edition, say of the
Latin glosses to Martianus Capella's De nuptiis Philologiae et Mercurii
(On the Marriage of Philology and Mercury) as manifested in extant
9th-century manuscripts, where is it reviewed? By whom is it reviewed? How?

I think the answer to the first question would be, in journals which
specialise in manuscript studies, not in journals, say, of early
medieval history -- except for any introductory chapters the edition
might contain, chapters addressing methods and implications of the
study, which might well have appeared in the latter sort of journals.
And by whom is the edition reviewed? Best, of course, by someone who has
seen some, if not many or even all, of the manuscripts in question.
Otherwise a reviewer who understood the text and the period in which it
was glossed might raise questions about surprising results from the work
but wouldn't be able to do much more, I'd suppose. And that answers the
last question as well -- the edition would have to be reviewed by
inspection of the evidence itself, in some cases only by seeing the
actual manuscripts concerned.

There's also the problem of time. Working in manuscript studies is a slow
business, and one can expect an edition to be properly appreciated only after
quite a long time has elapsed, during which it has been used and its
subtleties have come to light.

So, to my questions about digital resources, i.e., databases and the like:
Who will be able to get down to where the decisions are embedded in
software? What sort of discussion would this getting down entail? Who
would be able to understand it? And so in what sort of publication would
it appear? How is this going to happen quickly enough that the review,
when it appears, still addresses something people use? Who will be able
to afford the time it takes to review a resource properly, critically?
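
To make concrete what I mean by decisions embedded in software, here is a
purely invented sketch -- nothing in it comes from any actual project; the
names, the spelling-variant table and the normalisation rule are mine, for
illustration only -- of the kind of code that might load gloss records into
a database. The one consequential decision, collapsing spelling variants,
leaves no trace in the finished resource:

    # Hypothetical loader for a database of manuscript glosses (Python).
    # The embedded decision: spelling variants are silently collapsed, so a
    # user of the finished resource cannot tell whether "michi" ever
    # appeared in the sources at all.
    import sqlite3

    SPELLING_VARIANTS = {"michi": "mihi", "nichil": "nihil"}  # invented list

    def normalise(word):
        """Collapse known spelling variants to a single headword."""
        return SPELLING_VARIANTS.get(word.lower(), word.lower())

    def load_glosses(records):
        """Store (manuscript, lemma, gloss) triples, normalising the lemma."""
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE gloss (ms TEXT, lemma TEXT, gloss TEXT)")
        for ms, lemma, gloss in records:
            db.execute("INSERT INTO gloss VALUES (?, ?, ?)",
                       (ms, normalise(lemma), gloss))
        return db

    # A reviewer who searches the database for "michi" finds nothing, and
    # cannot know from the resource alone whether that reflects the
    # manuscripts or the code.
    db = load_glosses([("MS A", "michi", "to me")])
    print(db.execute("SELECT lemma FROM gloss").fetchall())  # [('mihi',)]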

Does this mean, then, that historians, say, must take the technical work and
all those embedded decisions on faith? I ask here the question that John
Burrows, for example, asks of work in stylometry: how much confidence can
scholars have in the work? I would think (from watching Burrows for some
years) that the confidence is built in part from the reasonableness of the
conclusions but more from his patient accounts, which themselves require
patience and attention to follow, of how he works: the stages of gradual
advance from what we already know toward what we don't. But then with
Burrows' sort of work no one has to travel anywhere, for example to St
Petersburg and negotiate the bureaucracy of the state library, or to the BNF
in Paris only to be told that he or she has already seen a certain manuscript
the maximum number of times the rules allow. Anyhow, you get the idea.

From all this I conclude, once again, that presenting the digital object
isn't enough. One has to present and reflect on the process -- which means
among other things paying attention as a participant observer while the work
is going on. So we're talking here about a different sort of reviewing for a
different sort of scholarly object, presenting quite different problems from
those encountered previously.

If there's anyone here with experience of simulation in the physical
sciences, it might be useful to have some commentary on how confidence in
simulations is built, where the authority comes from. Partly, I suppose, it
would come from the researcher, the lab. How is consensus built up? When one
builds a model of what one understands a physical system to be, then turns
it loose and observes phenomena not otherwise observable -- say, processes
at the centre of a star -- how much can one depend on these (simulated)
phenomena? How closely does one resemble Wile E. Coyote, who runs off a
cliff and is fine until he looks down? (See e.g.
http://viper.haque.net/~timeless/blog/144/coyote-06.jpg or
http://libweb5.princeton.edu/Visual_Materials/gallery/animation/jpeg/animation3.jpeg.)
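
By way of a toy illustration, again entirely invented: a few lines of Python
integrating the simplest decaying system, dx/dt = -x, by the forward Euler
method. Here an exact answer exists, so one can see how much the single
embedded numerical decision, the step size, matters; for processes at the
centre of a star there is nothing comparable to check against:

    # Toy simulation: dx/dt = -x with x(0) = 1; exact solution is exp(-t).
    # The step size is a decision embedded in the code, invisible in the
    # output unless one has something independent to compare with.
    import math

    def simulate(step, t_end=5.0):
        """Forward-Euler integration of dx/dt = -x from x(0) = 1."""
        x = 1.0
        for _ in range(int(round(t_end / step))):
            x += step * (-x)
        return x

    exact = math.exp(-5.0)
    for step in (1.0, 0.1, 0.001):
        print(f"step {step}: simulated x(5) = {simulate(step):.5f}, "
              f"exact = {exact:.5f}")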

Comments?

Yours,
WM
--
Willard McCarty, Professor of Humanities Computing,
King's College London, staff.cch.kcl.ac.uk/~wmccarty/;
Editor, Humanist, www.digitalhumanities.org/humanist;
Interdisciplinary Science Reviews, www.isr-journal.org.





