[Humanist] 27.370 great works of scholarship

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Tue Sep 24 09:53:51 CEST 2013

                 Humanist Discussion Group, Vol. 27, No. 370.
            Department of Digital Humanities, King's College London
                Submit to: humanist at lists.digitalhumanities.org

        Date: Mon, 23 Sep 2013 10:18:36 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: great works and black boxes

In response to Tara Andrews about software systems as black boxes, and so
their impenetrability to inspection: I wasn't saying that code could not be
understood by reading it. Clearly it can. As a youth writing Fortran and
assembler language for some big machines I spent many hours reading and
sometimes understanding code. But when software systems grew beyond the
ability of any one person to hold the whole thing in mind -- around the time
when the DEC-10 operating system was superseded, I'd guess -- then I
suspect that we could no longer say that such systems could be known for
what they might do, however legible individual components might be. The
distinction I was drawing with mathematics was one between code that states
("x equals 1") and code that commands something be done ("set x equal to
1"). When the balance of a work shifts from the former to the latter, and
becomes complex in the technical sense, then we have to refigure how we
understand and judge it, yes?
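The distinction between stating and commanding can be put in a few lines of present-day code. This is a sketch only, in Python rather than the Fortran or assembler mentioned above, and the variable names are mine, not drawn from the discussion:

```python
x = 1

# Declarative: "x equals 1" -- a statement whose truth can simply be
# inspected; evaluating it leaves the state of the machine unchanged.
claim = (x == 1)

# Imperative: "set x equal to 1" (or here, increment it) -- a command
# that alters the machine's state, so that understanding it requires
# knowing what came before and what comes after.
x = x + 1
```

The first form can be read and judged in isolation, like a proposition; the second acquires its meaning only within the unfolding behaviour of the whole system, which is where the black-box problem begins.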

I was asking, from the perspective of a scholarly user of a large online
system: how can he or she tell what it is doing? How could anyone know
whether such a system is a great work of scholarship -- as opposed, say, to
a great resource with which to do scholarship but which itself is not a work
of scholarship, or which puts its critical, scholarly work beyond the
inspection of the user? 

Let's consider, say, a scholarly edition of a work, such as an edition of
the glosses to Martianus Capella, De nuptiis, recently published in the
Corpus Christianorum series. This edition, I happen to know, was critically
compiled from 25 widely distributed mss, involving considerable travel and
then detailed work over several years. In practical terms no one except the
editor, or very, very few other scholars, will ever see all 25 mss, until
all the libraries holding these mss digitize them. (Don't hold your breath.)
So how does a reviewer tell that the new edition is a great work of
scholarship, if it is? The editor, since she knew what she was doing,
worried about this, and so made sure that the prose introduction on the mss
was as complete and thorough as possible, that the Latin was correct, that
every possible clue was provided, and that everything she had learned about
the mss from examining them, along with her reasoning, was explained. The publishing
house, Brepols, made sure no turned stone was left undescribed. Still, of
course, much remains necessarily hidden from the sight of the rest of us,
but the amount of explanation is quite astonishing.

What do the makers of our great works of digital scholarship analogously do
to allow them to be as thoroughly known?

I'm not saying our works of digital scholarship, great or otherwise, fail in
this regard. Rather I was trying to get us beyond making unsubstantiated
claims of greatness. I was saying in effect, you cannot tell a book by its
cover, then (to follow the metaphor) asking how far beyond the cover -- and
table of contents and index -- can we get with an online system as it is
presented to the ordinary user? A good software review will tell us whether
the thing does as advertised, and it will also make informed judgments as to
whether the design fits the purpose. Doesn't saying that a system is a great
work of scholarship reach considerably further than that?


Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London, and Research Group in Digital
Humanities, University of Western Sydney
