[Humanist] 27.401 great works of scholarship

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Oct 4 10:28:46 CEST 2013


                 Humanist Discussion Group, Vol. 27, No. 401.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Thu, 3 Oct 2013 14:56:00 +0200
        From: Tara Andrews <taralee at alum.mit.edu>
        Subject: Re:  27.370 great works of scholarship
        In-Reply-To: <20130924075351.50D8E306C at digitalhumanities.org>

Dear Willard,

On the subject of software systems and how we can comprehend and
evaluate them, you write:

> But when software systems grew beyond the
> ability of any one person to hold the whole thing in mind -- around the time
> when the DEC-10 operating system was superseded, I'd guess --  then I
> suspect that we could no longer say that such systems could be known for
> what they might do, however legible individual components might be.

Moving away from software for a moment, let us imagine each field
within which we work as a system of its own, with its own axioms and
emphases and consequences. I am reminded of a discussion a few years
ago concerning the future of my own 'home' subject, Byzantine studies,
a field in which it is notoriously difficult to find an academic post
even by the standards of the humanities as a whole. A recently retired
luminary of the field, invited to give his impressions, stated that
the best thing about Byzantine history was that, after many years of
study, a single individual could indeed aspire to comprehend the
entirety of Byzantium and the world around it. When I heard this I
found it deeply troubling, threatening even, and I hoped desperately
that no funding bodies or hiring committees would ever hear it said.
If a single scholar can indeed comprehend the whole of Byzantine
history, then what point is there in employing multiple people to try?
What need is there for anyone to take it seriously as a field?

In fact Byzantium and its milieu are, like any other period of history,
far too vast and complex for me or any other single person to
understand in both breadth and depth. We cannot examine
everything ourselves, and in order to continue with our research,
whether as historians, archaeologists, literary scholars, scholars of
art, or whatever else, we have always had to rely to some extent on
exactly the sort of conceptual 'black boxes' that are causing such
consternation in DH at the moment. I am a historian first and a
textual scholar second, and claim no expertise at all in the other
fields; I do not have the practical capacity, for example, to engage
with the axioms and methodological theory of art history. To what
extent, then, should I incorporate the findings and conclusions of art
historians into my own research? To what extent should an
archaeologist turn her talents to textual historical source forensics,
which is the sort of thing I do, in order to carry on with the
archaeology at hand?

And so we come back to software systems, which have grown so large
that we can no longer claim to comprehend them in their entirety, and
this makes some of us uncomfortable. But to me this seems to be the
same situation as we have always faced, in Byzantine studies at least.
I have to take on faith that the database software works as
advertised, that the XML parser does its job properly, that the
JavaScript libraries for manipulating SVG will put the correct image
in the web browser. As I work with these things to build my own
system, of course, I have some expectations concerning the results I
will see, and if I see something substantially different then I will
pause and try to bridge the gap between expectation and reality.
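
To make this concrete with a deliberately trivial sketch - every name
in it is hypothetical, and the check proves nothing in general - one
way to keep a black box honest is to feed it a fragment whose answer
we already know, and to let the machine complain if expectation and
reality diverge:

    # A tiny sanity check in Python: parse a fragment whose structure
    # we already know, and assert that the 'black box' (here, the
    # standard library's XML parser) returns what we expect of it.
    import xml.etree.ElementTree as ET

    fragment = '<witness siglum="A"><rdg>lectio difficilior</rdg></witness>'
    root = ET.fromstring(fragment)

    # Our expectations, stated explicitly rather than taken on faith.
    assert root.tag == 'witness'
    assert root.get('siglum') == 'A'
    assert root.find('rdg').text == 'lectio difficilior'
    print('The parser behaves as advertised, at least on this fragment.')

Such a check does not prove the parser correct, of course; it merely
narrows the territory within which surprise can hide.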

But here I suppose we might come to a difference between historical
disciplines and software. We understand (more or less) the conclusions
of art historians or archaeologists or textual scholars by reading
their arguments and reflecting on them, applying the conclusions to
our mental models and considering the consequences. In theory we could
do the same with computer code, of course - read it, work it through
in our heads, understand the argument, even read the source code for
the sub-systems upon which our code depends - but we know that in
practice no one does this for more than a very small subset of any
given program. And no wonder - code is not a good medium for
human-to-human communication! This is why we are constantly nagging
software developers to document, document, document. By and large it
is the documentation that sets our expectations for what the code
does. Only then do we grasp its logic and its argument, and only then
can we put it to the test. In a way, of course, this is very similar
to the example you give of the glosses to Martianus Capella. In theory
we could take the enormous trouble to replicate the scholarship, but
in practice we rely on the introduction - the documentation - to set
our expectations.
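
Some programming cultures have even tried to collapse the distance
between the documentation and the test. Python's doctest module, to
give one example, executes the worked examples embedded in a
function's documentation and complains when they no longer hold. The
little function below is entirely made up, but the mechanism is real:

    def regularise_siglum(siglum):
        """Normalise a manuscript siglum: strip spaces, upper-case it.

        The examples that follow are documentation and test at once:

        >>> regularise_siglum('a')
        'A'
        >>> regularise_siglum(' b 2 ')
        'B2'
        """
        return siglum.replace(' ', '').upper()

    if __name__ == '__main__':
        import doctest
        doctest.testmod()  # put the documentation itself to the test

When the documentation lies, the test fails; expectation and reality
are forced to meet.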

(Another question arises from this, somewhat tangential to the main
one: will it ever be possible to create a symbolic language for the
expression of humanities concepts in computable ways?
This is an idea that I have heard from Joris van Zundert - in fact,
something of it was expressed in the final Interedition bootcamp in
2012 - and the concept reminds me very much of the dream of Leibniz to
have a 'universal characteristic'. But that is a very large discussion
all of its own.)

The other difference that arises with software comes in how we put the
work to the test. Where in the historical disciplines we read and
reflect, using the new information as a new set of mental building
blocks, in computing we take the 'building block' metaphor rather more
literally. We try to build something. If it doesn't do as we expect,
then we take a closer look at the code and the documentation, and try
to work out whether the fault lies in our expectations or in the
building block itself.

Perhaps the dissonance we are seeing arises from the feeling that, as
long as we are making things, we are not evaluating them or critiquing
them? That seems, at any rate, to correspond with the 'tensions
between theoretical critique and productive theory' that Katherine
Hayles has observed. I do think that in this case it is a false
dichotomy.

And so I will come back to your overarching question and give an
answer, insofar as one has worked for me up to now. How do we evaluate
software for the substance of its contribution to a field; how do we
say whether it is a great work of scholarship? We test its rhetoric -
read the documentation - against what we know; we test its substance -
use the software - to create our own structures of knowledge, and see
how well they stand up. And I do believe that we have to do both.

Finally, maybe this is why peer review of scholarly works of software
poses such a thorny challenge for us. The time and effort it takes to
read an article and think about its implications tends, more or less,
to fill exactly the time we have to devote to the task. Making proper
use of a piece of software, on the other hand, and especially
incorporating it into something else we build, has a rather higher
minimum cost in time and effort. As a result, the number of people in
a position to provide a good review of any particular piece of
scholarly software within a reasonable time frame - those who actually
have a use for it, or at least have suitable digital data on hand to
experiment with, as opposed to those who might spare a little
theoretical consideration but no more - is necessarily going to be
much smaller.

Best wishes,
-tara

--
Tara L Andrews
Assistant Professor in Digital Humanities
Universität Bern, Institut für Klassische Philologie
Länggassstrasse 49, CH-3000 Bern 9
Office: Gesellschaftsstrasse 2, 237C
tel +41 31 631 34 49 / fax +41 31 631 44 86




