[Humanist] 26.605 XML, TEI and what kind of scholarship?

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Wed Dec 19 08:41:23 CET 2012


                 Humanist Discussion Group, Vol. 26, No. 605.
            Department of Digital Humanities, King's College London
                              www.dhhumanist.org/
                Submit to: humanist at lists.digitalhumanities.org



        Date: Tue, 18 Dec 2012 09:21:46 +0000
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: XML, TEI and what kind of scholarship?


My King's colleague Elena Pierazzo's message several days ago drew much
needed attention to the disciplinary perspective from which the question
of markup is considered. She made the valuable point that systematic
markup offers the textual editor the ability to record minute decisions
at the location in the text where they are made. In the job-defining
role of *editor*, one must decide about this or that variant, mark of
punctuation and so on, but without markup and the computing that goes
with it there is no way of recording those decisions at the minute
level of detail at which they are made. With it, they can be. (Textual
editors who know better, please contradict.)
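Pierazzo's point can be made concrete with a schematic fragment of
TEI-style critical apparatus -- a sketch only, with an invented reading
and invented witness sigla, showing how a decision between variants is
recorded exactly where it is made:

```xml
<!-- Hypothetical line of verse with one disputed word.
     "#A" and "#B" are invented witness sigla; the editor's
     choice is recorded in situ as the <lem> (lemma), with
     the rejected reading kept alongside it as a <rdg>. -->
<l n="12">The
  <app>
    <lem wit="#A" resp="#ed">quick</lem>
    <rdg wit="#B">quicke</rdg>
  </app>
  brown fox</l>
```

The apparatus travels with the text itself, so the decision and its
evidence are never separated from the place in the text they concern.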

Another colleague, whose passion is ancient inscriptions, pointed out to
me some time ago that markup is similarly well-suited to epigraphy --
because of what she called the "reporting function" of that discipline.
The epigrapher witnesses and publishes surviving inscriptional evidence
while it still exists, before someone defaces it, carts it away and
sells it on the black market, or the weather wears it away. The
epigrapher provides material for the benefit of other scholars. Markup
and associated technologies are a godsend.

For the literary scholar, however, interpretation is a different matter,
requiring a very different disciplinary style and making very different
demands on the technologies we devise to assist it. My 10 or so years
devoted to markup (pre-TEI) taught me that it is not in principle
well-suited to the literary critic's interpretative practices. Jerome
McGann has made this point forcibly numerous times.

To a publisher, text as an "ordered hierarchy of content objects" makes
perfect sense. To a literary critic it is laughable nonsense. To a
philosopher it is an interesting hypothesis, I would suppose, whose
implications need working out. To an historian it is evidence of people
thinking in a particular way at a particular time, raising the question
of how they came to think thus.
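The critic's objection is easy to dramatize. In a strict hierarchy
every element must nest inside exactly one parent; but in verse, for
example, a sentence routinely runs across a line break, so the metrical
and syntactic structures overlap rather than nest. The second fragment
below is deliberately ill-formed -- a sketch of the problem, not of any
real encoding:

```xml
<!-- An OHCO document can express the metrical lines: -->
<l>... the end of one sentence. And the start</l>
<l>of another that runs on past the break ...</l>

<!-- But trying to mark the sentence <s> as well produces
     overlap, which is not well-formed XML: -->
<l>... the end of one sentence. <s>And the start</l>
<l>of another that runs on past the break ...</s>
```

A strict hierarchy forces the encoder to privilege one structure and
fragment the other (the TEI Guidelines discuss workarounds such as
milestone elements and segmentation); to the critic, that forced choice
is precisely what makes the model inadequate to the text.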

In the digital humanities we are sometimes overly impressed by the
portability of our methods and tools. We fail to see that when a method
successful in one discipline is ported into another the game it is intended
to play is different. The criteria which it must meet and the meaning of the
terms in which scholars think are different. Just as platform-independent
informational text cannot be known except by means of some platform or other
(the term itself is wrong), computing is meaningless to the scholar unless
manifested within the basic disciplinary context within which he or she is
operating. Crossing the boundary of an epistemic culture successfully
involves a complex blend of learning and teaching in what Peter Galison has
usefully called a "trading zone" -- for which see Michael E. Gorman, ed.,
Trading Zones and Interactional Expertise: Creating New Kinds of
Collaboration (MIT Press, 2010).

I think we still have a great deal to learn by studying and honouring what 
scholars in various disciplines do.

Comments?

Yours,
WM
--
Willard McCarty, FRAI / Professor of Humanities Computing & Director of
the Doctoral Programme, Department of Digital Humanities, King's College
London; Professor, School of Computing, Engineering and Mathematics,
University of Western Sydney; Editor, Interdisciplinary Science Reviews
(www.isr-journal.org); Editor, Humanist
(www.digitalhumanities.org/humanist/); www.mccarty.org.uk/
