[Humanist] 28.35 when the model becomes the object of study

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Sun May 18 22:11:19 CEST 2014


                  Humanist Discussion Group, Vol. 28, No. 35.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Sun, 18 May 2014 12:22:06 -0600
        From: Geoffrey Rockwell <grockwel at ualberta.ca>
        Subject: Studying models

Dear Willard,

Regarding your question about the study of models: if you consider metadata a model, then Franco Moretti and Matt Jockers have been studying models of literature. Here is Moretti in “Network Theory, Plot Analysis” (Distant Reading, 2013) on studying metadata: “once you make a network of a play, you stop working on the play proper, and work on a model instead. You reduce the text to characters and interactions, abstract them from everything else.” Matt Jockers has a chapter in _Macroanalysis_ (Chapter 5) that explicitly discusses the opportunities and dangers of the analysis of metadata.
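
To make the reduction concrete: Moretti’s networks record which characters interact and discard everything else in the play. A minimal sketch in Python, assuming (purely for illustration) that co-presence in a scene counts as an interaction and using placeholder character names:

    from collections import Counter
    from itertools import combinations

    # Hypothetical input: each scene reduced to the set of characters
    # present in it (names here are only placeholders).
    scenes = [
        {"Hamlet", "Horatio", "Marcellus"},
        {"Claudius", "Gertrude", "Hamlet"},
        {"Hamlet", "Horatio"},
    ]

    # Weighted edge list: two characters are linked once for every
    # scene they share. This edge list *is* the model; the play's
    # language has been abstracted away.
    edges = Counter()
    for scene in scenes:
        for pair in combinations(sorted(scene), 2):
            edges[pair] += 1

    for (a, b), weight in edges.most_common():
        print(f"{a} -- {b} ({weight} shared scene(s))")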

Going back further, we should consider John Smith’s theory of how computers can be used to study literature in “Computer Criticism,” Style XII.4 (1978): 326-56. My reading is that he proposes we can use algorithms or manual encoding to create layers that represent structures in the text. These layers would be like the layer of imagery he extracts and discusses in “Image and Imagery in Joyce’s Portrait: A Computer-Assisted Analysis,” in Directions in Literary Criticism: Contemporary Approaches to Literature, eds. Stanley Weintraub and Philip Young (University Park, PA: The Pennsylvania State University Press, 1973), 220-27. Smith doesn’t call these layers models, but I think they are a form of surrogate that can be studied and compared with other surrogates. In “Computer Criticism” he includes visualizations of extracted features that illustrate some of the innovative ways he was modelling texts in the 1970s.
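
Smith’s actual encoding scheme is not reproduced here, but the idea of a layer as a studiable surrogate can be sketched as stand-off annotation: a list of spans, kept apart from the text, marking where a structure (here, imagery) occurs. The text and offsets below are invented for illustration:

    # Hypothetical stand-off layer: spans of a text labelled with a
    # structure of interest, kept separate from the text itself so
    # that layers can be extracted, compared, and visualized on their own.
    text = "The fire rose like a bird against the grey water."
    imagery_layer = [
        (4, 8, "fire"),    # (start, end, label), character offsets
        (21, 25, "bird"),
        (38, 48, "water"),
    ]

    for start, end, label in imagery_layer:
        print(f"{label}: {text[start:end]!r}")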

For that matter, if we go back to T. C. Mendenhall’s article in _Science_, “The Characteristic Curves of Composition” (1887), he “proposed to analyze a composition by forming what may be called a ‘word spectrum,’ or ‘characteristic curve,’ which shall be a graphic representation of an arrangement of words according to their length and to the relative frequency of their occurrence” (p. 238). These manually computed curves could then be compared with one another as models of the writing styles of different authors.
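
The curve Mendenhall describes is easy to reconstruct today. A minimal sketch in Python, plus an assumed numerical way of comparing two curves (Mendenhall himself compared his plotted curves by eye):

    import re
    from collections import Counter

    def characteristic_curve(text):
        """Relative frequency of each word length: Mendenhall's 'word spectrum'."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(len(w) for w in words)
        total = sum(counts.values()) or 1
        return {n: counts[n] / total for n in sorted(counts)}

    def curve_distance(a, b):
        """Sum of absolute differences between two curves (an assumed
        comparison metric, not Mendenhall's)."""
        return sum(abs(a.get(n, 0) - b.get(n, 0)) for n in set(a) | set(b))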

Perhaps I am stretching what you consider a model, but I believe there is a long tradition of using a combination of manual and automatic methods to take the measure of a text so as to produce a surrogate that can be studied, manipulated, visualized, and compared.

Yours,

Geoffrey Rockwell




