[Humanist] 28.36 when the model becomes the object of study

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Tue May 20 01:09:18 CEST 2014


                  Humanist Discussion Group, Vol. 28, No. 36.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (39)
        Subject: when the model becomes the sole object of study

  [2]   From:    Desmond Schmidt <desmond.allan.schmidt at gmail.com>         (74)
        Subject: Re:  28.35 when the model becomes the object of study

  [3]   From:    "Jan Rybicki" <jkrybicki at gmail.com>                        (4)
        Subject: RE:  28.35 when the model becomes the object of study


--[1]------------------------------------------------------------------------
        Date: Mon, 19 May 2014 06:33:15 +1000
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: when the model becomes the sole object of study


Responses to my question about studying models have, if I am 
understanding them aright, usefully illuminated a blurry area I was not 
looking for but can certainly use. Yes, when one builds a model there 
are stretches of time when one's focus is on that model, and those 
stretches lengthen. Conventional wisdom on modelling insists that one 
must never forget one is dealing with a model, not with reality 
(conceived as something other than a model, unconstructed, "out there" 
to be studied, solid enough to bruise your toe if you kick it or, as 
Hacking says, real enough to be sprayed).

But I was asking about any situations in which the constructed model 
takes over more or less completely from the modelled object -- no 
cycling back to the original to check things out, no comparing to see 
how well the modelling has approximated reality, since the model has 
become the studied reality. There are certainly situations in the social 
sciences in which details of that which is modelled cannot be observed 
directly. Note, however, that a friend of mine at the Santa Fe Institute, 
a mathematician who studies complex systems, says that these days 
"we use both": such simulations and analytic methods that keep the 
object of study in central vision.

(Let us beware of semantic spread and the consequent loss of meaning 
when "model" comes to mean anything at all: a concept, an argument, etc. 
For our purposes, especially for mine here, let's confine "model" to 
something made of software or of other Lego-, Meccano- or Tinkertoy-like 
components: something that runs and can be manipulated.)

Let us say that hard work has won you total confidence in your model. 
Everything that can be known about the object is in the model. You're 
confident of that. Let's say you're right. But then, on the basis of 
this model, you can make inferences otherwise impossible. These 
inferences, let us say, check out, make sense, hold up. They become 
what you know.

Any examples of that happening in the humanities and interpretative 
social sciences? There are loads of examples in the natural sciences.

I don't think this is an hypothesis about alternative realities, 
parallel worlds and the like, though it would seem very like counterfactual 
history, for example. It would seem to be about an assimilation of 
the computational that enlarges rather than contracts our intellectual life.

More?

Yours,
WM
-- 
Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London, and Research Group in Digital
Humanities, University of Western Sydney


--[2]------------------------------------------------------------------------
        Date: Mon, 19 May 2014 07:10:35 +1000
        From: Desmond Schmidt <desmond.allan.schmidt at gmail.com>
        Subject: Re:  28.35 when the model becomes the object of study
        In-Reply-To: <20140518201119.66A636343 at digitalhumanities.org>


Hi Geoffrey,

I realise that everyone has a different definition of a generic word like
'model' but going back to Willard's original question:

>Does anyone know of computationally inflected research in the humanities
>or social sciences in which the model has itself become the object of
>study because from that model may be inferred knowledge about the
>original artefact that cannot be obtained directly from it?

I understood this somewhat differently from how you seem to be taking
it. I don't regard the model as a 'surrogate' but as a template for the
surrogate: a class, not an instance. 'Modelling' is often talked about
in the context of finding out how to mark up a text, to determine which
codes will be needed to express what the editor wishes to record. But a
model is not the actual codes per se. Studying a surrogate is like studying
an actual artefact through the filter of its representation; studying a
model is trying to understand what makes a text tick. Is this, perhaps, a
distinction worth making?
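
To put the distinction in crude, hypothetical code (a toy of my own, not
drawn from any actual encoding project): the model is a template of the
categories an editor may record; a surrogate is one text encoded under it.

from dataclasses import dataclass, field

@dataclass
class TextModel:
    """The model: a class/template of what can be recorded at all."""
    allowed_tags: set = field(default_factory=lambda: {"person", "place", "date"})

    def encode(self, text, annotations):
        """Produce a surrogate: this text filtered through the model's categories."""
        kept = [(tag, value) for tag, value in annotations
                if tag in self.allowed_tags]
        return Surrogate(text=text, markup=kept)

@dataclass
class Surrogate:
    """A surrogate: one instance, a particular text under a particular model."""
    text: str
    markup: list

# Hypothetical usage:
# model = TextModel()
# s = model.encode("Bloom walked through Dublin on 16 June 1904.",
#                  [("person", "Bloom"), ("place", "Dublin"),
#                   ("date", "16 June 1904")])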

Desmond Schmidt
Queensland University of Technology

On Mon, May 19, 2014 at 6:11 AM, Humanist Discussion Group <
willard.mccarty at mccarty.org.uk> wrote:

>                   Humanist Discussion Group, Vol. 28, No. 35.
>             Department of Digital Humanities, King's College London
>                        www.digitalhumanities.org/humanist
>                 Submit to: humanist at lists.digitalhumanities.org
>
>
>
>         Date: Sun, 18 May 2014 12:22:06 -0600
>         From: Geoffrey Rockwell <grockwel at ualberta.ca>
>         Subject: Studying models
>
> Dear Willard,
>
> Regarding your question about the study of models - if you consider
> metadata as a model then Franco Moretti and Matt Jockers have been studying
> models of literature. Here is Moretti in “Network Theory, Plot Analysis”
> (Distant Reading, 2013) about studying metadata, “once you make a network
> of a play, you stop working on the play proper, and work on a model
> instead. You reduce the text to characters and interactions, abstract them
> from everything else.” Matt Jockers has a chapter in _Macroanalysis_ that
> explicitly discusses the opportunities and dangers of the analysis of
> metadata (Chapter 5.)
>
> Going back further we should consider John Smith’s theory of how computers
> can be used to study literature in "Computer Criticism." STYLE XII.4
> (1978): 326-56. My read of this is that he proposes that we can use
> algorithms or manual encoding to create layers that represent structures in
> the text. These layers would be like the layer of imagery that he extracts
> and discusses in "Image and Imagery in Joyce's Portrait: A
> Computer-Assisted Analysis." Directions in Literary Criticism: Contemporary
> Approaches to Literature. Eds. Weintraub, Stanley and Philip Young.
> University Park, PA: The Pennsylvania State University Press, 1973. 220-27.
> Smith doesn’t call these models, but I think they are a form of surrogate
> that can be studied and compared to other surrogates. In “Computer
> Criticism” he shows some visualizations of extracted features that show
> some of the innovative ways (in the 1970s) he was modelling texts.
>
> For that matter, if we go back to T. C. Mendenhall’s article in _Science_
> “The Characteristic Curves of Composition” (1887), he “proposed to analyze
> a composition by forming what may be called a ‘word spectrum,’ or
> ‘characteristic curve,’ which shall be a graphic representation of an
> arrangement of words according to their length and to the relative
> frequency of their occurrence." (p. 238) These manually computed curves
> could then be compared as a way of comparing models of the writing style of
> authors.
>
> Perhaps I am stretching what you consider a model, but I believe there is
> a long tradition of using a combination of manual and automatic methods to
> take the measure of a text so as to produce a surrogate that can be
> studied, manipulated, visualized, and compared.
>
> Yours,
>
> Geoffrey Rockwell
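
To make the "characteristic curve" Mendenhall describes above concrete,
here is a minimal Python sketch -- a hypothetical illustration of mine,
not code from any of the works cited -- of a text reduced to a word-length
spectrum that can then be studied and compared in place of the text itself:

from collections import Counter
import re

def characteristic_curve(text):
    """Relative frequency of each word length: Mendenhall's 'word spectrum'."""
    words = re.findall(r"[A-Za-z']+", text)
    counts = Counter(len(w) for w in words)
    total = sum(counts.values()) or 1
    return {length: n / total for length, n in sorted(counts.items())}

def curve_distance(a, b):
    """One simple way to compare two curves: total absolute difference."""
    return sum(abs(a.get(l, 0.0) - b.get(l, 0.0)) for l in set(a) | set(b))

# Hypothetical usage: once the curves are built, the comparison is made
# entirely between these surrogates; the texts themselves never re-enter it.
# curve_a = characteristic_curve(open("author_a.txt", encoding="utf-8").read())
# curve_b = characteristic_curve(open("author_b.txt", encoding="utf-8").read())
# print(curve_distance(curve_a, curve_b))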


--[3]------------------------------------------------------------------------
        Date: Mon, 19 May 2014 07:16:39 +0200
        From: "Jan Rybicki" <jkrybicki at gmail.com>
        Subject: RE:  28.35 when the model becomes the object of study
        In-Reply-To: <20140518201119.66A636343 at digitalhumanities.org>


Dear Willard,

I agree with Geoffrey (who wouldn't?). The one obvious model that we have 
in literary studies is what we know (or think we know) about literature 
and literary history from the tons of paper covered with traditional 
interpretation, categorization, periodization... Matt Jockers says it very 
clearly in his "Macroanalysis" when he calls this traditional knowledge 
"anecdotal", in the sense that our view of, say, the Victorian novel has 
so far been based on several dozen "canonical" books by "canonical" 
writers rather than on the whole body of novel-writing of that time 
(thousands of titles). I don't think he's making fun of extant 
scholarship; what he wants is a re-evaluation of the model now that we 
have the tools (or are starting to believe we might have the tools) to 
take a broader (if more distant) look.

BTW, it never ceases to amaze me how well traditional literary 
periodization seems to do in tests of its MODEL based on such 
"insignificant" features as the frequencies of the most frequent words.
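
For illustration only -- a bare-bones, hypothetical Python sketch, not the
code actually used in such experiments -- a Burrows-style Delta over
most-frequent-word frequencies looks roughly like this (the file paths and
the n_mfw cut-off are placeholders):

from collections import Counter
import re
import statistics

def relative_freqs(text):
    """Relative frequency of every word token in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return {w: n / total for w, n in Counter(words).items()}

def burrows_delta(a, b, corpus, n_mfw=100):
    """Mean absolute difference of z-scored frequencies over the corpus's
    n_mfw most frequent words (a Burrows-style Delta)."""
    overall = Counter()
    for freqs in corpus:
        overall.update(freqs)   # rank words by summed relative frequency
    mfw = [w for w, _ in overall.most_common(n_mfw)]
    delta = 0.0
    for w in mfw:
        values = [freqs.get(w, 0.0) for freqs in corpus]
        mean = statistics.mean(values)
        sd = statistics.pstdev(values) or 1.0   # guard against zero spread
        delta += abs((a.get(w, 0.0) - mean) / sd - (b.get(w, 0.0) - mean) / sd)
    return delta / len(mfw)

# Hypothetical usage (paths is a list of plain-text files of your choosing):
# corpus = [relative_freqs(open(p, encoding="utf-8").read()) for p in paths]
# print(burrows_delta(corpus[0], corpus[1], corpus))

The point, for this thread, is that once the frequency tables are built,
every comparison happens between these models alone; the novels themselves
never re-enter the calculation.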

Best,
Jan Rybicki




