[Humanist] 27.403 peer review

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Fri Oct 4 10:37:43 CEST 2013

                 Humanist Discussion Group, Vol. 27, No. 403.
            Department of Digital Humanities, King's College London
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    Adam Crymble <adam.crymble at gmail.com>                     (77)
        Subject: Re:  27.395 peer review

  [2]   From:    "Nowviskie, Bethany (bpn2f)"                              (73)
                <bpn2f at eservices.virginia.edu>
        Subject: Re:  27.395 peer review

  [3]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (48)
        Subject: realities of peer-reviewing and the prejudice against edited
                 collections

        Date: Thu, 3 Oct 2013 08:30:39 +0100
        From: Adam Crymble <adam.crymble at gmail.com>
        Subject: Re:  27.395 peer review
        In-Reply-To: <20131003055220.6BCA639D4 at digitalhumanities.org>

Regarding peer review: The Programming Historian 2 uses a semi-transparent
review system. The reviewers know who the author is, and once the piece is
published, the author finds out who the reviewers were. We do that mostly
because we believe reviewers should be acknowledged for their contribution
in the scholarly process.

But we're a bit different in the sense that we believe the role of the
reviewer is to help the author improve the piece to the point where it will
be accepted. They act as guides and mentors in a system designed to improve
work rather than gate-keep it. We find that makes for a very collegial
atmosphere in which we hope both reviewers and authors feel a part of the
community of scholars helping to build resources for others to use and
learn from. If someone were acting vindictively, my role as editor would
be to remove them from that community, and I'd be quick to do so.

Adam Crymble
Editor, Programming Historian 2
acrymbl at uwo.ca

On Thu, Oct 3, 2013 at 6:52 AM, Humanist Discussion Group <
willard.mccarty at mccarty.org.uk> wrote:

>                  Humanist Discussion Group, Vol. 27, No. 395.
>             Department of Digital Humanities, King's College London
>                        www.digitalhumanities.org/humanist
>                 Submit to: humanist at lists.digitalhumanities.org
>         Date: Wed, 02 Oct 2013 11:06:54 -0400
>         From: "David L. Hoover" <david.hoover at nyu.edu>
>         Subject: Re:  27.391 peer review
>         In-Reply-To: <20131002062659.7AFE13A14 at digitalhumanities.org>
>
> Dear James, Barbara, and all,
>
> The single blind process we have now acknowledges the fact that the
> number of submissions has probably reached the point where we can't
> check to ensure that each one has been made as anonymous as possible. It
> also seems to me to be based on the view that blind reviews are more
> likely to be candid than ones where the author can see who wrote the
> review. It has the negative characteristic that authors with high status
> or a penchant for revenge may have abstracts accepted that are of
> significantly lower quality than those by unknown authors. And, as
> Barbara suggests, blind reviews allow for the possibility of negative
> bias and allow the reviewer to hide behind the anonymity of the process.
> A totally open process does, as Barbara says, require that reviewers own
> their own views, but it has the corresponding negative characteristic
> that reviewers may be intimidated by some authors, and may, in general,
> tend to write bland, mid-range reviews that make the program committee's
> job much more difficult. Again, authors with strong reputations may have
> relatively weak abstracts accepted. I have sometimes given an author
> whose talks are always of the highest quality a set of rankings that are
> not completely justified by the quality of the abstract alone, but I
> think this is, in fact, a justifiable practice. Further, it assures that
> those who want to reward their friends and punish their enemies can do
> so, as long as they do it openly (though it won't necessarily be obvious
> that they are doing it).
>
> The process last year and this, by allowing authors to respond to the
> reviews, seems to respond to some of the weaknesses of the single blind
> review, by allowing the author to counter an unfair or biased
> assessment. Allowing reviewers to opt out of reviewing some papers also
> seems like a positive step, and it at least assures that reviewers don't
> work on abstracts that they have no interest in. I am not as sure about
> the benefits of allowing reviewers to bid on papers, as this is also
> obviously open to abuse and collusion (If you give my paper a great
> review, I'll do the same for yours; oh, that jerk has an abstract this
> year, I'll fix her; he did me a good turn, so I'm going to give his
> crappy abstract a high score).
>
> All the processes have competing weaknesses and strengths. In a perfect
> world, I would prefer a double-blind process with an opportunity for the
> author to respond, but I realize that is probably not possible. This
> issue seems like one that should be taken up by the entire community and
> discussed at AGMs and in the Conference Coordinating Committee.
>
> David Hoover

        Date: Thu, 3 Oct 2013 23:33:12 +0000
        From: "Nowviskie, Bethany (bpn2f)" <bpn2f at eservices.virginia.edu>
        Subject: Re:  27.395 peer review
        In-Reply-To: <20131003055220.6BCA639D4 at digitalhumanities.org>

Further on the topic of peer review for the DH conference:

I think David offers an excellent overview of the options and concerns that have faced us in past years and that have been the subject of much discussion among Program Committee members and in the Steering and Conference Coordinating committees of ADHO.

Those conversations continue, and the kind of community feedback we see here -- as well as the feedback I received and passed on as last year's PC chair -- is most helpful!

I just want to add a quick note about the "bidding phase" David mentions, which we experimented with for DH 2013 and which I understand will be an option this year as well.

First, it is very unfortunately named in the ConfTool system.  It would be more accurate to say that this phase is about indicating conflicts of interest, on the one hand -- some of which would not otherwise be evident from the author/reviewer metadata in the system -- and, on the other hand, indicating more nuanced areas of expertise than may be captured through our subject taxonomy. People participating in the "bidding phase" should not really be indicating the papers they WANT to review, so much as those they feel especially qualified or unqualified to review. 

It is also important to understand that the "bids" are not invisibly and unthinkingly acted on. It is the responsibility of the PC chair to evaluate information gained through that process and use both her own judgment and the assistive interface of ConfTool, which helps take several other factors into account, to make review assignments. 

You may be interested to know that one side effect of last year's attempt to engage reviewers more deeply in the process from start to finish was a halving of the number of late and missing reviews, and a marked (not to say miraculous) decrease in the number of flippant, too-brief, or downright rude reviews received -- all banes of Program Committees past.

No peer review system is perfect! But I believe they are perfectible, and it's
important that we keep talking about and refining our processes each year.

Thanks to James for opening this great discussion.  -- Bethany

Bethany Nowviskie, MA Ed., Ph.D.
nowviskie.org | scholarslab.org | clir.org | ach.org | library.virginia.edu

Director of Digital Research & Scholarship, University of Virginia Library
President of the Association for Computers & the Humanities
Special Advisor to the UVa Provost and CLIR Distinguished Presidential Fellow 

        Date: Fri, 04 Oct 2013 09:20:22 +0100
        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
        Subject: realities of peer-reviewing and the prejudice against edited collections
        In-Reply-To: <20131003055220.6BCA639D4 at digitalhumanities.org>

As editor of a journal that normally does a double-blind review on its 
articles, I've been in many situations that required interventions and 
variations of various kinds. One such required me to write the 
introductory part of a brilliant, imaginative article in order to broker 
a delicate agreement between a prominent but unreasonably severe 
reviewer and an author so nervous about the article that he threatened 
to withdraw it. Commissioned articles from very prominent people I've 
handled by asking them to name two reviewers from whom they would most 
like to hear. That's worked every time, and very well. My own 
experiences as an author of reviewed articles have been exceedingly 
positive, except for the first time, when a jealous senior professor, 
whose identity was clear, did a hatchet job on me. Devastating at the 
time, but now it's only an amusing anecdote. I've heard of truly 
horrendous cases. But mostly I regard the mechanism as good for all.

We do need to recognize that the need for such reviewing varies with the 
discipline. There are some areas of physics, I have been told, that 
don't need it at all, simply because no one would work in the area 
except someone who knew what he or she was doing. The Sokal affair 
taught us that some disciplines are wide open to trickery whatever the 
review process -- because, it seems, anything goes. Digital humanities 
presents some problems of its own, or I should say, opportunities to 
rethink the process. But others have written on this, so I won't.

What concerns me deeply is the strong prejudice against contributions to 
edited collections. It is said far and wide in the UK at least that such 
contributions are automatically suspect, and some say should be, or are, 
downgraded automatically for assessment purposes, because they aren't as 
rigorously peer-reviewed as journal articles. Picture this: I and my 
mates get together and in a friendly way, without much internal 
criticism, write up some chapters and make a book, which the publisher, 
not knowing any better, puts into print. I suppose that does happen, but 
the suspicion, I think, is largely unwarranted. I've gathered with my 
mates and put together a volume, and then gone on to be quite demanding 
that poorly written contributions be revised, and revised again. As a 
scholar I depend all the time on edited collections and go to such 
preferentially to find magisterial surveys of activities in disciplines 
I need to find out about. As an author I contribute to such collections, 
our Research Excellence Framework be damned.

I would like to know who, without widespread consultation, has quietly 
decided in effect to make a whole type of publication venue impractical 
for those who need to be very concerned with their rating for the REF or 
other such exercises. This we need most strongly to oppose.


Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
Humanities, King's College London, and Research Group in Digital
Humanities, University of Western Sydney
