Marquette University Digital Projects Librarian, and my wife, Ann Hanlon, recently sent me a link to this story from the August 24, 2010 New York Times. “See what you think,” she suggested.
Here’s what I think.
In “Scholars Test Web Alternative to Peer Review,” Patricia Cohen reports on emergent efforts to bring academic publishing into the digital age: 1) to take advantage of the kind of crowd-sourcing through which other kinds of online knowledge databases (e.g., Wikipedia), archives (e.g., Flickr), and networks (e.g., Facebook) have been built, and then 2) to rethink the nature of what kind of intellectual contributions should fall under heading of academic work that could be “counted” for things like credit, promotion, and tenure.
The immediate context of Cohen’s article is a recent attempt by the Shakespeare Quarterly, in partnership with MediaCommons, to open up their peer review process to public input:
Mixing traditional and new methods, the journal posted online four essays not yet accepted for publication, and a core group of experts — what Ms. Rowe called “our crowd sourcing” — were invited to post their signed comments on the Web site MediaCommons, a scholarly digital network. Others could add their thoughts as well, after registering with their own names. In the end 41 people made more than 350 comments, many of which elicited responses from the authors. The revised essays were then reviewed by the quarterly’s editors, who made the final decision to include them in the printed journal, due out Sept. 17.
Hey! That’s today! And though the “Shakespeare and New Media” issue doesn’t yet seem to be out (the Quarterly’s home page says it “will be out soon”), you can find a description of the project and the process, and the articles and commentary themselves, here.
OK. So there are a couple of things we could say about this experiment, and then more generally about the brave new world it would call into being. First, it’s worth remarking that from the standpoint of the authors themselves, those who submitted work for review, the results seem to have been pretty positive. Cohen writes:
The first question that Alan Galey, a junior faculty member at the University of Toronto, asked when deciding to participate in The Shakespeare Quarterly’s experiment was whether his essay would ultimately count toward tenure. “I went straight to the dean with it,” Mr. Galey said. (It would.)
Although initially cautious, Mr. Galey said he is now “entirely won over by the open peer review model.” The comments were more extensive and more insightful, he said, than he otherwise would have received on his essay, which discusses Shakespeare in the context of information theory.
And his experience suggests a real value in open-sourcing commentary for the “review” part of peer review: helping a writer assess and revise their own work, much as academic blogs and other currently non-peer-reviewed (and currently inadmissible-for-promotion) work already do. Making this aspect of review public would seem to turn it into a genuine process and, not incidentally, to make academic work more broadly public: a way, that is, a) to actually find more readers for academic work, and to find them much more immediately than is usually the case with peer-reviewed work; b) to subject the work to fact-checking and argument-testing; and c) to get useful feedback for revising the work on its way to publication and for thinking about directions one might take in future work.
These are all things that traditional blind peer review simply doesn’t do very well. And as the article points out, a version of this kind of pre-publication access and feedback already works, successfully, in some academic fields, such as economics. Kathleen Fitzpatrick of MediaCommons has similarly drafted and revised work, including chapters of her 2009 book, Planned Obsolescence, in public view through online means. (And, on a more radical scale, Lawrence Lessig put up his 1999 book, Code and Other Laws of Cyberspace, on a wiki platform and let readers discuss and revise it. After doing a final edit himself, Lessig published the result in 2006 as Code: Version 2.0.)
But the experiment seems less convincing to me in terms of the “peer” dimension of peer review — the dimension that is at the heart of academic freedom as Kant imagined it. And it’s important to remember that this imagining of academic freedom is about a “freedom from,” not a “freedom to.” Not the freedom to say and write whatever the hell you want, but the freedom to say and write that which you can convince your peers is worth saying and writing, free from external pressures: from the Church, from the State, and (as we would need to add for the 21st century), from the Market.
While Cohen’s article is right to point out that blind peer review as typically practiced can often lead to impact-muting delays, intellectual sloppiness, and all-around non-accountability and unprofessionalism, blind peer review still, nevertheless, serves the purpose of taking some of the politics out of publishing (and thus credentialing) decisions. And, beyond the potentially distorting pressures of academics simply wanting to be popular (not entirely bad, I suppose), it’s pretty easy to imagine certain kinds of work put up for open commentary getting swamped and dragged down by politically motivated, tightly orchestrated commenting campaigns. It’s not that external comments can’t be smart, but that they will inevitably come with all kinds of motivations other than just the pursuit of knowledge.
But, all in all, though I’m still inclined to think that it’s more important to make the end publication of peer-reviewed research open access (i.e., free and accessible to anyone with an internet connection) than to open up the peer-review process itself, I think it’s an experiment worth continuing to think about.