by EILEEN JOY
As some of you may know already, postmedieval is about halfway through a 2-month open "crowd review" of its forthcoming special issue on Becoming-Media, co-edited by Jen Boyle and Martin Foys, and you can see what has been happening with that, and also participate yourself, here:
Crowd Review: Becoming-Media Issue
In all honesty [and yes, I know I am not an impartial judge], I have been thrilled with how this crowd review has been progressing thus far--if you follow the link above, you can see for yourself that, in just under four weeks, we have had a fairly robust response, with really thoughtful and expansive comments from a wide variety of commentators [the issue's editors, junior faculty, more senior faculty, graduate students, and, one imagines, some independent scholars]. Of course, we have to reflect that the essays were solicited in advance by the issue's two editors and received some expert review from them before emerging into the crowd review context, and some of the essays may have received comments in other contexts prior to being received by Jen and Martin [I know, for example, that Whitney Trettien blogged and tweeted portions of her essay in the past and also maintains a public wiki where she keeps all of her notes, annotations, and bibliography relative to her various writing projects]. I belabor this point because it is not the mission of this crowd review to ask potential reviewers to assess whether or not these essays are worth publishing. To a certain extent, that has already been decided by the issue's editors, although, just as with an edited volume of essays, all of the authors involved understand that the crowd review process does serve as a form of "external" review of their work for this special issue of the journal, and I assume they will revise accordingly with Jen and Martin's expert guidance [but also with their own sense of which comments best serve the purposes of their separate essay projects: in other words, the authors still maintain sole control of the overall direction and content of their individual essays]. But something really different and importantly valuable is also going on here, and it is worth reflecting upon further.
BUT: before reflecting on what I think is so valuable about this process, I would like to also say that, as I think everyone knows, we have certainly had, in the last couple of years, a LOT of online conversations and debates over the traditional peer review process within the humanities [anonymous, double-blind, with "expert" reviewers chosen in advance by journal and book editors and publishers, and so on], and one of the things that has really struck me in these debates is how often they devolve into an either/or situation: either we should stick with traditional peer review [or at least some of its components] for x, y, and z reasons, or we should jettison traditional peer review altogether in favor of something completely new and different because the traditional system is supposedly hopelessly "broken." And then we witness endless anecdotal evidence proffered by both sides that either demonstrates:
1. traditional peer review still works well [and of course it has worked well for many, many people as well as the humanities more broadly: who would argue with this? it's not as if the system itself is somehow insidious at the core, although it can occasionally go awry, but as Cathy Davidson has also recently shared, anonymous review can actually protect against the insidious prejudices of the old-guy closed-club networks of the past: Bonnie Wheeler makes the same point in her recent article "The Ontology of the Scholarly Journal and the Place of Peer Review," Journal of Scholarly Publishing (April 2011): 307-323]; OR:
2. peer review, traditionally constructed, can actually be harmful to individual careers and the advancement of scholarly knowledge [and of course everyone has received at least one lousy and misguided reader's report, and yes, some overly conservative and/or conformist editors and reviewers may be acting as disciplinary gatekeepers, but that does not mean the whole enterprise of traditional peer review can only result in unethical reviewing practices, although, yes, they happen: there will always be little intellectual mafias and they will continue their bullying whether in open or closed systems, but at the same time, there are many advanced scholars who have given selflessly of themselves in offering fair and deeply engaged, yet uncredited, reviews].
The bottom line for me is that we try, as much as possible, not to lapse into the "if it ain't broke, don't fix it" arguments [after all, everything can be improved, everything!], and that we follow Bonnie Wheeler's suggestion that peer review, in some sort of "revised format," might now "provide a transparent activity that reflects one's desire as well as obligation to push the limits of vivid intellectual work in one's field" ("The Place of Peer Review," p. 316). What this means, to me [and I think/hope to Bonnie as well], is that we recognize better that peer review [and more largely, humanistic scholarship] is a shared and collaborative practice that requires the mutual participation of authors and reviewers [authors have to both expect thorough and ethical review of their work, but also agree to participate as reviewers themselves: this is mainly a gift economy, after all, and we need to recognize that better], or else the humanities itself could be in some trouble.
Bonnie herself draws attention in her article to what she sees as the admirable idea of a kind of e-commune, or "scholarly commons," proposed by Kathleen Fitzpatrick, in which everyone participates in the process of producing and evaluating scholarly work, and she also highlights Fitzpatrick's very sophisticated proposals for open peer review, which, I might add, inspired postmedieval's crowd review, especially Fitzpatrick's expansive re-defining of what we mean by "peer" [to include, not just the specialized experts of one's narrow sub-fields, but also members of the broader intellectual community, both within and outside the university proper] and also her idea that more open processes of peer review might help us to better model Bill Readings' University of Thought, where "Thought does not function as an answer but as a question" [The University in Ruins, p. 159].
You can pile on the evidence in both directions [keep peer review the way it is or trade it in for something completely different] and what it adds up to for me is very simple: we need peer review, but it can be improved. For me, the virtues of the crowd review that postmedieval is engaging in right now are the following:
- The process is not just open and transparent [thereby ensuring, I really believe, better behavior on the part of reviewers who are, in a sense, performing their function in a public space, or "commons"] but is, even more importantly, collaborative: authors and reviewers speak directly to each other and can even converse back and forth on specific points raised in the review comments; one cannot stress enough that this is a mutually beneficial exchange, one in which both the author and the reviewer work toward enlarging each other's domain of thought and expertise.
- Instead of receiving just one or two sets of formal external review comments, which typically take two or more months for an author to receive, and with little bargaining room as regards accepting or rejecting these comments [which, in traditional peer review, are literally the only mechanism by which anything gets published at all], the crowd review begins with the premise that the work being reviewed already has a legitimate place at the table of published-if-still-in-progress scholarship [the crowd review itself is a form of publication] and the author receives multiple sets of comments, often very quickly, which she can then sift through for the most meaningful and helpful criticisms and/or suggestions for further research and thought. The crowd review, in other words, models scholarship as a richly co-productive and inter-subjective process, not just an end product that supposedly leapt out of one person's mind.
- Because the crowd review throws its net as widely as possible, in terms of individual reviewers but also fields and disciplines, it can involve reviewers from outside one's discipline, which actually helps all of us to be better communicators of our expertise and subject areas to a wider audience, both within and outside of the university proper. It helps us to make the case that what we do "in here" [within the university, within academic journals, within scholarly books, within conferences, and the like] has something to say to the larger, public intellectual community, the members of which are always more varied and dispersed than we often imagine.
- The crowd review does not distinguish between, or hierarchize, specialist and non-specialist comments, faculty and graduate student comments, and so on. The crowd review, therefore, models a learning process in which you never know where your best ideas [or advice for revision] might come from. Everyone has something to teach someone else. Yes, some "expert" reviewers might have access to certain forms of knowledge that are "hard-won" and not readily known to everyone else [and potentially very helpful to the author who may be wishing to succeed within the ambit of a certain specialized audience], but the bottom line is that the crowd review models a domain of knowledge/learning in which "rank" or "location" within the academy is beside the point.
- The crowd review unfolds and proceeds in a non-traditional space, the interwebs, where those who are not participating as reviewers can at least look in and see how the process works and learn from that; in short, the crowd review offers a model of processual, collaborative scholarship which everyone, vocally or silently present, can gain something from, even if it is just to learn how to professionally critique someone else's work.
- The crowd review makes visible what has always been true about the intellectual and scholarly life, but which is often only quietly articulated in the notes of acknowledgment in articles and books: we think and work together; our brains are already crowd-sourced, so why not make that fact more tangible?
The re-forming tendencies of "webbies" (webby nodes and interstices; their dispersal and distributedness) call forth formal events as "lines of connectivity" (past, present, and future) more so than as singular "time[s] of being." In this sense, the crowd review for this issue is intricately inter-woven with the question of emergence. Rather than expending energies dictating structures that reassuringly mimic the features of traditional blind peer review, we offer an open "webby" in-time editorial process that we hope will lead to some interesting reflections on "webbies" as a historical, scholarly, editorial, intellectual, and social emergence.
Emergence. What I see as the real hope of this crowd review [which we could never duplicate with each issue of postmedieval we publish, as it is so demanding, time-wise, of so many people, and this sort of gift-labor is not an endless supply, a point Bonnie worries over in her article on scholarly publishing and peer review, cited above] is that it will help all of us to see that our scholarship is emerging all the time [and not in a linear or chronological fashion, either]: it can never really be only an end-product [the article or the book or database or edition or whatever]. We could never really locate the beginnings of our thought and intellectual projects, just as we could never locate their end(s). Instead of thinking of our c.v.'s as documents that reflect a list of publications that mark the discrete stages of our careers, we might reflect that the crowd review itself encourages us to remember that no idea is ever settled, and no article or piece of scholarly work is ever "finished" [where did it even really begin?], and more importantly, we do not work alone, in solitary cells [although sometimes our studies and offices feel that way], in opposition to each other [hoping to outdo each other, race each other to some finish line, trump each other's arguments/reasoning, "scoop" each other on new methods and texts, etc.] but rather collectively bring ideas to the surface of a shared ground and light.
This does not mean we cannot be tough critics of each other's work--of course we can--but oftentimes, in my experience, calls for "tough" criticism are accompanied by any number of pugilistic metaphors. If we can set aside the [clearly fallacious yet often supported] idea that the university serves as a site of hierarchizing knowledges [that kind of "duke it out" with each other] and embrace instead the idea that the university's mission should be to seek to democratize and enlarge what it is possible to think, then I think we move closer to the heart of what we might call a heterotopic multiversity, one that might attend more deeply to the question of space, which is almost more critical than time when it comes to our personal and scholarly lives. What room do we have to think [to live, also], and how can we multiply and re-dimensionalize and extend the spaces within which it becomes, for the largest number of persons possible, more possible to think, and to work, and to meaningfully communicate one's ideas, to be heard, and to hear in return? This is a question of personal freedom and creativity [and thus also personal happiness, and personal thriving], but also of care, of how we might work harder to care, not just for our own work [and whether it might "succeed" or "fail" according to the traditional benchmarks for determining such matters], but for the work of others whose subject matter and methodologies might even be unattractive [at first glance] to us. A more open process of peer review, then, won't be about what does or doesn't "make the mark" or whatever does or does not get published in whatever journal or book [traditional print publication might almost be beside the point, although I personally am not giving up on the print journal and print book, for a variety of reasons, and I'm interested in pursuing all sorts of "business"-minded models relative to this]; rather, it will be an opportunity to build together a larger, more expansive, and more hospitable "commons" for a more baroquely appointed house of thought, as well as a more open, thriving humanities. And who knows where that might lead?
One thing I should have added to my list of things I have found valuable in postmedieval's crowd review experiment is FLEXIBILITY, which I think has also led to more people participating. Because of the "webby" format [essentially, a weblog interface], commentators can decide to review an entire essay OR they can just comment on a portion of an essay to which they feel they can add something of value, critical or otherwise. This is immensely helpful as regards getting people to contribute labor that is, as we all know, mainly uncompensated.
And [yet again] one other thing I wish I had added to my list of things I have found valuable in postmedieval's crowd review experiment has to do with my bullet point about the diversity of reviewers. Under the aegis of traditional peer review, someone's article will most likely go out to 2 expert reviewers chosen by the editor of the journal [or book] and NOT in consultation with the author. Sometimes these expert readers are culled from very small groups of scholars who will sometimes even admit they all know each other pretty well and, regardless of the double-blind status of the review, they kind of know whose work they are reading. Some reviewers, because they are leading experts in very small sub-fields, are also, maybe, overburdened with reviewing requests. An author may know that there are certain reviewers who will hate their work, but they're not allowed to say so or to recommend against those reviewers [for the most part; I know there are exceptions]. At the same time, a lot of us are asked to review articles for which we are not really experts at all. So, I'll get an article that pertains to medieval literature [I'm a medievalist ... okay] but it's about Margery Kempe, and my specialty is Old English poetry, and sure, I also specialize in so-called "presentist" medieval studies, so I get sent things that pertain to "encounters" between past and present, etc. Journal editors are often hard pressed, I think, to always locate the perfect reviewers for every single article and I imagine they sometimes go a little bit afield to find reviewers. This can be a bad thing [and we can always say "no, thanks," of course, but I rarely do, as I believe reviewing is an important service obligation], BUT it can also be kind of good. The article on Margery Kempe that I agreed to review ended up winning second place in an important article contest in the field. The author asked permission to contact me and thanked me for my comments. I guess what I'm trying to say here is that an open crowd review opens up the horizons of peer review, considering as one sort of "expert," say, someone who works in your temporal field, but maybe not on your favorite texts, and who can nevertheless offer some invaluable advice for revisions. In the case of the Margery Kempe article, I hope the author also had a Kempe expert on board, but the bottom line point here is: we need a DIVERSITY of perspectives on our scholarship and not just from the most narrowly/overly specialized experts. Probably the best-case scenario would be one where, as the crowd review experiment is already demonstrating, I think, an author receives advice from the narrowly-defined specialist most intimately familiar with the author's subject matter and methodologies, as well as from other scholars who work within the same time period [but on different subjects], from grad. students who are typically knee-deep in all sorts of reading lists and developing specialized projects of their own, and even from scholars working in other time periods and fields who might share certain theoretical/methodological concerns with the author and can give more meta-theoretical advice. This sort of diversity is not possible in traditional peer review.
I think there is so much to be said about the "metaphor" in these processes. Eileen invokes this a few times in some very interesting ways, and I so look forward to expanding on this for the upcoming forum on peer review. Let me also say that I think the invocation of the indeterminate beginnings and endings of a scholarly project also gets to some really significant aspects of this experiment. Authors have shared with me their slight trepidation over submitting in-process drafts to a more public forum, only to be followed by delight at the energy and animation that comes with engaged responses. What does it mean, personally and professionally, when we see something come to "print" that we have publicly had a voice in early on in the process of its emergence? I also want to comment briefly on the appearance of "webby" in the vision statement.
In an email exchange a while back, Eileen mentioned "crowd review" as a more apt descriptor of our process than open peer review. I found this idea of the "crowd" to be a vital notion, at once connected to a recent PMLA article I had just read by David Theo Goldberg that re-considered Foucault's categories of ancients and moderns, and to a forthcoming critique of open peer review that described such experiments as opening the doors to a mob of un-informed public opinion. I look forward to a more extensive consideration of what we might mean by "crowds," "mobs," and "peers" in these contexts. In the meantime, it has been exciting to see and feel a meme like "crowd" morph and move in real time, under the emergent pressures of an email exchange, with feedback from a colleague, via proliferation as a vision statement, referenced in indirect ways by skeptical authors who revision the affects of the process, and amid the informed commentary of those not yet able to speak beyond the crowd professionally.
Trying to share the hybrid love:
http://www.newappsblog.com/2011/08/on-folklore-.html
-dmf