by Joshua Piker, Editor of the William and Mary Quarterly
Those of you who have read my blog posts over the last five years know that I believe wholeheartedly in transparency in the publication process. I’ve blogged about manuscript submission numbers, the seasonal fluctuation of those numbers, the time a manuscript spends in peer review, rejection rates, and the value (as I see it) of the peer review process. I did so both to demystify the process of publishing a journal article and to advocate for the efficacy of a sustained, thorough, and far-reaching discussion between author, peer reviewers, and editor.
But transparency also serves the important goals of inclusivity and equity in scholarly work—including publishing. These are key values for the OI, and for the journal.
Conversations about exposing bias in journal publishing often founder because of incomplete data—or, rather, because the data that is publicly available is incomplete. North American archaeologists, for example, have been studying gendered disparities in publications for a while now, but—as Dana Bardolph and Amber VanDerwarker put it in their 2016 Southeastern Archaeology article—“Data on submission and acceptance/rejection rates, which generally are not collected, retained, or made public by journal editors, are needed to test this issue.” Likewise, the editorial team of twelve women that has just been appointed to lead American Political Science Review notes, “Our first principle is editorial transparency. Our decision-making processes will aim to meet the highest standards of transparency with respect to key editorial issues. We will collect and make available data about our workflow as well as about the demographic composition of our reviewer pool, readership, and submitting and published authors.” Transparency is a necessary response to ingrained hierarchies and longstanding patterns of differential exclusion and preferential access.
In recent years, the OI has been collecting and assessing data about our programs and publications. So, let me lay out our numbers for the William and Mary Quarterly regarding manuscript submission, peer review, and publication for 2018, with a specific focus on gender equity and inclusivity. I highlight these topics because of the nature of the important issues raised by Lisa Wilson’s statement about Michael McGiffert’s sexual harassment of her.
In 2018, the journal received 117 manuscript submissions from 123 authors. Seven of those manuscripts were “desk accepted,” the designation we use for some recruited short forum pieces and conveners’ essays. Four of those manuscripts were authored by men and three by women. Thirty-two manuscripts were “desk rejected” and did not go out for peer review, five (16%) by women and twenty-seven (84%) by men. Seventy-eight manuscripts—with a total of eighty-four authors—went out to peer review. Of those authors, forty-three were male (51%) and forty-one were female (49%). Of the six coauthored manuscripts: two were female/male, two were male/male, and two were female/female. In total, 2018’s manuscript submissions were authored by forty-nine women (40%) and seventy-four men (60%).
It would be nice if I could end that paragraph by saying “… and we published X number of those manuscripts with a Y gender ratio among the authors,” but the nature of the publication process is such that the submission numbers for a given calendar year don’t map neatly onto what was published that year. Much of what we published in 2018, after all, was submitted in 2016 and 2017, and much of what was submitted in 2018 will be published in 2019 or 2020. In fact, there’s an article slated for the January 2020 issue that was originally submitted in 2015. (The author took the revision process very seriously.) So, rather than simply counting what the journal published in 2018, let’s look at the Quarterly’s publication record over a range of years.
For the three years from January 2017 to the end of 2019—including the articles that will appear in the forthcoming October 2019 issue—the journal’s articles have been authored by thirty-seven women (47%) and forty-two men (53%).
As for peer review, I sent seventy-eight of 2018’s manuscripts out to readers. As you know if you read my posts, I aim to provide each author with five reader reports. In pursuit of that goal, I asked 544 scholars to serve as readers, almost seven per manuscript. (That number is a bit higher than I expected. My mental formula—“Ask six to get five”—needs recalibrating.) Of those scholars whom I approached to read for the journal, 287 (53%) were women and 257 (47%) were men.
Of course, all of the above are just numbers, and flawed ones at that. After all, I’m relying on my sense of authors’ and readers’ gender identities, and I’m deploying a binary, either-or system of gender identification in a world that is finally starting to recognize fluidity and possibility. Moreover, even if the numbers that I’ve presented were 100% spot-on accurate, they wouldn’t provide The Answer to a set of challenges—to the journal and the OI, but also to the field more generally—that are not susceptible to quantification.
And yet. To the extent that numbers of this sort can help us evaluate the state of play within a field and at a given journal, they’re valuable. And to the extent that publishing numbers of this sort represents a journal’s commitment to transparency of process, that too is valuable.