This post is a repost of “Reviewer’s Choice” published in Inside Higher Ed.
“Greetings! You’ve been added to our journal’s Editorial System because we believe you would serve as an excellent reviewer of [Unexciting Title] manuscript…”
If you’re an established academic, you probably get these too. Such emails seem to be proliferating. The peer review system may still be “the best we have” for academic quality assurance, but it is vulnerable to human overload, preferences, and even mood. The result can be low-effort, late, or unconstructive reviews, and that is only after editors are lucky enough to find someone willing to review at all. There should be a better way. Here’s an idea for rethinking the reviewer allocation process.
The pressure on peer review
As the number of academic papers continues to grow, so does the refereeing workload. Scientists struggle to keep up with mounting demands to publish their own work while also accepting the thankless task of reviewing others’. In this environment, low-effort, AI-generated, and even plagiarized reviewer reports find fertile ground, feeding a vicious circle that slowly undermines the process. Peer review, the bedrock of scientific quality control, is under pressure.

Editors have been experimenting with ways to rethink the peer-reviewing process. Ideas include paying reviewers to review, distributing review tasks among the applicants themselves (as has been tried for project proposals), transparently publishing reviews as Nature does, or awarding virtual credit for reviews as with Publons. In one respect, however, journals have experimented surprisingly little: how to assign submitted papers to qualified reviewers.
The standard approach to reviewer selection is to match signed-up referees with submitted papers using a keyword search, the paper’s reference list, or the editors’ knowledge of the field and community. Reviewers are invited to review only one paper at a time (though often en masse to secure enough reviews), and if they decline, someone else must be invited. It’s an inefficient process.
Choice in work task allocation can improve performance
Our ongoing research on giving workers more choice in work task allocation in a manufacturing setting made me realize how few choices academic referees have when asked to review a paper for a journal. It’s basically “yes, I’ll take it” or “no, I won’t,” for a single paper from a single journal at a time. That seems to be the modus operandi across all disciplines I have encountered.
In our study in a factory context, productivity increased when workers could choose among several job tasks. The manufacturer we worked with had implemented a smartwatch-based work task allocation system: workers wore smartwatches showing open tasks that they could accept or reject. In a field experiment, we gave some workers the opportunity to select from a menu of open tasks instead of being offered only one. Our results showed that giving workers this choice improved their performance.
A new approach: Reviewer’s choice
As in the manufacturing setting, academic reviewers might do better in a system that empowers them with options. One way to improve peer review may be as simple as presenting potential referees with the titles and abstracts of a few submitted papers and letting them choose one to review.
The benefits of choice in reviewer allocation are plausible: referees are more likely to accept a review when asked to select one among several, and their reports should be more timely and more developmental when they are genuinely curious about the topic. For example, reviewers could choose one paper from a limited set of titles and abstracts matching their domain or methodological expertise.
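To make the mechanism concrete, here is a minimal sketch of how an editorial system could assemble such a menu. Everything in it is hypothetical: the Submission record, the build_menu function, and plain keyword overlap as a matching score stand in for whatever matching logic a real platform would use.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    """A submitted paper, as a reviewer would see it before choosing."""
    title: str
    abstract: str
    keywords: set[str] = field(default_factory=set)

def build_menu(pool: list[Submission],
               reviewer_keywords: set[str],
               k: int = 3) -> list[Submission]:
    """Rank open submissions by keyword overlap with the reviewer's
    expertise and return the top k as a menu to choose one from."""
    scored = sorted(pool,
                    key=lambda s: len(s.keywords & reviewer_keywords),
                    reverse=True)
    # Only offer papers with at least one matching keyword.
    return [s for s in scored if s.keywords & reviewer_keywords][:k]

# Illustrative use: the reviewer picks exactly one paper from `menu`
# instead of receiving a single take-it-or-leave-it invitation.
pool = [
    Submission("Choice and reviewer motivation", "…", {"peer review", "choice"}),
    Submission("Smartwatch task allocation", "…", {"operations", "choice"}),
    Submission("Clickbait titles in science", "…", {"peer review", "incentives"}),
]
menu = build_menu(pool, reviewer_keywords={"peer review", "choice"})
```

The design choice worth noting is that the reviewer, not the editor, makes the final match; the system only narrows the pool to papers within the reviewer’s stated expertise.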
Taking it further, publishers could consider pooling submissions from several journals on a cross-journal submission and peer review platform. This could help focus the review process on the research itself rather than on where it is submitted, in line with the San Francisco Declaration on Research Assessment (DORA). I note that a platform offering reviewers more choice may be better run with double-blind rather than single-blind review, to reduce biases based on affiliations and names.

What can go wrong
In light of the increased pressure on the publishing process, rethinking peer review is important in its own right. However, shifting to a choice-based system introduces a few new challenges. First, there is the risk of exposing authors’ ideas to a broader set of reviewers, some of whom may be more interested in mining ideas for their next project than in providing a constructive review. Relatedly, if the platform is cross-journal, authors may hesitate to expose their work to many reviewers in case it is rejected. Second, when competing for reviewers’ attention, authors may be tempted to use clickbait titles and abstracts, although this could backfire when reviewers do not find what they expected in the paper. Third, marginalized or new topics may attract no interested reviewers; as in the classic review process, such papers can still be handled by editors in parallel. These obstacles should be considered, but testing such a solution carries little risk.
Call to action
Publishers already run multi-journal submission platforms that make it easy for authors to submit papers to a range of journals or transfer manuscripts between them. Granting reviewers more choice should therefore be technically easy to implement. The simplest way would be to use the current platforms to offer each reviewer a small set of papers and ask them to choose one. A downside could be longer turnaround times if some papers sit unchosen, so pooling papers across a subset of journals to enlarge both the menu and the reviewer base could help.
For this to succeed, reviewers should be vetted and accept a code of conduct. Journal editors must accept that submissions to their journals will be reviewed with the same scrutiny as those to other journals in the pool. Perhaps there could be tit-for-tat guidelines, such as completing two or more constructive reviews for each paper an author team submits. Such rules work best at scale, with enough journals, reviewers, and papers in the pool.
Editors, who will try it first?
Citation: Netland (2025). “Reviewer’s choice.” Inside Higher Ed, April 10, 2025. https://www.insidehighered.com/opinion/views/2025/04/10/improve-peer-review-give-reviewers-more-choice-opinion