- Proceedings of the National Academy of Sciences of the United States of America
Peer review may be “single-blind,” in which reviewers are aware of the names and affiliations of paper authors, or “double-blind,” in which this information is hidden. Noting that computer science research often appears first or exclusively in peer-reviewed conferences rather than journals, we study these two reviewing models in the context of the 10th Association for Computing Machinery International Conference on Web Search and Data Mining, a highly selective venue (15.6% acceptance rate) in which expert committee members review full-length submissions for acceptance. We present a controlled experiment in which four committee members review each paper. Two of these four reviewers are drawn from a pool of committee members with access to author information; the other two are drawn from a disjoint pool without such access. This information asymmetry persists through the process of bidding for papers, reviewing papers, and entering scores. Reviewers in the single-blind condition typically bid for 22% fewer papers and preferentially bid for papers from top universities and companies. Once papers are allocated to reviewers, single-blind reviewers are significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors, top universities, and top companies. The estimated odds multipliers are tangible, at 1.63, 1.58, and 2.10, respectively.
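To make the reported odds multipliers concrete, here is a small illustrative calculation. The 15.6% acceptance rate and the multipliers 1.63, 1.58, and 2.10 come from the abstract above; the helper function and the use of the venue-wide rate as a baseline are our own simplification (the study's actual estimates come from a regression model, not this back-of-the-envelope conversion):

```python
def apply_odds_multiplier(p, multiplier):
    """Convert a probability to odds, scale the odds, convert back."""
    odds = p / (1 - p)
    new_odds = odds * multiplier
    return new_odds / (1 + new_odds)

base = 0.156  # venue-wide acceptance rate from the abstract

# Illustrative only: apply each reported odds multiplier to the base rate.
for label, m in [("famous author", 1.63),
                 ("top university", 1.58),
                 ("top company", 2.10)]:
    print(f"{label}: {apply_odds_multiplier(base, m):.3f}")
```

Under this rough reading, an odds multiplier of 1.63 moves a 15.6% acceptance chance to about 23%, and 2.10 moves it to about 28% — a sizable shift for otherwise identical papers.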
Recent reports suggest that peer reviews of National Institutes of Health grant applications are at best imprecise predictors of research projects' scientific impact. But these findings may not mean that peer review is failing.
Systematic reviews are generally placed above narrative reviews in an assumed hierarchy of secondary research evidence. We argue that systematic reviews and narrative reviews serve different purposes and should be viewed as complementary. Conventional systematic reviews address narrowly focused questions; their key contribution is summarising data. Narrative reviews provide interpretation and critique; their key contribution is deepening understanding.
Systematic reviews are popular. A recent estimate indicates that 11 new systematic reviews are published daily. Nevertheless, evidence indicates that the quality of reporting of systematic reviews is not optimal. One likely reason is that the authors' reports have received inadequate peer review. There are now many different types of systematic reviews, and peer reviewing them can be enhanced by using a reporting guideline to supplement whatever template the journal editors have asked you, as a peer reviewer, to use. Additionally, keeping up with the current literature, whether as a content expert or by staying aware of advances in systematic review methods, is likely to make for a more comprehensive and effective peer review. Providing a brief summary of what the systematic review has reported is an important first step in the peer review process (and one not performed frequently enough). At its core, it provides the authors with some sense of what the peer reviewer believes was performed (Methods) and found (Results). Importantly, it also provides clarity regarding any potential problems in the methods (including statistical approaches for meta-analysis), results, and interpretation of the systematic review, for which the peer reviewer can seek explanations from the authors; these clarifications are best presented as questions to the authors.
Peer review is the gold standard for scientific communication, but its ability to guarantee the quality of published research remains difficult to verify. Recent modeling studies suggest that peer review is sensitive to reviewer misbehavior, and it has been claimed that referees who sabotage work they perceive as competition may severely undermine the quality of publications. Here we examine which aspects of suboptimal reviewing practices most strongly impact quality, and test different mitigating strategies that editors may employ to counter them. We find that the biggest hazard to the quality of published literature is not selfish rejection of high-quality manuscripts but indifferent acceptance of low-quality ones. Bypassing or blacklisting bad reviewers and consulting additional reviewers to settle disagreements can reduce but not eliminate the impact. The other editorial strategies we tested do not significantly improve quality, but matching manuscripts to reviewers unlikely to selfishly reject them and allowing revision of rejected manuscripts minimize rejection of above-average manuscripts. In its current form, peer review offers few incentives for impartial reviewing efforts. Editors can help, but structural changes are likely to have a stronger impact.
Peer review is an important element of scientific communication but deserves quantitative examination. We used data from the manuscript-handling service Manuscript Central for ten mid-tier ecology and evolution journals to test whether the number of external reviews completed improved citation rates for accepted manuscripts. Contrary to a previous study that examined this issue using resubmission data as a proxy for reviews, we show that citation rates of manuscripts do not correlate with the number of individuals who provided reviews. Importantly, externally reviewed papers do not outperform editor-only-reviewed published papers in terms of visibility within a 5-year citation window. These findings suggest that, if the purpose of peer review is primarily to filter, editors can in many instances be all that is needed to review papers (or at least to conduct the critical first review assessing general suitability), and that journals can consider reducing the number of referees assigned to ecology and evolution papers.
Systematic reviews, with or without meta-analysis, play an important role today in synthesizing cancer research and are frequently used to guide decision-making. However, there is now an increase in the number of systematic reviews on the same topic, thereby necessitating a systematic review of previous systematic reviews. With a focus on cancer, the purpose of this article is to provide a practical, stepwise approach for systematically reviewing the literature and publishing the results. This starts with the registration of a protocol for a systematic review of previous systematic reviews and ends with the publication of an original or updated systematic review, with or without meta-analysis, in a peer-reviewed journal. Future directions as well as potential limitations of the approach are also discussed. It is hoped that the stepwise approach presented in this article will be helpful to both producers and consumers of cancer-related systematic reviews and will contribute to the ultimate goal of preventing and treating cancer.
BACKGROUND: Prior efforts to train medical journal peer reviewers have not improved subsequent review quality, although such interventions were general and brief. We hypothesized that a manuscript-specific and more extended intervention pairing new reviewers with high-quality senior reviewers as mentors would improve subsequent review quality. METHODS: Over a four-year period, we randomly assigned all new reviewers for Annals of Emergency Medicine to receive our standard written informational materials alone, or these materials plus a new mentoring intervention. For this program we paired new reviewers with a high-quality senior reviewer for each of their first three manuscript reviews, and asked mentees to discuss their review with their mentor by email or phone. We then compared the quality of subsequent reviews between the control and intervention groups, using linear mixed effects models of the slopes of review quality scores over time. RESULTS: We studied 490 manuscript reviews, with similar baseline characteristics between the 24 mentees who completed the trial and the 22 control reviewers. Mean quality scores for the first three reviews on our 1-to-5-point scale were similar between the control and mentee groups (3.4 versus 3.5), as were slopes of change of review scores (-0.229 versus -0.549) and all other secondary measures of reviewer performance. CONCLUSIONS: A structured training intervention pairing newly recruited medical journal peer reviewers with senior reviewer mentors did not improve the quality of their subsequent reviews.
To draw from systematic and other literature reviews to identify, describe, and critique nonpharmacological practices to address behavioral and psychological symptoms of dementia (BPSDs) and provide evidence-based recommendations for dementia care especially useful for potential adopters.
- Journal of radiological protection : official journal of the Society for Radiological Protection
Reviewers for Journal of Radiological Protection (JRP) are now able to track, verify and showcase their peer review contributions. IOP Publishing has partnered with Publons, a free service that enables reviewers to seamlessly have their reviewer records updated in Publons with the click of a button. As of 2017, reviewers for JRP will get the option to have a verified record of each review they do added to their Publons profile. Recognition will be given even if the reviews are anonymous and the manuscript is never published. By default, only the name of the journal and the year of the review will be displayed, so anonymity of the reviewers is completely protected. To find out more visit: www.publons.com/in/iop.