Concept: Writing occupations
- Proceedings of the National Academy of Sciences of the United States of America
Peer review may be “single-blind,” in which reviewers are aware of the names and affiliations of paper authors, or “double-blind,” in which this information is hidden. Noting that computer science research often appears first or exclusively in peer-reviewed conferences rather than journals, we study these two reviewing models in the context of the 10th Association for Computing Machinery International Conference on Web Search and Data Mining, a highly selective venue (15.6% acceptance rate) in which expert committee members review full-length submissions for acceptance. We present a controlled experiment in which four committee members review each paper. Two of these four reviewers are drawn from a pool of committee members with access to author information; the other two are drawn from a disjoint pool without such access. This information asymmetry persists through the process of bidding for papers, reviewing papers, and entering scores. Reviewers in the single-blind condition typically bid for 22% fewer papers and preferentially bid for papers from top universities and companies. Once papers are allocated to reviewers, single-blind reviewers are significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors, top universities, and top companies. The estimated odds multipliers are tangible, at 1.63, 1.58, and 2.10, respectively.
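To make the reported odds multipliers concrete, here is a minimal sketch (the helper name is ours, not the study's) that converts an odds multiplier into a change in acceptance probability at the venue's 15.6% base acceptance rate:

```python
# Hypothetical helper: interprets an odds multiplier (odds ratio) as a change
# in acceptance probability. The 15.6% base rate and the 1.63 multiplier come
# from the study summarized above; the function itself is illustrative.
def adjusted_probability(base_prob, odds_multiplier):
    odds = base_prob / (1 - base_prob)   # convert probability to odds
    new_odds = odds * odds_multiplier    # apply the estimated multiplier
    return new_odds / (1 + new_odds)     # convert odds back to probability

# At a 15.6% acceptance rate, an odds multiplier of 1.63 (famous authors)
# corresponds to an acceptance probability of roughly 23%.
print(round(adjusted_probability(0.156, 1.63), 3))
```

The conversion shows why an odds multiplier near 2 is substantial at a selective venue: the same multiplier shifts acceptance much more at low base rates than it would near 50%.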
To assess whether reports from reviewers recommended by authors differ in quality and in recommendations for editorial decisions from reports by reviewers suggested by other parties, and whether reviewer reports for journals operating open versus single-blind peer review models differ with regard to report quality and reviewer recommendations.
We completed a scoping review on the barriers and facilitators to use of systematic reviews by health care managers and policy makers, including consideration of format and content, to develop recommendations for systematic review authors and to inform research efforts to develop and test formats for systematic reviews that may optimise their uptake.
Systematic reviews are popular. A recent estimate indicates that 11 new systematic reviews are published daily. Nevertheless, evidence indicates that the quality of reporting of systematic reviews is not optimal. One likely reason is that the authors' reports have received inadequate peer review. There are now many different types of systematic reviews, and peer reviewing them can be enhanced by using a reporting guideline to supplement whatever template the journal editors have asked you, as a peer reviewer, to use. Additionally, keeping up with the current literature, both as a content expert and with regard to advances in systematic review methods, is likely to make for a more comprehensive and effective peer review. Providing a brief summary of what the systematic review has reported is an important first step in the peer review process (and one not performed frequently enough). At its core, it provides the authors with some sense of what the peer reviewer believes was performed (Methods) and found (Results). Importantly, it also provides clarity regarding any potential problems in the methods (including statistical approaches for meta-analysis), results, and interpretation of the systematic review, for which the peer reviewer can seek explanations from the authors; these clarifications are best presented as questions to the authors.
To assess, in a sample of systematic reviews of non-pharmacological interventions, the completeness of intervention reporting, identify the most frequently missing elements, and assess review authors' use of and beliefs about providing intervention information.
To determine whether librarian and information specialist authorship was associated with better reported systematic review (SR) search quality.
The author shares twelve practical tips on how to navigate the process of getting a manuscript published. These tips, which apply to all fields of academic writing, advise that during the initial preparation phase authors should: (1) plan early to get it out the door; (2) address authorship and writing group expectations up front; (3) maintain control of the writing; (4) ensure complete reporting; (5) use electronic reference management software; (6) polish carefully before they submit; (7) select the right journal; and (8) follow journal instructions precisely. Rejection after the first submission is likely, and when this occurs authors should (9) get it back out the door quickly, but first (10) take seriously all reviewer and editor suggestions. Finally, when the invitation comes to revise and resubmit, authors should (11) respond carefully to every reviewer suggestion, even if they disagree, and (12) get input from others as they revise. The author also shares detailed suggestions on the creation of effective tables and figures, and on how to respond to reviewer critiques.
The review process can be completely open, double-blinded, or somewhere in between. Double-blinded peer review, where neither the authors' nor peer reviewers' identities are shared with each other, is thought to be the fairest system, but there is evidence that it does not affect reviewer behavior or influence decisions. Furthermore, even without presenting author names, authorship is often apparent to reviewers, especially in small specialties. In conjunction with Plastic and Reconstructive Surgery (PRS), we examined the effect of double-blinded review on review quality, reviewer publishing recommendation, and reviewer manuscript rating. We hypothesized that double-blinded review would not improve review quality and would not affect recommendation or rating.
Peer reviewers sometimes request that authors cite their work, either appropriately or via coercive self-citation to highlight the reviewers' work. The objective of this study was to determine in peer reviews submitted to one biomedical journal (1) the extent of peer reviewer self-citation; (2) the proportion of reviews recommending revision or acceptance versus rejection that included reviewer self-citations; and (3) the proportion of reviewer self-citations versus citations to others that included a rationale.
To describe how systematic review authors report and address categories of participants with potential missing outcome data of trial participants.