SciCombinator

Discover the latest and most talked-about scientific content & concepts.

Concept: Peer review

489

Peer review may be “single-blind,” in which reviewers are aware of the names and affiliations of paper authors, or “double-blind,” in which this information is hidden. Noting that computer science research often appears first or exclusively in peer-reviewed conferences rather than journals, we study these two reviewing models in the context of the 10th Association for Computing Machinery International Conference on Web Search and Data Mining, a highly selective venue (15.6% acceptance rate) in which expert committee members review full-length submissions for acceptance. We present a controlled experiment in which four committee members review each paper. Two of these four reviewers are drawn from a pool of committee members with access to author information; the other two are drawn from a disjoint pool without such access. This information asymmetry persists through the process of bidding for papers, reviewing papers, and entering scores. Reviewers in the single-blind condition typically bid for 22% fewer papers and preferentially bid for papers from top universities and companies. Once papers are allocated to reviewers, single-blind reviewers are significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors, top universities, and top companies. The estimated odds multipliers are tangible, at 1.63, 1.58, and 2.10, respectively.
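
To make the reported odds multipliers concrete, the sketch below converts them into shifts in acceptance probability. This is a minimal illustration, not the paper's analysis: it assumes the venue's overall 15.6% acceptance rate as a stand-in baseline (the study's actual baseline is the double-blind reviewers' recommendation rate, which is not quoted here), and the helper function name is purely illustrative.

    # Minimal sketch: interpret an odds multiplier as a shift in acceptance probability.
    # Baseline is the 15.6% overall acceptance rate quoted above (an assumption made
    # for illustration only); 1.63 / 1.58 / 2.10 are the multipliers reported for
    # famous authors, top universities, and top companies.

    def apply_odds_multiplier(base_prob: float, multiplier: float) -> float:
        """Convert a probability to odds, scale the odds, convert back."""
        base_odds = base_prob / (1.0 - base_prob)
        new_odds = base_odds * multiplier
        return new_odds / (1.0 + new_odds)

    baseline = 0.156  # overall acceptance rate of the venue
    for label, mult in [("famous author", 1.63),
                        ("top university", 1.58),
                        ("top company", 2.10)]:
        p = apply_odds_multiplier(baseline, mult)
        print(f"{label}: odds x{mult} -> acceptance probability of roughly {p:.1%}")

Under this rough assumption, a 2.10 odds multiplier moves acceptance from about 16% to about 28%, which is why the paper describes the effect as tangible.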

Concepts: Scientific method, Experiment, Computer, Peer review, Review, Book review, Critic, Writing occupations

394

Industry sponsors' financial interests might bias the conclusions of scientific research. We examined whether financial industry funding or the disclosure of potential conflicts of interest influenced the results of published systematic reviews (SRs) conducted in the field of sugar-sweetened beverages (SSBs) and weight gain or obesity.

Concepts: Scientific method, Mathematics, Science, Research, The Association, Peer review, Review, Finance

302

The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.

Concepts: Scientific method, Academic publishing, Assessment, Psychometrics, Peer review, Impact factor, Scientific journal, Open access

290

In the past few years there has been an ongoing debate as to whether the proliferation of open access (OA) publishing would damage the peer review system and put the quality of scientific journal publishing at risk. Our aim was to inform this debate by comparing the scientific impact of OA journals with subscription journals, controlling for journal age, the country of the publisher, discipline and (for OA publishers) their business model.

Concepts: Academic publishing, Peer review, Scientific journal, Publishing, Open access

264

The growth in scientific production may threaten the capacity of the scientific community to handle the ever-increasing demand for peer review of scientific publications. There is little evidence regarding the sustainability of the peer-review system and how the scientific community copes with the burden it poses. We used mathematical modeling to estimate the overall annual demand for peer review and the supply in biomedical research. The modeling was informed by empirical data from various sources in the biomedical domain, including all articles indexed in MEDLINE. We found that for 2015, across a range of scenarios, the supply of reviewers and reviews exceeded the demand by 15% to 249%. However, 20% of the researchers performed 69% to 94% of the reviews. Among researchers actually contributing to peer review, 70% dedicated 1% or less of their research work-time to peer review, while 5% dedicated 13% or more of it. An estimated 63.4 million hours were devoted to peer review in 2015, of which 18.9 million hours were provided by the top 5% of contributing reviewers. Our results suggest that the system is sustainable in terms of volume but highlight a considerable imbalance in the distribution of peer-review effort across the scientific community. Finally, various individual interactions between authors, editors, and reviewers may somewhat reduce the number of reviewers available to editors at any given point.
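
A small back-of-the-envelope sketch of the figures above, using only the numbers quoted in the abstract (the variable names are illustrative):

    # Back-of-the-envelope check of the concentration and supply figures quoted above.
    # Only the numbers from the abstract are used; everything else is illustrative.

    total_hours = 63.4e6   # estimated hours devoted to peer review in 2015
    top5_hours = 18.9e6    # hours provided by the top 5% of contributing reviewers

    top5_share = top5_hours / total_hours
    print(f"Top 5% of reviewers supplied about {top5_share:.0%} of all review hours")

    # "Supply exceeded demand by 15% to 249%" corresponds to supply/demand ratios
    # of roughly 1.15x to 3.49x across the modeled scenarios.
    for excess in (0.15, 2.49):
        print(f"Supply/demand ratio: {1 + excess:.2f}x")

In other words, a twentieth of the contributing reviewers supplies close to a third of the total reviewing time, which is the imbalance the authors emphasize.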

Concepts: Scientific method, Mathematics, Academic publishing, Science, Research, Peer review, Review, Scientific community

251

Inaccurate data in scientific papers can result from honest error or intentional falsification. This study attempted to determine the percentage of published papers that contain inappropriate image duplication, a specific type of inaccurate data. The images from a total of 20,621 papers published in 40 scientific journals from 1995 to 2014 were visually screened. Overall, 3.8% of published papers contained problematic figures, with at least half exhibiting features suggestive of deliberate manipulation. The prevalence of papers with problematic images has risen markedly during the past decade. Other papers written by authors of a paper with problematic images were also more likely to contain problematic images. As this analysis focused on only one type of data, the actual prevalence of inaccurate data in the published literature is likely higher. The marked variation in the frequency of problematic images among journals suggests that journal practices, such as prepublication image screening, influence the quality of the scientific literature.
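
For scale, the quoted percentages translate into approximate absolute counts as follows (a trivial arithmetic sketch; only the figures from the abstract are used):

    # Translate the quoted screening percentages into approximate paper counts.
    papers_screened = 20_621
    problematic_rate = 0.038                 # 3.8% of screened papers
    problematic = papers_screened * problematic_rate
    deliberate_lower_bound = problematic / 2  # "at least half" show features suggestive of manipulation
    print(f"~{problematic:.0f} papers with problematic figures, "
          f"of which at least ~{deliberate_lower_bound:.0f} suggest deliberate manipulation")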

Concepts: Academic publishing, Science, Peer review, Literature, Scientific journal, Publishing, Open access, Scientific literature

232

Increasing public interest in science information in a digital, Science 2.0 era is driving a dramatic, rapid, and deep change in science itself. The emergence and expansion of new technologies and internet-based tools are leading to new ways to improve scientific methodology and communication, assessment, promotion, and certification. They enable new methods of data acquisition, manipulation, and storage, generating vast quantities of data that can further facilitate the research process. They also improve access to scientific results through information sharing and discussion. Content previously restricted to specialists is now available to a wider audience. This context requires new management systems to make scientific knowledge more accessible and usable, including new measures to evaluate the reach of scientific information. These new measures of science and research quality are strongly related to new online technologies and services based on social media. Tools such as blogs, social bookmarks, online reference managers, Twitter, and others offer alternative, transparent, and more comprehensive information about the active interest in, usage of, and reach of scientific publications. Another of these new filters is the Research Blogging (RB) platform, created in 2007, which now hosts over 1,230 active blogs and over 26,960 entries posted about peer-reviewed research on subjects ranging from Anthropology to Zoology. This study takes a closer look at RB in order to gain insight into its contribution to the rapidly changing landscape of scientific communication.

Concepts: Scientific method, Mathematics, Epistemology, Science, Research, Peer review, Technology, Scientific literature

204

In August 2015, the publisher Springer retracted 64 articles from 10 different subscription journals “after editorial checks spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports,” according to a statement on their website.(1) The retractions came only months after BioMed Central, an open-access publisher also owned by Springer, retracted 43 articles for the same reason. “This is officially becoming a trend,” Alison McCook wrote on the blog Retraction Watch, referring to the increasing number of retractions due to fabricated peer reviews.(2) Since it was first reported 3 years ago, when South Korean researcher Hyung-in Moon admitted . . .

Concepts: Scientific method, Academic publishing, Peer review, Review, Scientific journal, Publishing, Open access, Retraction

180

A negative consequence of the rapid growth of scholarly open access publishing funded by article processing charges is the emergence of publishers and journals with highly questionable marketing and peer review practices. These so-called predatory publishers are causing unfounded negative publicity for open access publishing in general. Reports about this branch of e-business have so far mainly concentrated on exposing the lack of peer review and on scandals involving publishers and journals. There is a lack of comprehensive studies of several aspects of this phenomenon, including its extent and regional distribution.

Concepts: Longitudinal study, Ecology, Sociology, Academic publishing, Academia, Peer review, Open source, Open access

172

The 2014 academic scandal in Japan over a study on stimulus-triggered acquisition of pluripotency (STAP) cells involved suspicions of scientific misconduct by the study's lead author after the paper had been reviewed on a peer-review website. This study investigated Twitter discussions about STAP cells and the content of newspaper articles in an attempt to assess the role of social media, compared with traditional media, in scientific peer review.

Concepts: Scientific method, Academia, Peer review, Review, Mass media, Journalism, Twitter, Scientific misconduct