The number of retracted scientific publications has risen sharply, but it is unclear whether this reflects an increase in publication of flawed articles or an increase in the rate at which flawed articles are withdrawn.
Journal impact factors have become an important criterion to judge the quality of scientific publications over the years, influencing the evaluation of institutions and individual researchers worldwide. However, they are also subject to a number of criticisms. Here we point out that the calculation of a journal’s impact factor is mainly based on the date of publication of its articles in print form, despite the fact that most journals now make their articles available online before that date. We analyze 61 neuroscience journals and show that delays between online and print publication of articles increased steadily over the last decade. Importantly, such a practice varies widely among journals, as some of them have no delays, while for others this period is longer than a year. Using a modified impact factor based on online rather than print publication dates, we demonstrate that online-to-print delays can artificially raise a journal’s impact factor, and that this inflation is greater for longer publication lags. We also show that correcting the effect of publication delay on impact factors changes journal rankings based on this metric. We thus suggest that indexing of articles in citation databases and calculation of citation metrics should be based on the date of an article’s online appearance, rather than on that of its publication in print.
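The two-year impact factor described above is citations received in year Y to items published in years Y-1 and Y-2, divided by the number of such items. The sketch below, using hypothetical articles and citation records (all names and data invented for illustration), shows how switching the indexing date from print to online can change which articles fall inside the window and thus deflate an inflated score:

```python
from datetime import date

def impact_factor(articles, citations, year, date_key):
    """Two-year impact factor for `year`: citations received in `year`
    to articles whose publication date (selected by `date_key`, either
    "print" or "online") falls in the two preceding years, divided by
    the number of such articles."""
    window = (year - 2, year - 1)
    pool = [a for a in articles if a[date_key].year in window]
    ids = {a["id"] for a in pool}
    cites = sum(1 for c in citations if c["year"] == year and c["cited"] in ids)
    return cites / len(pool) if pool else 0.0

# Hypothetical data: article 1 appeared online in late 2012 but in
# print only in 2013, so by print-date rules its early citations
# still count toward the 2015 impact factor.
articles = [
    {"id": 1, "online": date(2012, 12, 1), "print": date(2013, 3, 1)},
    {"id": 2, "online": date(2014, 1, 10), "print": date(2014, 2, 1)},
]
citations = [
    {"year": 2015, "cited": 1},
    {"year": 2015, "cited": 1},
    {"year": 2015, "cited": 1},
    {"year": 2015, "cited": 2},
]

print(impact_factor(articles, citations, 2015, "print"))   # 2.0
print(impact_factor(articles, citations, 2015, "online"))  # 1.0
```

With print dates both articles sit in the 2013-2014 window and all four citations count; with online dates, article 1 moves back to 2012 and drops out, halving the metric, which is the inflation mechanism the abstract describes.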
A graphical abstract (GA) is a piece of artwork intended to summarize the main findings of an article for readers at a single glance. Many publishers currently encourage authors to supplement their articles with GAs, in the hope that such a convenient visual summary will give readers a clearer overview of papers of interest and will improve the overall visibility of the respective publication. To test this assumption, we statistically compared publications with or without GA published in Molecules between March 2014 and March 2015 with regard to several output parameters reflecting visibility. Contrary to our expectations, manuscripts published without GA performed significantly better in terms of PDF downloads, abstract views, and total citations than manuscripts with GA. To the best of our knowledge, this is the first empirical study on the effectiveness of GA for attracting attention to scientific publications.
- Journal of Occupational Medicine and Toxicology (London, England)
- Published over 6 years ago
Bicycle trauma is very common, and neurologic complications in particular lead to disability and death at all stages of life. This review assembles the most recent findings from research in the field of bicycle trauma combined with the factor of bicycle helmet use. The area of bicycle trauma research is by nature multidisciplinary and relevant not only for physicians but also for experts with educational, engineering, judicial, rehabilitative or public health functions. Given this plurality of global publications and special subjects, short-term reviews help to detect recent research directions and also provide researchers with information from neighbouring disciplines. It can be stated that, although a huge amount of research has been conducted in this area to date, more studies are needed to evaluate and improve special conditions and needs in different regions, ages and nationalities, and to create successful programs for preventing severe head and face injuries while cycling. The focus was explicitly on bicycle helmet use; sledding, ski and snowboard studies were therefore excluded, and only one study concerning electric bicycles remained in this review, due to similar motion structures. The considered studies were all published between January 2010 and August 2011 and were identified via the online databases Medline PubMed and ISI Web of Science.
Science is facing a “replication crisis” in which many experimental findings cannot be replicated and are likely to be false. Does this imply that many scientific facts are false as well? To find out, we explore the process by which a claim becomes fact. We model the community’s confidence in a claim as a Markov process with successive published results shifting the degree of belief. Publication bias in favor of positive findings influences the distribution of published results. We find that unless a sufficient fraction of negative results are published, false claims frequently can become canonized as fact. Data-dredging, p-hacking, and similar behaviors exacerbate the problem. Should negative results become easier to publish as a claim approaches acceptance as a fact, however, true and false claims would be more readily distinguished. To the degree that the model reflects the real world, there may be serious concerns about the validity of purported facts in some disciplines.
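The dynamic sketched in this abstract can be illustrated with a toy simulation. This is not the authors' actual model; all parameter names and values below are invented assumptions. Community belief in a claim performs a random walk nudged up by published positive results and down by published negative results, but negative results reach print only with some probability `beta` (publication bias):

```python
import random

def canonize(true_claim, p_pos_if_true=0.8, p_pos_if_false=0.2,
             beta=0.2, step=0.1, hi=0.99, lo=0.01, rng=random):
    """Toy canonization process: each experiment yields a positive or
    negative result; negatives are published only with probability
    `beta`. Published positives nudge community belief up, published
    negatives nudge it down, until the claim is canonized as fact
    (belief >= hi) or discarded (belief <= lo). Returns True if
    the claim was canonized."""
    belief = 0.5
    while lo < belief < hi:
        p_pos = p_pos_if_true if true_claim else p_pos_if_false
        if rng.random() < p_pos:
            belief = min(1.0, belief + step)      # published positive
        elif rng.random() < beta:
            belief = max(0.0, belief - step)      # published negative
        # unpublished negative results leave belief unchanged
    return belief >= hi

random.seed(0)
trials = 2000
false_canonized = sum(canonize(False, beta=0.05) for _ in range(trials)) / trials
print(f"false claims canonized under strong bias: {false_canonized:.2f}")
```

Even though a false claim yields mostly negative results here (80% of experiments), suppressing those results makes upward nudges far more frequent than downward ones, so nearly every false claim drifts to canonization, which mirrors the paper's central warning.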
- Proceedings of the National Academy of Sciences of the United States of America
- Published about 3 years ago
Scientific publications enable results and ideas to be transmitted throughout the scientific community. The number and type of journal publications also have become the primary criteria used in evaluating career advancement. Our analysis suggests that publication practices have changed considerably in the life sciences over the past 30 years. More experimental data are now required for publication, and the average time required for graduate students to publish their first paper has increased and is approaching the desirable duration of PhD training. Because publication is generally a requirement for career progression, schemes to reduce the time of graduate student and postdoctoral training may be difficult to implement without also considering new mechanisms for accelerating communication of their work. The increasing time to publication also delays potential catalytic effects that ensue when many scientists have access to new information. The time has come for life scientists, funding agencies, and publishers to discuss how to communicate new findings in a way that best serves the interests of the public and the scientific community.
Some scholars add authors to their research papers or grant proposals even when those individuals contribute nothing to the research effort. Some journal editors coerce authors to add citations that are not pertinent to their work and some authors pad their reference lists with superfluous citations. How prevalent are these types of manipulation, why do scholars stoop to such practices, and who among us is most susceptible to such ethical lapses? This study builds a framework around how intense competition for limited journal space and research funding can encourage manipulation and then uses that framework to develop hypotheses about who manipulates and why they do so. We test those hypotheses using data from over 12,000 responses to a series of surveys sent to more than 110,000 scholars from eighteen different disciplines spread across science, engineering, social science, business, and health care. We find widespread misattribution in publications and in research proposals with significant variation by academic rank, discipline, sex, publication history, co-authors, etc. Even though the majority of scholars disapprove of such tactics, many feel pressured to make such additions while others suggest that it is just the way the game is played. The findings suggest that certain changes in the review process might help to stem this ethical decline, but progress could be slow.
In a 2005 paper that has been accessed more than a million times, John Ioannidis explained why most published research findings were false. Here he revisits the topic, this time to address how to improve matters. Please see later in the article for the Editors' Summary.
Scientific societies provide numerous services to the scientific enterprise, including convening meetings, publishing journals, developing scientific programs, advocating for science, promoting education, providing cohesion and direction for the discipline, and more. For most scientific societies, publishing provides revenues that support these important activities. In recent decades, the proportion of papers on microbiology published in scientific society journals has declined. This is largely due to two competing pressures: authors' drive to publish in “glam journals” (those with high journal impact factors) and the availability of “mega journals,” which offer speedy publication of articles regardless of their potential impact. The decline in submissions to scientific society journals and the lack of enthusiasm on the part of many scientists to publish in them should be matters of serious concern to all scientists because they impact the service that scientific societies can provide to their members and to science.
Debates over the pros and cons of a “publish or perish” philosophy have inflamed academia for at least half a century. Growing concerns, in particular, are expressed for policies that reward “quantity” at the expense of “quality,” because these might prompt scientists to unduly multiply their publications by fractioning (“salami slicing”), duplicating, rushing, simplifying, or even fabricating their results. To assess the reasonableness of these concerns, we analyzed publication patterns of over 40,000 researchers who, between the years 1900 and 2013, published two or more papers within 15 years, in any of the disciplines covered by the Web of Science. The total number of papers published by researchers during their early career period (first fifteen years) has increased in recent decades, but so has their average number of co-authors. If we take the latter factor into account by measuring productivity fractionally, or by only counting papers published as first author, we observe no increase in productivity throughout the century. Even after the 1980s, adjusted productivity has not increased for most disciplines and countries. These results are robust to methodological choices and are actually conservative with respect to the hypothesis that publication rates are growing. Therefore, the widespread belief that pressures to publish are causing the scientific literature to be flooded with salami-sliced, trivial, incomplete, duplicated, plagiarized and false results is likely to be incorrect or at least exaggerated.
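Fractional counting, as used in this analysis, credits each author 1/n of a paper with n co-authors, so adding co-authors does not inflate an individual's output. A minimal sketch with invented career records shows why raw and fractional counts can diverge:

```python
def fractional_productivity(author_counts):
    """Fractional counting: each paper contributes 1/(number of
    co-authors) to an individual's productivity."""
    return sum(1.0 / n for n in author_counts)

# Hypothetical early-career records: number of authors on each paper.
solo_era = [1, 2, 1, 2]        # 4 papers, small teams
team_era = [4, 5, 6, 5, 4, 6]  # 6 papers, large teams

print(fractional_productivity(solo_era))  # 3.0
print(fractional_productivity(team_era))
```

The raw paper count rises from 4 to 6, yet the fractional count falls (3.0 versus roughly 1.23), illustrating how growing team sizes can mask flat individual productivity, which is the adjustment the study applies.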