Clarity and accuracy of reporting are fundamental to the scientific process. Readability formulas can estimate how difficult a text is to read. Here, in a corpus of 709,577 abstracts published between 1881 and 2015 in 123 scientific journals, we show that the readability of science is steadily decreasing. Our analyses show that this trend reflects a growing use of general scientific jargon. These results are concerning for scientists and for the wider public, as they affect both the reproducibility and accessibility of research findings.
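Readability formulas of this kind combine simple surface statistics such as sentence length and word length. As a generic illustration (the specific formulas used in the study are not reproduced here), a minimal sketch of one widely used measure, the Flesch Reading Ease score:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores indicate easier text.

    Scores of 60-70 correspond to plain English; scores below ~30
    are generally considered very difficult, typical of academic prose.
    Longer sentences (words/sentences) and longer words
    (syllables/words) both lower the score.
    """
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Example: a 100-word passage with 5 sentences and 150 syllables
score = flesch_reading_ease(words=100, sentences=5, syllables=150)  # ~59.6
```

A falling corpus-wide average on a score like this, year over year, is the kind of trend the study reports.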
Open access, open data, open source, and other open scholarship practices are growing in popularity and necessity. However, widespread adoption of these practices has not yet been achieved. One reason is that researchers are uncertain about how sharing their work will affect their careers. We review literature demonstrating that open research is associated with increases in citations, media attention, potential collaborators, job opportunities, and funding opportunities. These findings are evidence that open research practices bring significant benefits to researchers relative to more traditional closed practices.
We surveyed 113 astronomers and 82 psychologists active in applying for federally funded research on their grant-writing history between January 2009 and November 2012. We collected demographic data, effort levels, success rates, and perceived non-financial benefits from writing grant proposals. We find that the average proposal takes 116 principal investigator (PI) hours and 55 co-investigator (CI) hours to write, although time spent writing was not related to whether the grant was funded. Effort did translate into success, however, as academics who wrote more grants received more funding. Participants indicated modest non-monetary benefits from grant writing, with psychologists reporting a somewhat greater benefit overall than astronomers. These perceptions of non-financial benefits were unrelated to how many grants investigators applied for, the number of grants they received, or the amount of time they devoted to writing their proposals. We also explored the number of years an investigator can afford to apply unsuccessfully for research grants, and our analyses suggest that funding rates below approximately 20%, commensurate with current NIH and NSF funding, are likely to drive at least half of the active researchers away from federally funded research. We conclude with recommendations and suggestions for individual investigators and for department heads.
Industry sponsors' financial interests might bias the conclusions of scientific research. We examined whether industry funding or the disclosure of potential financial conflicts of interest influenced the results of published systematic reviews (SRs) conducted in the field of sugar-sweetened beverages (SSBs) and weight gain or obesity.
To examine changes in the representation of women among first authors of original research published in high-impact general medical journals from 1994 to 2014 and to investigate differences between journals.
A paper from the Open Science Collaboration (Research Articles, 28 August 2015, aac4716) attempting to replicate 100 published studies suggests that the reproducibility of psychological science is surprisingly low. We show that this article contains three statistical errors and provides no support for such a conclusion. Indeed, the data are consistent with the opposite conclusion, namely, that the reproducibility of psychological science is quite high.
- Proceedings of the National Academy of Sciences of the United States of America
This work examines the contribution of NIH funding to published research associated with 210 new molecular entities (NMEs) approved by the Food and Drug Administration from 2010 to 2016. We identified >2 million publications in PubMed related to the 210 NMEs (n = 131,092) or their 151 known biological targets (n = 1,966,281). Of these, >600,000 (29%) were associated with NIH-funded projects in RePORTER. This funding included >200,000 fiscal years of NIH project support (1985-2016) and project costs >$100 billion (2000-2016), representing ∼20% of the NIH budget over this period. NIH funding contributed to every one of the NMEs approved from 2010 to 2016 and was focused primarily on the drug targets rather than on the NMEs themselves. There were 84 first-in-class products approved in this interval, associated with >$64 billion of NIH-funded projects. The percentage of fiscal years of project funding identified through target searches, but not drug searches, was greater for NMEs discovered through targeted screening than through phenotypic methods (95% versus 82%). For targeted NMEs, funding related to targets preceded funding related to the NMEs, consistent with the expectation that basic research provides validated targets for targeted screening. This analysis, which captures basic research on biological targets as well as applied research on NMEs, suggests that the NIH contribution to research associated with new drug approvals is greater than previously appreciated and highlights the risk of reducing federal funding for basic biomedical research.
We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness (i.e., publication record and career success). Here, we develop an optimality model that predicts the most rational research strategy, in terms of the proportion of research effort spent on seeking novel results rather than on confirmatory studies, and the amount of research effort per exploratory study. We show that, for parameter values derived from the scientific literature, researchers acting to maximise their fitness should spend most of their effort seeking novel results and conduct small studies that have only 10%-40% statistical power. As a result, half of the studies they publish will report erroneous conclusions. Current incentive structures are in conflict with maximising the scientific value of research; we suggest ways that the scientific ecosystem could be improved.
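The claim that half of the published studies would report erroneous conclusions follows from standard positive-predictive-value arithmetic: when underpowered studies chase mostly-false novel hypotheses, a large share of the significant results that get published are false positives. A toy illustration (the prior probability and significance threshold below are illustrative assumptions, not the paper's fitted parameters):

```python
def ppv(power: float, prior: float, alpha: float = 0.05) -> float:
    """Positive predictive value: the fraction of statistically
    significant results that reflect true effects, given the study's
    power, the prior probability that a tested hypothesis is true,
    and the false-positive rate alpha."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# Small exploratory studies (25% power) testing novel, unlikely
# hypotheses (10% prior): roughly a third of published positives
# are true, i.e. most are erroneous.
low_power = ppv(power=0.25, prior=0.10)   # ~0.36
# Well-powered confirmatory studies fare much better:
high_power = ppv(power=0.80, prior=0.10)  # ~0.64
```

The optimality model's point is that the first regime, not the second, is what publication-based incentives reward.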
Despite their recognized limitations, bibliometric assessments of scientific productivity have been widely adopted. We describe here an improved method to quantify the influence of a research article by making novel use of its co-citation network to field-normalize the number of citations it has received. Article citation rates are divided by an expected citation rate that is derived from the performance of articles in the same field and benchmarked to a peer comparison group. The resulting Relative Citation Ratio is article-level and field-independent and provides an alternative to the invalid practice of using journal impact factors to identify influential papers. To illustrate one application of our method, we analyzed 88,835 articles published between 2003 and 2010 and found that the National Institutes of Health awardees who authored those papers occupy relatively stable positions of influence across all disciplines. We demonstrate that the values generated by this method strongly correlate with the opinions of subject matter experts in biomedical research and suggest that the same approach should be generally applicable to articles published in all areas of science. A beta version of iCite, our web tool for calculating Relative Citation Ratios of articles listed in PubMed, is available at https://icite.od.nih.gov.
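In outline, the Relative Citation Ratio divides an article's citation rate by an expected rate for its field. A much-simplified sketch of that ratio (the real method derives the field rate from the article's co-citation network and calibrates it against an NIH R01-funded benchmark group; the plain averaging below is a placeholder assumption):

```python
def relative_citation_ratio(article_cites_per_year: float,
                            cocited_field_rates: list[float]) -> float:
    """Simplified RCR: an article's citation rate divided by the mean
    citation rate of its co-citation neighbourhood, which stands in
    for a field-specific expected rate. An RCR of 1.0 means the
    article is cited at the field-typical rate; values above 1.0
    indicate above-field influence."""
    expected = sum(cocited_field_rates) / len(cocited_field_rates)
    return article_cites_per_year / expected

# An article cited 12 times/year whose co-cited neighbours average
# 4 cites/year is three times as influential as its field norm:
rcr = relative_citation_ratio(12.0, [3.0, 4.0, 5.0])  # 3.0
```

Because the denominator is built from the article's own citation neighbourhood rather than its journal, the measure avoids the journal-impact-factor problem the abstract criticizes.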
Numerous concerns have been raised about the sustainability of the biomedical research enterprise in the United States. Improving the postdoctoral training experience is seen as a priority in addressing these concerns, but even identifying who the postdocs are is made difficult by the multitude of different job titles they can carry. Here, we summarize the detrimental effects that current employment structures have on training, compensation and benefits for postdocs, and argue that academic research institutions should standardize the categorization and treatment of postdocs. We also present brief case studies of two institutions that have addressed these challenges and can provide models for other institutions attempting to enhance their postdoctoral workforces and improve the sustainability of the biomedical research enterprise.