SciCombinator

Discover the latest and most talked-about scientific content and concepts.

Concept: Academic publishing

479

Open access, open data, open source, and other open scholarship practices are growing in popularity and necessity. However, widespread adoption of these practices has not yet been achieved. One reason is that researchers are uncertain about how sharing their work will affect their careers. We review literature demonstrating that open research is associated with increases in citations, media attention, potential collaborators, job opportunities, and funding opportunities. These findings are evidence that open research practices bring significant benefits to researchers relative to more traditional closed practices.

Concepts: Scientific method, Academic publishing, Research, Open source, Open content, Open Notebook Science, Open research, Open Data

401

The relationship between traditional metrics of research impact (e.g., number of citations) and alternative metrics (altmetrics) such as Twitter activity is of great interest but remains imprecisely quantified. We used generalized linear mixed modeling to estimate the relative effects of Twitter activity, journal impact factor, and time since publication on Web of Science citation rates of 1,599 primary research articles from 20 ecology journals published from 2012 to 2014. We found a strong positive relationship between Twitter activity (i.e., the number of unique tweets about an article) and number of citations. Twitter activity was a more important predictor of citation rates than the 5-year journal impact factor. Moreover, Twitter activity was not driven by journal impact factor; the ‘highest-impact’ journals were not necessarily the most discussed online. The effect of Twitter activity was only about a fifth as strong as that of time since publication; accounting for this confounding factor was critical for estimating the true effects of Twitter use. Articles in impactful journals can become heavily cited, but articles in journals with lower impact factors can generate considerable Twitter activity and also become heavily cited. Authors may benefit from establishing a strong social media presence, but should not expect research to become highly cited solely through social media promotion. Our research demonstrates that altmetrics and traditional metrics can be closely related, but are not identical. We suggest that both altmetrics and traditional citation rates can be useful metrics of research impact.
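
As a rough illustration of the modeling approach described above, the sketch below fits a linear mixed model on log-transformed citation counts with a random intercept per journal, using statsmodels. This is a simplified stand-in for the paper's generalized linear mixed model (statsmodels has no frequentist Poisson GLMM), and the input file and column names are hypothetical.

```python
# Illustrative sketch, not the authors' code: a linear mixed model on
# log-transformed citation counts approximating the paper's GLMM.
# The file and the column names (citations, tweets, impact_factor,
# days_since_pub, journal) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

articles = pd.read_csv("ecology_articles.csv")  # hypothetical input file
articles["log_citations"] = np.log1p(articles["citations"])

# A random intercept per journal captures between-journal variation,
# analogous to the grouping structure in the original model.
model = smf.mixedlm(
    "log_citations ~ tweets + impact_factor + days_since_pub",
    data=articles,
    groups=articles["journal"],
)
result = model.fit()
print(result.summary())
```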

Concepts: Academic publishing, Nature, Impact factor

391

Objective: To examine changes in the representation of women among first authors of original research published in high impact general medical journals from 1994 to 2014, and to investigate differences between journals.

Concepts: Scientific method, Observational study, Academic publishing, Research, Woman, Author

327

Background. Attribution to the original contributor upon reuse of published data is important both as a reward for data creators and to document the provenance of research findings. Previous studies have found that papers with publicly available datasets receive a higher number of citations than similar studies without available data. However, few previous analyses have had the statistical power to control for the many variables known to predict citation rate, which has led to uncertain estimates of the “citation benefit”. Furthermore, little is known about patterns in data reuse over time and across datasets.

Method and Results. Here, we look at citation rates while controlling for many known citation predictors and investigate the variability of data reuse. In a multivariate regression on 10,555 studies that created gene expression microarray data, we found that studies that made data available in a public repository received 9% (95% confidence interval: 5% to 13%) more citations than similar studies for which the data was not made available. Date of publication, journal impact factor, open access status, number of authors, first and last author publication history, corresponding author country, institution citation history, and study topic were included as covariates. The citation benefit varied with date of dataset deposition: a citation benefit was most clear for papers published in 2004 and 2005, at about 30%. Authors published most papers using their own datasets within two years of their first publication on the dataset, whereas data reuse papers published by third-party investigators continued to accumulate for at least six years. To study patterns of data reuse directly, we compiled 9,724 instances of third party data reuse via mention of GEO or ArrayExpress accession numbers in the full text of papers. The level of third-party data use was high: for 100 datasets deposited in year 0, we estimated that 40 papers in PubMed reused a dataset by year 2, 100 by year 4, and more than 150 data reuse papers had been published by year 5. Data reuse was distributed across a broad base of datasets: a very conservative estimate found that 20% of the datasets deposited between 2003 and 2007 had been reused at least once by third parties.

Conclusion. After accounting for other factors affecting citation rate, we find a robust citation benefit from open data, although a smaller one than previously reported. We conclude there is a direct effect of third-party data reuse that persists for years beyond the time when researchers have published most of the papers reusing their own data. Other factors that may also contribute to the citation benefit are considered. We further conclude that, at least for gene expression microarray data, a substantial fraction of archived datasets are reused, and that the intensity of dataset reuse has been steadily increasing since 2003.
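
To make the headline number concrete, here is a minimal sketch of how such a "citation benefit" can be estimated with a negative binomial regression and read off the log-link coefficient. It is not the study's actual pipeline; the input file, the column names, and the reduced covariate set are assumptions.

```python
# Illustrative sketch, not the study's actual analysis: a negative
# binomial regression of citation counts on a 0/1 data-availability
# indicator plus a few of the covariates named in the abstract.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

studies = pd.read_csv("microarray_studies.csv")  # hypothetical input file

model = smf.glm(
    "citations ~ data_available + impact_factor + n_authors + pub_year",
    data=studies,
    family=sm.families.NegativeBinomial(),
)
result = model.fit()

# On the log link, exp(coefficient) - 1 is the proportional change in
# expected citations associated with public data availability.
beta = result.params["data_available"]
lo, hi = result.conf_int().loc["data_available"]
print(f"citation benefit: {100 * (np.exp(beta) - 1):.1f}% "
      f"(95% CI {100 * (np.exp(lo) - 1):.1f}% to {100 * (np.exp(hi) - 1):.1f}%)")
```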

Concepts: Statistics, Academic publishing, Data, Data set, DNA microarray, Reuse, Recycling, Remanufacturing

311

We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness (i.e., publication record and career success). Here, we develop an optimality model that predicts the most rational research strategy, in terms of the proportion of research effort spent on seeking novel results rather than on confirmatory studies, and the amount of research effort per exploratory study. We show that, for parameter values derived from the scientific literature, researchers acting to maximise their fitness should spend most of their effort seeking novel results and conduct small studies that have only 10%-40% statistical power. As a result, half of the studies they publish will report erroneous conclusions. Current incentive structures are in conflict with maximising the scientific value of research; we suggest ways that the scientific ecosystem could be improved.
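
The claim that half of published studies would report erroneous conclusions follows from simple arithmetic on statistical power and prior odds, sketched below. This is an illustration of that logic, not the authors' full optimality model, and all parameter values are assumptions.

```python
# Illustrative arithmetic, not the authors' optimality model: the share
# of significant (hence publishable) findings that are false positives,
# given power, a significance threshold, and the prior probability that
# a tested hypothesis is true. All parameter values are assumptions.
def false_finding_rate(power: float, alpha: float = 0.05,
                       prior_true: float = 0.1) -> float:
    """Share of significant results that are false positives."""
    true_positives = power * prior_true
    false_positives = alpha * (1 - prior_true)
    return false_positives / (true_positives + false_positives)

# With 10-40% power, roughly half or more of significant findings are
# erroneous under these assumptions, matching the abstract's claim.
for power in (0.1, 0.2, 0.4, 0.8):
    print(f"power={power:.0%}: {false_finding_rate(power):.0%} of "
          f"published positives are false")
```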

Concepts: Scientific method, Mathematics, Academic publishing, Science, Research, Literature, Incentive, Exploratory research

304

The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlations between assessor scores, and between assessor score and the number of citations, are weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
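
A minimal sketch of the "control for journal bias" step: correlate two assessors' scores after residualizing each on journal impact factor, i.e., a partial correlation. The synthetic data below, in which both scores track impact factor rather than any shared judgment of merit, is purely illustrative.

```python
# Illustrative sketch with synthetic data, not the paper's analysis:
# a partial correlation between two assessors' scores, controlling
# for the impact factor of the journal each paper appeared in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x ~ z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y ~ z
    return stats.pearsonr(rx, ry)[0]

# Synthetic scores that both follow impact factor, not shared merit:
# the raw correlation is sizeable, the partial correlation near zero.
impact = rng.uniform(1, 30, 500)
score_a = 0.3 * impact + rng.normal(0, 2, 500)
score_b = 0.3 * impact + rng.normal(0, 2, 500)

print("raw correlation:    ", stats.pearsonr(score_a, score_b)[0])
print("partial correlation:", partial_corr(score_a, score_b, impact))
```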

Concepts: Scientific method, Academic publishing, Assessment, Psychometrics, Peer review, Impact factor, Scientific journal, Open access

291

In the past few years there has been an ongoing debate as to whether the proliferation of open access (OA) publishing would damage the peer review system and put the quality of scientific journal publishing at risk. Our aim was to inform this debate by comparing the scientific impact of OA journals with subscription journals, controlling for journal age, the country of the publisher, discipline and (for OA publishers) their business model.

Concepts: Academic publishing, Peer review, Scientific journal, Publishing, Open access

281

The existence of ‘Learning Styles’ is a common ‘neuromyth’, and their use in all forms of education has been thoroughly and repeatedly discredited in the research literature. However, anecdotal evidence suggests that their use remains widespread. This perspective article is an attempt to understand if and why the myth of Learning Styles persists. I have done this by analyzing the current research literature to capture the picture that an educator would encounter were they to search for “Learning Styles” with the intent of determining whether the research evidence supported their use. The overwhelming majority (89%) of recent research papers, listed in the ERIC and PubMed research databases, implicitly or directly endorse the use of Learning Styles in Higher Education. These papers are dominated by the VAK and Kolb Learning Styles inventories. The presence of these papers in the pedagogical literature demonstrates that an educator, attempting to take an evidence-based approach to education, would be presented with a strong yet misleading message that the use of Learning Styles is endorsed by the current research literature. This has potentially negative consequences for students and for the field of education research.

Concepts: Education, Academic publishing, Educational psychology, Higher education, Teacher, Evidence, Anecdotal evidence, Learning styles

264

The number of retracted scientific publications has risen sharply, but it is unclear whether this reflects an increase in publication of flawed articles or an increase in the rate at which flawed articles are withdrawn.

Concepts: Academic publishing, Publication

259

Despite their recognized limitations, bibliometric assessments of scientific productivity have been widely adopted. We describe here an improved method to quantify the influence of a research article by making novel use of its co-citation network to field-normalize the number of citations it has received. Article citation rates are divided by an expected citation rate derived from the performance of articles in the same field and benchmarked to a peer comparison group. The resulting Relative Citation Ratio is article-level and field-independent, and provides an alternative to the invalid practice of using journal impact factors to identify influential papers. To illustrate one application of our method, we analyzed 88,835 articles published between 2003 and 2010 and found that the National Institutes of Health awardees who authored those papers occupy relatively stable positions of influence across all disciplines. We demonstrate that the values generated by this method strongly correlate with the opinions of subject matter experts in biomedical research, and suggest that the same approach should be generally applicable to articles published in all areas of science. A beta version of iCite, our web tool for calculating Relative Citation Ratios of articles listed in PubMed, is available at https://icite.od.nih.gov.
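
A much-simplified sketch of the Relative Citation Ratio idea: an article's citations per year divided by an expected rate derived from its co-citation network. The plain mean and the example numbers below are assumptions; the real method benchmarks the expectation against a peer comparison group, as described in the iCite paper.

```python
# Simplified sketch of the Relative Citation Ratio concept, not the
# full iCite method: article citation rate over an expected rate built
# from the citation rates of fields in its co-citation network. The
# use of a plain mean and the example inputs are assumptions.
from statistics import mean

def relative_citation_ratio(citations: int, years_since_pub: float,
                            cocited_field_rates: list[float]) -> float:
    """Article citation rate / expected rate of its co-citation field."""
    article_rate = citations / years_since_pub
    expected_rate = mean(cocited_field_rates)  # field-normalized benchmark
    return article_rate / expected_rate

# Hypothetical example: 120 citations over 6 years against co-cited
# fields averaging 5 citations/year gives an RCR of 4.0.
print(relative_citation_ratio(120, 6, [4.0, 5.5, 5.0, 5.5]))
```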

Concepts: Mathematics, Academic publishing, Research, Impact factor, National Institutes of Health, Division, Citation, Quotient