Objective To examine how poor reporting and inadequate methods for key methodological features in randomised controlled trials (RCTs) have changed over the past three decades.
Design Mapping of trials included in Cochrane reviews.
Data sources Data from RCTs included in all Cochrane reviews published between March 2011 and September 2014 that reported an evaluation of the Cochrane risk of bias items: sequence generation, allocation concealment, blinding, and incomplete outcome data.
Data extraction For each RCT, we extracted the consensus risk of bias rating made by the review authors and identified the primary reference to extract the publication year and journal. We matched journal names with Journal Citation Reports to obtain 2014 impact factors.
Main outcome measures We considered the proportions of trials rated by review authors as at unclear and high risk of bias as surrogates for poor reporting and inadequate methods, respectively.
Results We analysed 20 920 RCTs (from 2001 reviews) published in 3136 journals. The proportion of trials with unclear risk of bias was 48.7% for sequence generation and 57.5% for allocation concealment; the proportion at high risk of bias was 4.0% and 7.2%, respectively. For blinding and incomplete outcome data, 30.6% and 24.7% of trials were at unclear risk and 33.1% and 17.1% were at high risk, respectively. A higher journal impact factor was associated with a lower proportion of trials at unclear or high risk of bias. The proportion of trials at unclear risk of bias decreased over time, especially for sequence generation, which fell from 69.1% in 1986-1990 to 31.2% in 2011-14, and for allocation concealment (70.1% to 44.6%). After excluding trials at unclear risk of bias, the use of inadequate methods also decreased over time: from 14.8% to 4.6% for sequence generation and from 32.7% to 11.6% for allocation concealment.
Conclusions Poor reporting and inadequate methods have decreased over time, especially for sequence generation and allocation concealment, but more could be done, especially in lower impact factor journals.
The Journal Impact Factor (JIF) is a single citation metric that is widely employed for ranking journals and choosing target journals, but it is also misused as a proxy for the quality of individual articles and the academic achievements of authors. This article analyzes Scopus-based publication activity on the JIF and overviews some of the numerous misuses of the JIF, global initiatives to overcome the 'obsession' with impact factors, and emerging strategies to revise the concept of scholarly impact. The growing number of articles on the JIF, most of which are in English, reflects the interest of experts in journal editing and scientometrics in its uses, misuses, and options for overcoming related problems. Solely displaying JIF values on journal websites is criticized by experts, as these average metrics do not reflect the skewness of the citation distribution of individual articles. Emerging strategies suggest complementing the JIFs with citation plots and alternative metrics reflecting the use of individual articles in terms of downloads and the distribution of related information through social media and networking platforms. It has also been proposed to revise the original formula of the JIF calculation and to embrace the concept of the impact and importance of individual articles. The latter depends largely on the ethical soundness of journal instructions, proper editing and structuring of articles, efforts to promote related information through social media, and endorsements by professional societies.
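For readers unfamiliar with how the original formula mentioned above works, the standard two-year JIF can be sketched as follows. This is a simplified illustration with hypothetical numbers; the real calculation relies on Journal Citation Reports data and specific rules about which items count as "citable".

```python
def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y = citations received in year Y to items published
    in years Y-1 and Y-2, divided by the number of citable items
    published in years Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical example: 600 citations received in 2014 to articles
# published in 2012-13, and 200 citable items published in 2012-13.
jif_2014 = journal_impact_factor(600, 200)  # → 3.0
```

Because the numerator is a mean over a highly skewed citation distribution, a handful of heavily cited articles can dominate this value, which is the core of the criticism summarised above.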
Many fields face an increasing prevalence of multi-authorship, and this poses challenges in assessing citation metrics. Here, we explore multiple citation indicators that address total impact (number of citations, Hirsch H index [H]), co-authorship adjustment (Schreiber Hm index [Hm]), and author order (total citations to papers as single; single or first; or single, first, or last author). We demonstrate the correlation patterns between these indicators across 84,116 scientists (those among the top 30,000 for impact in a single year in at least one of these indicators) and separately across 12 scientific fields. Correlation patterns vary across these 12 fields. In physics, total citations are highly negatively correlated with indicators of co-authorship adjustment and of author order, while in other sciences the negative correlation is seen only between total citation impact and citations to papers as single author. We propose a composite score that sums standardized values of these six log-transformed indicators. Of the 1,000 top-ranked scientists by the composite score, only 322 are in the top 1,000 based on total citations. Many Nobel laureates and other extremely influential scientists rank among the top 1,000 with the composite indicator but would rank much lower based on total citations. Conversely, many of the top 1,000 authors by total citations have had no single-, first-, or last-authored cited paper. More Nobel laureates of 2011-2015 are among the top authors when authors are ranked by the composite score than by total citations, H index, or Hm index; 40/47 of these laureates are among the top 30,000 by at least one of the six indicators. We also explore the sensitivity of indicators to self-citation and alphabetical ordering of authors in papers across different scientific fields.
Multiple indicators and their composite may give a more comprehensive picture of impact, although no citation indicator, single or composite, can be expected to select all the best scientists.
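The composite score described above (a sum of standardized, log-transformed indicators) can be sketched as follows. This is a minimal illustration of the idea, not the authors' exact procedure; the paper's precise log transform and standardization may differ.

```python
import math
from statistics import mean, stdev

def composite_score(indicator_matrix):
    """Compute a composite score per scientist as the sum of z-scored,
    log-transformed indicators.

    indicator_matrix: one row per scientist, one column per indicator
    (e.g. total citations, H, Hm, and three author-order counts).
    log(v + 1) is used so zero-valued indicators remain defined."""
    n_ind = len(indicator_matrix[0])
    logged = [[math.log(v + 1) for v in row] for row in indicator_matrix]
    cols = list(zip(*logged))
    mus = [mean(c) for c in cols]   # per-indicator mean of log values
    sds = [stdev(c) for c in cols]  # per-indicator standard deviation
    return [sum((row[j] - mus[j]) / sds[j] for j in range(n_ind))
            for row in logged]

# Hypothetical indicator values for three scientists (six indicators each)
scores = composite_score([[100, 10, 5, 50, 60, 40],
                          [10, 2, 1, 5, 6, 4],
                          [50, 5, 3, 25, 30, 20]])
```

Because each indicator is standardized before summing, no single indicator (such as raw total citations) dominates the composite, which is what allows single-author impact to pull heavily multi-authored records down the ranking.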
In this paper we present the first comprehensive bibliometric analysis of eleven open-access mega-journals (OAMJs). OAMJs are a relatively recent phenomenon, and have been characterised as having four key characteristics: large size; broad disciplinary scope; a Gold-OA business model; and a peer-review policy that seeks to determine only the scientific soundness of the research rather than evaluate the novelty or significance of the work. Our investigation focuses on four key modes of analysis: journal outputs (the number of articles published and changes in output over time); OAMJ author characteristics (nationalities and institutional affiliations); subject areas (the disciplinary scope of OAMJs, and variations in sub-disciplinary output); and citation profiles (the citation distributions of each OAMJ, and the impact of citing journals). We found that while the total output of the eleven mega-journals grew by 14.9% between 2014 and 2015, this growth is largely attributable to the increased output of Scientific Reports and Medicine. We also found substantial variation in the geographical distribution of authors. Several journals have a relatively high proportion of Chinese authors, and we suggest this may be linked to these journals' high Journal Impact Factors (JIFs). The mega-journals were also found to vary in subject scope, with several journals publishing disproportionately high numbers of articles in certain sub-disciplines. Our citation analysis offers support for Björk & Catani's suggestion that OAMJs' citation distributions can be similar to those of traditional journals, while noting considerable variation in citation rates across the eleven titles.
We conclude that while the OAMJ term is useful as a means of grouping journals which share a set of key characteristics, there is no such thing as a “typical” mega-journal, and we suggest several areas for additional research that might help us better understand the current and future role of OAMJs in scholarly communication.
The journal impact factor (JIF), and how best to rate the performance of a journal and the articles it contains, are areas of great debate. The aim of this paper was to assess various methods of ranking journal quality for mental health nursing journals, and to list the top 10 articles that have received the most citations to date. Seven mental health nursing journals were chosen for the analysis of citations they received in 2010, together with their current impact factors from two sources and other data for ranking purposes. There was very little difference among the top four mental health nursing journals and their overall rankings when various bibliometric indicators were combined. That said, the International Journal of Mental Health Nursing is currently the highest ranked mental health nursing journal based on JIF, but publishes fewer articles per year than the other journals. Overall, very few articles received 50 or more citations. This study shows that researchers need to consider more than one ranking method when deciding where to send or publish their research.
How to quantify the impact of a researcher's or an institution's body of work is a matter of increasing importance to scientists, funding agencies, and hiring committees. The use of bibliometric indicators, such as the h-index or the Journal Impact Factor, has become widespread despite their known limitations. We argue that most existing bibliometric indicators are inconsistent, biased, and, worst of all, susceptible to manipulation. Here, we pursue a principled approach to the development of an indicator that quantifies the scientific impact of both individual researchers and research institutions, grounded in the functional form of the distribution of the asymptotic number of citations. We validate our approach using the publication records of 1,283 researchers from seven scientific and engineering disciplines and the chemistry departments at the 106 U.S. research institutions classified as "very high research activity". Our approach has three distinct advantages. First, it accurately captures the overall scientific impact of researchers at all career stages, as measured by asymptotic citation counts. Second, unlike other measures, our indicator is resistant to manipulation and rewards publication quality over quantity. Third, our approach captures the time evolution of the scientific impact of research institutions.
There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank, and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50), Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25), and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68-0.84 Spearman's ρ correlation between the two ranking datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature and Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
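The idea of a composite rank with an uncertainty window can be sketched as below. This is a simplified stand-in for the paper's κ-resampling: here each journal's composite rank is the mean of its rank under each index, and uncertainty is estimated by bootstrap-resampling the indices with replacement. The journal names and index values are hypothetical.

```python
import random
from statistics import mean

def composite_journal_rank(metrics, n_boot=1000, seed=42):
    """metrics: {journal: [idx1, ..., idxK]} where higher values are
    better for every index (e.g. Impact Factor, Immediacy Index, SNIP,
    SJR, Google 5-year h-index). Returns {journal: (mean_rank, lo, hi)},
    where lo/hi bound the bootstrap distribution of composite ranks."""
    rng = random.Random(seed)
    journals = list(metrics)
    k = len(next(iter(metrics.values())))

    # Rank of each journal under each individual index (1 = best).
    per_index_rank = {
        j: [sorted(journals, key=lambda x: -metrics[x][m]).index(j) + 1
            for m in range(k)]
        for j in journals
    }

    # Resample the indices with replacement to get rank uncertainty.
    boots = {j: [] for j in journals}
    for _ in range(n_boot):
        chosen = [rng.randrange(k) for _ in range(k)]
        for j in journals:
            boots[j].append(mean(per_index_rank[j][m] for m in chosen))
    return {j: (mean(per_index_rank[j]), min(boots[j]), max(boots[j]))
            for j in journals}

# Hypothetical index values for three journals across five indices.
ranks = composite_journal_rank({"A": [10, 9, 8, 10, 9],
                                "B": [5, 6, 4, 5, 5],
                                "C": [1, 2, 1, 2, 1]}, n_boot=200)
```

When the indices disagree about a journal's position, its bootstrap window widens, which is the property the paper uses to express relative rank uncertainty.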
Bibliometric indicators increasingly affect the careers, funding, and reputation of individuals, their institutions, and journals themselves. In contrast to author self-citations, little is known about the kinetics of journal self-citations. Here we hypothesized that they may show a generalizable pattern within particular research fields or across multiple fields. We thus analyzed self-cites to 60 journals from three research fields (multidisciplinary sciences, parasitology, and information science). We also hypothesized that the kinetics of journal self-citations and of citations received from other journals of the same publisher may differ from those of foreign citations. We analyzed the journals published by the American Association for the Advancement of Science, Nature Publishing Group, and Editura Academiei Române. We found that although the kinetics of journal self-cites is generally faster than that of foreign cites, it shows some field-specific characteristics. In information science journals in particular, the initial increase in the share of journal self-citations during post-publication year 0 was completely absent. Self-promoting journal self-citations in top-tier journals have rather indirect, and negligible direct, effects on bibliometric indicators, affecting just the immediacy index and marginally increasing the impact factor itself, as long as the affected journals are well established in their fields. In contrast, other forms of journal self-citation and citation stacking may severely affect the impact factor or other citation-based indices.
We identified here a network consisting of three Romanian physics journals, Proceedings of the Romanian Academy, Series A, Romanian Journal of Physics, and Romanian Reports in Physics, which displayed a low to moderate ratio of journal self-citations but recently multiplied their impact factors, and which were mutually responsible for 55.9%, 64.7%, and 63.3% of citations within the impact factor calculation window to the three journals, respectively. They received almost no network self-cites prior to the impact factor calculation window, and their network self-cites decreased sharply after it. Journal self-citations and citation stacking require increased attention and elimination from citation indices.
We have generated a list of highly influential biomedical researchers based on Scopus citation data from the period 1996-2011. Of the 15,153,100 author identifiers in Scopus, approximately 1% (n=149,655) have an h-index >=20. Of those, we selected 532 authors who belonged to the 400 with the highest total citation count (>=25,142 citations) and/or the 400 with the highest h-index (>=76). From these, we selected the top 400 living core biomedical researchers based on a normalized score combining total citations and h-index. Another 62 authors whose focus is outside biomedicine had a normalized score at least as high as that of the 400th core biomedical researcher. We provide information on the profiles of these most influential authors, including the most common Medical Subject Heading terms in their articles that are also specific to their work, the most common journals where they publish, the number of papers with over 100 citations that they have published as first/single, last, or middle authors, and an impact score adjusted for authorship positions, given that crude citation indices and authorship positions are almost totally orthogonal. We also show for each researcher the distribution of their papers across four main levels (basic to applied) of research. We discuss technical issues, limitations and caveats, comparisons against other lists of highly cited researchers, and potential uses of this resource.
Systematic reviews are important for informing clinical practice and health policy. The aim of this study was to examine the bibliometrics of systematic reviews and to determine the amount of variance in citations predicted by the journal impact factor (JIF) alone and combined with several other characteristics.