Transparency in authors' contributions and responsibilities to promote integrity in scientific publication
- Proceedings of the National Academy of Sciences of the United States of America
- Published over 1 year ago
In keeping with the growing movement in scientific publishing toward transparency in data and methods, we propose changes to journal authorship policies and procedures to provide insight into which author is responsible for which contributions, better assurance that the author list is complete, and clearly articulated standards to justify earning authorship credit. To accomplish these goals, we recommend that journals adopt common and transparent standards for authorship, outline responsibilities for corresponding authors, adopt the Contributor Roles Taxonomy (CRediT) (docs.casrai.org/CRediT) methodology for attributing contributions, include this information in article metadata, and require authors to use the ORCID persistent digital identifier (https://orcid.org). Additionally, we recommend that universities and research institutions articulate expectations about author roles and responsibilities to provide a point of common understanding for discussion of authorship across research teams. Furthermore, we propose that funding agencies adopt the ORCID identifier and accept the CRediT taxonomy. We encourage scientific societies to further authorship transparency by signing on to these recommendations and promoting them through their meetings and publications programs.
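As a concrete illustration of the metadata recommendation above, here is a minimal sketch of how CRediT roles and ORCID iDs might be carried alongside an author list. The field names, names, and ORCID values are hypothetical, not a formal schema; production systems typically carry this information in XML formats such as JATS or in Crossref deposits.

```python
import json

# Illustrative article metadata combining CRediT roles and ORCID iDs.
# All names, iDs, and field names here are invented for illustration;
# role labels follow the published CRediT taxonomy (docs.casrai.org/CRediT).
article_metadata = {
    "title": "Example article",
    "contributors": [
        {
            "name": "A. Researcher",
            "orcid": "https://orcid.org/0000-0000-0000-0000",  # hypothetical iD
            "corresponding": True,
            "credit_roles": ["Conceptualization", "Writing - original draft"],
        },
        {
            "name": "B. Researcher",
            "orcid": "https://orcid.org/0000-0000-0000-0001",  # hypothetical iD
            "corresponding": False,
            "credit_roles": ["Formal analysis", "Writing - review & editing"],
        },
    ],
}

print(json.dumps(article_metadata, indent=2))
```

Recording roles per contributor in this machine-readable way is what allows the information to travel with the article metadata rather than living only in a free-text contributions paragraph.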
Most scientific research is performed by teams, and for a long time, observers have inferred individual team members' contributions by interpreting author order on published articles. In response to increasing concerns about this approach, journals are adopting policies that require the disclosure of individual authors' contributions. However, it is not clear whether and how these disclosures improve upon the conventional approach. Moreover, there is little evidence on how contribution statements are written and how they are used by readers. We begin to address these questions in two studies. Guided by a conceptual model, Study 1 examines the relationship between author order and contribution statements on more than 12,000 articles to understand what information is provided by each. This analysis quantifies the risk of error when inferring contributions from author order and shows how this risk increases with team size and for certain types of authors. At the same time, the analysis suggests that some components of the value of contributions are reflected in author order but not in currently used contribution statements. Complementing the bibliometric analysis, Study 2 analyzes survey data from more than 6,000 corresponding authors to examine how contribution statements are written and used. This analysis highlights important differences between fields and between senior and junior scientists, as well as strongly diverging views about the benefits and limitations of contribution statements. On the basis of both studies, we highlight important avenues for future research and consider implications for a broad range of stakeholders.
Introduction. Researchers' productivity is usually measured in terms of their publication output. A minimum number of publications is required for some medical qualifications and professional appointments. However, authoring an unfeasibly large number of publications might indicate disregard of authorship criteria or even fraud. We therefore examined publication patterns of highly prolific authors in 4 medical specialties. Methods. We analysed Medline publications from 2008 to 2012 using bespoke software to disambiguate individual authors, focusing on 4 discrete topics (to further reduce the risk of combining publications from authors with the same name and affiliation). This enabled us to assess the number and type of publications per author per year. Results. While 99% of authors were listed on fewer than 20 publications in the 5-year period, 24 authors in the chosen areas were listed on at least 25 publications in a single year (i.e., >1 publication per 10 working days). Types of publication by the prolific authors varied but included substantial numbers of original research papers (not simply editorials or letters). Conclusions. Institutions and funders should be alert to unfeasibly prolific authors when measuring and creating incentives for researcher productivity.
Authors may choose to work with professional medical writers when writing up their research for publication. We examined the relationship between medical writing support and the quality and timeliness of reporting of the results of randomised controlled trials (RCTs).
Within the structural and grammatical bounds of a common language, all authors develop their own distinctive writing styles. Whether the relative occurrence of common words can be measured to produce accurate models of authorship is of particular interest. This work introduces a new score that helps to highlight such variations in word occurrence, and is applied to produce models of authorship of a large group of plays from the Shakespearean era.
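The kind of word-occurrence modelling described above can be sketched in a few lines. This is a generic function-word frequency comparison under a nearest-profile rule, not the specific score the paper introduces; the candidate texts and author names are invented for illustration.

```python
from collections import Counter

# Common function words whose relative frequency tends to vary by author.
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "is"]

def word_profile(text):
    """Relative frequency of each function word in a text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Mean absolute difference between two frequency profiles."""
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def attribute(test_text, candidates):
    """Assign the text to the candidate author with the closest profile."""
    test = word_profile(test_text)
    return min(candidates,
               key=lambda name: distance(test, word_profile(candidates[name])))

# Invented mini-corpora standing in for each author's known writing.
candidates = {
    "author_a": "the cat sat on the mat and the dog lay in the sun",
    "author_b": "to be or not to be that is a question to ponder",
}
print(attribute("the bird sat in the tree and the fox ran", candidates))
```

Real stylometric work uses far longer texts, many more marker words, and a more principled score, but the pipeline — build per-author frequency profiles, then compare a disputed text against each — is the same shape.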
It is often argued that female researchers publish on average less than male researchers do, but that male- and female-authored papers have equal impact. In this paper we try to better understand this phenomenon by (i) comparing the share of male and female researchers within different productivity classes, and (ii) comparing productivity while controlling for a series of relevant covariates. The study is based on a disambiguated Swedish author dataset, consisting of 47,000 researchers and their WoS publications during the period 2008-2011, with citations until 2015. As the analysis shows, quantity does make a difference for impact, for male and female researchers alike, but women are vastly underrepresented in the group of the most productive researchers. We discuss and test several possible explanations of this finding, using data on personal characteristics from several Swedish universities. Gender differences in age, authorship position, and academic rank explain a substantial part of the productivity differences.
Various financial and non-financial conflicts of interest have been shown to influence the reporting of research findings, particularly in clinical medicine. In this study, we examine whether this extends to prognostic instruments designed to assess violence risk. Such instruments have increasingly become a routine part of clinical practice in mental health and criminal justice settings. The present meta-analysis investigated whether an authorship effect exists in the violence risk assessment literature by comparing predictive accuracy outcomes in studies where the individuals who designed these instruments were study authors with outcomes from independent investigations. A systematic search from 1966 to 2011 was conducted using PsycINFO, EMBASE, MEDLINE, and US National Criminal Justice Reference Service Abstracts to identify predictive validity studies for the nine most commonly used risk assessment tools. Tabular data from 83 studies comprising 104 samples were collected, information on two-thirds of which was received directly from study authors for the review. Random effects subgroup analysis and meta-regression were used to explore evidence of an authorship effect. We found a substantial and statistically significant authorship effect. Overall, studies authored by tool designers reported predictive validity findings around twice as high as those of investigations reported by independent authors (DOR = 6.22 [95% CI = 4.68-8.26] in designers' studies vs. DOR = 3.08 [95% CI = 2.45-3.88] in independent studies). As there was evidence of an authorship effect, we also examined disclosure rates. None of the 25 studies where tool designers or translators were also study authors published a conflict of interest statement to that effect, despite a number of journals requiring that potential conflicts be disclosed. The field of risk assessment would benefit from routine disclosure and registration of research studies.
The extent to which similar conflicts of interest exist among those developing risk assessment guidelines and providing expert testimony needs clarification.
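The abstract's "around two times higher" characterisation follows directly from the two pooled diagnostic odds ratios it reports:

```python
# Pooled diagnostic odds ratios (DORs) reported in the meta-analysis.
designer_dor = 6.22     # studies authored by the tool designers
independent_dor = 3.08  # independent investigations

# The ratio of the two pooled estimates is what grounds the
# "around two times higher" claim in the abstract.
ratio = designer_dor / independent_dor
print(round(ratio, 2))  # 2.02
```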
Participants in medical forums often reveal personal health information about themselves in their online postings. To feel comfortable revealing sensitive personal health information, some participants may hide their identity by posting anonymously. They can do this by using fake identities, nicknames, or pseudonyms that cannot readily be traced back to them. However, individual writing styles have unique features, and it may be possible to determine the true identity of an anonymous user through authorship attribution analysis. Although there has been previous work on the authorship attribution problem, there has been a dearth of research on automated authorship attribution on medical forums. The focus of this paper is to demonstrate that character-based author attribution works better than word-based methods in medical forums.
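Character-based attribution of the kind this abstract evaluates can be sketched with character trigram profiles, which capture sub-word habits (abbreviations, spacing, punctuation) that word-based methods miss in informal forum text. The overlap measure and the forum posts below are toy illustrations, not the authors' method or data.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram frequency profile (spaces and punctuation included)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(p, q):
    """Total count of n-grams shared between two profiles (min overlap)."""
    return sum(min(p[g], q[g]) for g in p if g in q)

def attribute_post(post, known_posts):
    """Attribute an anonymous post to the author with the most shared n-grams."""
    profile = char_ngrams(post)
    return max(known_posts,
               key=lambda a: similarity(profile, char_ngrams(known_posts[a])))

# Invented forum posts: user_1 writes informally, user_2 formally.
known_posts = {
    "user_1": "ive been takin this med for weeks now n it helps loads",
    "user_2": "i have been taking this medication for several weeks; it is helpful.",
}
print(attribute_post("ive been feelin better n the med helps loads", known_posts))
```

Because trigrams span word boundaries, habits like dropping apostrophes ("ive") or writing "n" for "and" leave a stronger signature here than in a vocabulary-level comparison, which is the intuition behind preferring character-based features on short, informal posts.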
- Journal of child psychology and psychiatry, and allied disciplines
- Published over 2 years ago
Numerous style guides, including those issued by the American Psychological and the American Psychiatric Associations, prescribe that writers use only person-first language so that nouns referring to persons (e.g. children) always precede phrases referring to characteristics (e.g. children with typical development). Person-first language is based on the premise that everyone, regardless of whether they have a disability, is a person first, and therefore everyone should be referred to with person-first language. However, my analysis of scholarly writing suggests that person-first language is used more frequently to refer to children with disabilities than to refer to children without disabilities; person-first language is more frequently used to refer to children with disabilities than adults with disabilities; and person-first language is most frequently used to refer to children with the most stigmatized disabilities. Therefore, the use of person-first language in scholarly writing may actually accentuate stigma rather than attenuate it. Recommendations are offered for language use that may reduce stigma.
Case reports are a time-honored, important, integral, and accepted part of the medical literature. Both the Journal of Medical Case Reports and the Case Report section of BioMed Central Research Notes are committed to case report publication, and each has its own criteria. Journal of Medical Case Reports was the world's first international, PubMed-listed medical journal devoted to publishing case reports from all clinical disciplines and was launched in 2007. The Case Report section of BioMed Central Research Notes was created and began publishing case reports in 2012. Between the two journals, thousands of peer-reviewed case reports have now been published for a worldwide audience. Authors now also have Cases Database, a continually updated, freely accessible database of thousands of medical case reports from multiple publishers. This informal editorial outlines the process and mechanics of how and when to write a case report, and provides a brief look into the editorial process behind each of these complementary journals, along with the author's anecdotes, in the hope of inspiring all authors (both novice and experienced) to write and continue writing case reports of all specialties. Useful hyperlinks are embedded throughout for easy and quick reference to style guidelines for both journals.