Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for ‘edit wars’ surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically “controversial” (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider “noncontroversial” (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.
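The abstract reports edit rates as a geometric mean ± SD of daily counts. As a minimal sketch of how such a summary can be computed, the snippet below log-transforms daily edit counts and back-transforms the mean and spread; the `+1` offset for zero-edit days is our assumption (the paper does not state how zeros were handled), and the toy counts are hypothetical, not the study's data.

```python
import math

def geometric_mean_sd(counts):
    """Geometric mean and geometric SD of daily edit counts.

    Counts of zero are common in edit histories, so we add 1 before
    taking logs and subtract it after back-transforming the mean --
    an illustrative convention, not the paper's documented method.
    """
    logs = [math.log(c + 1) for c in counts]
    n = len(logs)
    mean_log = sum(logs) / n
    var_log = sum((x - mean_log) ** 2 for x in logs) / n
    # Geometric SD is the exponential of the SD of the logs (dimensionless).
    return math.exp(mean_log) - 1, math.exp(math.sqrt(var_log))

# Hypothetical edits-per-day series for one article:
daily_edits = [0, 1, 3, 2, 0, 5, 1]
gm, gsd = geometric_mean_sd(daily_edits)
```

Because the geometric SD is a multiplicative factor, a summary like "1.9 ± 2.7" describes a typical rate of 1.9 edits/day with roughly a 2.7-fold spread, which suits the heavy-tailed activity typical of Wikipedia editing.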
Disagreement and conflict are a fact of social life. However, negative interactions are rarely explicitly declared and recorded and this makes them hard for scientists to study. In an attempt to understand the structural and temporal features of negative interactions in the community, we use complex network methods to analyze patterns in the timing and configuration of reverts of article edits to Wikipedia. We investigate how often and how fast pairs of reverts occur compared to a null model in order to control for patterns that are natural to the content production or are due to the internal rules of Wikipedia. Our results suggest that Wikipedia editors systematically revert the same person, revert back their reverter, and come to defend a reverted editor. We further relate these interactions to the status of the involved editors. Even though the individual reverts might not necessarily be negative social interactions, our analysis points to the existence of certain patterns of negative social dynamics within the community of editors. Some of these patterns have not been previously explored and carry implications for the knowledge collection practice conducted on Wikipedia. Our method can be applied to other large-scale temporal collaboration networks to identify the existence of negative social interactions and other social processes.
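The analysis above compares observed revert patterns ("revert back their reverter") against a null model. As a toy illustration of the idea only (the authors' actual null model also controls for timing and Wikipedia's internal rules), the sketch below counts how often a revert (A, B) is later answered by (B, A) in a time-ordered event list, and builds a simple null distribution by shuffling who reverts whom; all function names and data are hypothetical.

```python
import random

def count_revert_backs(events):
    """Count reverts (a, b) that are later answered by (b, a).

    `events` is a time-ordered list of (reverter, reverted) pairs.
    """
    seen = set()
    count = 0
    for reverter, reverted in events:
        if (reverted, reverter) in seen:
            count += 1
        seen.add((reverter, reverted))
    return count

def null_distribution(events, trials=1000, seed=0):
    """Crude null model: shuffle the reverter labels across events.

    This preserves each editor's activity level but destroys who-reverts-
    whom structure (it can even pair an editor with themselves -- fine
    for a sketch, not for a real analysis).
    """
    rng = random.Random(seed)
    reverters = [a for a, _ in events]
    reverteds = [b for _, b in events]
    stats = []
    for _ in range(trials):
        rng.shuffle(reverters)
        stats.append(count_revert_backs(list(zip(reverters, reverteds))))
    return stats

observed = count_revert_backs([("A", "B"), ("B", "A"), ("C", "D")])
null = null_distribution([("A", "B"), ("B", "A"), ("C", "D")], trials=100)
```

An observed count far above the null distribution would suggest systematic retaliation rather than chance co-occurrence, which is the logic of the comparison described in the abstract.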
The Internet has dramatically expanded citizens' access to and ability to engage with political information. On many websites, any user can contribute and edit “crowd-sourced” information about important political figures. One of the most prominent examples of crowd-sourced information on the Internet is Wikipedia, a free and open encyclopedia created and edited entirely by users, and one of the world’s most accessed websites. While previous studies of crowd-sourced information platforms have found them to be accurate, few have considered biases in what kinds of information are included. We report the results of four randomized field experiments that sought to explore what biases exist in the political articles of this collaborative website. By randomly assigning factually true but either positive or negative and cited or uncited information to the Wikipedia pages of U.S. senators, we uncover substantial evidence of an editorial bias toward positivity on Wikipedia: Negative facts are 36% more likely to be removed by Wikipedia editors than positive facts within 12 hours and 29% more likely within 3 days. Although citations substantially increase an edit’s survival time, the editorial bias toward positivity is not eliminated by inclusion of a citation. We replicate this study on the Wikipedia pages of deceased as well as recently retired but living senators and find no evidence of an editorial bias in either. Our results demonstrate that crowd-sourced information is subject to an editorial bias that favors the politically active.
eLife deputy editor recounts some of her personal experiences as a senior female academic in a male-dominated environment.
PLOS Medicine’s Chief Editor looks back at some highlights in the way the journal has sought to constructively disrupt medical publishing over its history. Please see later in the article for the Editors' Summary.
The Internet has provided us with great opportunities for large-scale collaborative public good projects. Wikipedia is a predominant example of such projects, where conflicts emerge and get resolved through bottom-up mechanisms, leading to the emergence of the largest encyclopedia in human history. Discord arises whenever editors with different opinions try to produce an article reflecting a consensual view. The debates are mainly heated by editors with extreme views. Using a model of common value production, we show that consensus can only be reached if groups with extreme views can actively take part in the discussion and if their views are also represented in the common outcome, at least temporarily. We show that banning problematic editors mostly hinders consensus, as it delays discussion and thus the whole consensus-building process. To validate the model, relevant quantities are measured both in simulations and in Wikipedia, which show satisfactory agreement. We also consider the role of direct communication between editors both in the model and in Wikipedia data (by analyzing the Wikipedia talk pages). While the model suggests that in certain conditions there is an optimal rate of “talking” vs. “editing”, it correctly predicts that in the current settings of Wikipedia, more activity in talk pages is associated with more controversy.
MOTIVATION: Data collection in spreadsheets is ubiquitous, but current solutions lack support for collaborative semantic annotation that would promote shared and interdisciplinary annotation practices, supporting geographically distributed players. RESULTS: OntoMaton is an open source solution that brings ontology look-up and tagging capabilities into a cloud-based collaborative editing environment, harnessing Google Spreadsheets and the NCBO Bioportal Web services. It is a general purpose, format-agnostic tool that may serve as a component of the ISA software suite. OntoMaton can also be used to assist the ontology development process. AVAILABILITY: OntoMaton is freely available from Google widgets under the CPAL open source license; documentation and examples at: https://github.com/ISA-tools/OntoMaton CONTACT: firstname.lastname@example.org.
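OntoMaton performs ontology look-up through the NCBO BioPortal web services. As an illustration of what such a look-up involves, the sketch below constructs a request URL for BioPortal's public REST search endpoint; the endpoint path and parameter names reflect our reading of the public BioPortal API rather than anything specified in this abstract, and the API key and ontology acronym are placeholders.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Public NCBO BioPortal REST search endpoint (assumed; verify against
# the current BioPortal API documentation before use).
BIOPORTAL_SEARCH = "https://data.bioontology.org/search"

def bioportal_search_url(term, api_key, ontologies=None):
    """Build a term look-up URL for the BioPortal search service.

    `ontologies` optionally restricts the search to specific ontology
    acronyms (e.g. ["NCIT"]); `api_key` is the caller's BioPortal key.
    """
    params = {"q": term, "apikey": api_key}
    if ontologies:
        params["ontologies"] = ",".join(ontologies)
    return BIOPORTAL_SEARCH + "?" + urlencode(params)

url = bioportal_search_url("melanoma", "YOUR-API-KEY", ["NCIT"])
```

A tool like OntoMaton would issue such a request from the spreadsheet environment and present the returned candidate terms for the annotator to tag cells with.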
Phylogenetic estimates from published studies can be archived using general platforms like Dryad (Vision, 2010) or TreeBASE (Sanderson et al., 1994). Such services fulfill a crucial role in ensuring transparency and reproducibility in phylogenetic research. However, digital tree data files often require some editing (e.g. rerooting) to improve the accuracy and reusability of the phylogenetic statements. Furthermore, establishing the mapping between tip labels used in a tree and taxa in a single common taxonomy dramatically improves the ability of other researchers to reuse phylogenetic estimates. Because the process of curating a published phylogenetic estimate is not error-free, retaining a full record of the provenance of edits to a tree is crucial for openness, allowing editors to receive credit for their work, and making errors introduced during curation easier to correct.
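Mapping tip labels onto names in a common taxonomy is one of the curation steps described above. A minimal sketch of such a mapping follows, using exact matching after a trivial normalisation and falling back to fuzzy matching; the normalisation rule (underscores to spaces, case-folding), the cutoff, and the example names are illustrative assumptions, not part of any curation platform's actual pipeline.

```python
from difflib import get_close_matches

def map_tip_labels(tip_labels, taxonomy, cutoff=0.8):
    """Propose a mapping from tree tip labels to reference taxonomy names.

    Exact matches (after normalisation) win; otherwise the closest fuzzy
    match above `cutoff` is proposed; labels with no candidate map to
    None and should be flagged for manual curation.
    """
    normalized = {name.casefold(): name for name in taxonomy}
    mapping = {}
    for label in tip_labels:
        key = label.replace("_", " ").casefold()
        if key in normalized:
            mapping[label] = normalized[key]
        else:
            close = get_close_matches(key, normalized, n=1, cutoff=cutoff)
            mapping[label] = normalized[close[0]] if close else None
    return mapping

# Hypothetical tip labels (note the misspelled "Mus_musculis"):
tips = ["Homo_sapiens", "Pan troglodytes", "Mus_musculis"]
taxa = ["Homo sapiens", "Pan troglodytes", "Mus musculus"]
mapping = map_tip_labels(tips, taxa)
```

Recording which mappings were exact, fuzzy, or manual is exactly the kind of edit provenance the passage argues should be retained.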
- BJOG: An International Journal of Obstetrics and Gynaecology
Deputy Editor-in-Chief, Dr John Thorp, discusses his top articles from this issue in an audio podcast available at: https://soundcloud.com/bjog/february-editors-choice-2016.
We trace the legacies of filmed patient narratives that were edited and screened to encourage engagement with a participatory quality improvement project in an acute hospital setting in England. Using Gabriel’s theory of the “narrative contract,” we examine the initial success of the films in establishing common ground for the participatory project and, later, more varied interpretations of the films. Over time, the films were interpreted by staff as useful sources of learning through critical reflection, as dubious (invalid or unreliable) representations of patient experience, or as “closed” items available as auditable evidence of completed quality improvement work. We find these interpretations of the films to be shaped by the effect of social distance, the differential outcomes of project work, and changing organizational agendas. We consider the wider conditions of patient narrative as a form of quality improvement knowledge with immediate potency but fragile or fluid legitimacy over time.