Figures in scientific publications are critically important because they often show the data supporting key findings. Our systematic review of research articles published in top physiology journals (n = 703) suggests that, as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies. Papers rarely included scatterplots, box plots, and histograms that allow readers to critically evaluate continuous data. Most papers presented continuous data in bar and line graphs. This is problematic, as many different data distributions can lead to the same bar or line graph. The full data may suggest different conclusions from the summary statistics. We recommend training investigators in data presentation, encouraging a more complete presentation of data, and changing journal editorial policies. Investigators can quickly make univariate scatterplots for small sample size studies using our Excel templates.
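The authors supply Excel templates; purely as an illustration of the same recommendation outside Excel, here is a minimal matplotlib sketch of a univariate scatterplot for two small groups, with every observation visible alongside the group means. The data values and group names are made up for demonstration.

```python
# Univariate scatterplot for small samples: show every point, not just a bar.
# Data here are simulated purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
groups = {"Control": rng.normal(5.0, 1.0, 8), "Treated": rng.normal(6.2, 1.2, 8)}

fig, ax = plt.subplots()
for i, values in enumerate(groups.values()):
    x = np.full(values.size, float(i)) + rng.uniform(-0.08, 0.08, values.size)  # jitter
    ax.plot(x, values, "o", alpha=0.7)          # every observation stays visible
    ax.hlines(values.mean(), i - 0.2, i + 0.2)  # group mean drawn as a short bar

ax.set_xticks(range(len(groups)), list(groups))
ax.set_ylabel("Measured value (arbitrary units)")
plt.show()
```

Unlike a bar graph, this presentation makes outliers, skew, and bimodality immediately visible to the reader.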
Two rival theories of how humans recognize faces exist: (i) recognition is innate, relying on specialized neocortical circuitry, and (ii) recognition is a learned expertise, relying on general object recognition pathways. Here, we explore whether animals without a neocortex can learn to recognize human faces. Human facial recognition has previously been demonstrated for birds; however, birds are now known to possess neocortex-like structures. Also, with much of the work done in domesticated pigeons, one cannot rule out the possibility that they have developed adaptations for human face recognition. Fish do not appear to possess neocortex-like cells and, given their lack of direct exposure to humans, are unlikely to have evolved any specialized capabilities for human facial recognition. Using a two-alternative forced-choice procedure, we show that archerfish (Toxotes chatareus) can learn to discriminate a large number of human face images (Experiment 1, 44 faces), even after controlling for colour, head-shape and brightness (Experiment 2, 18 faces). This study not only demonstrates that archerfish have impressive pattern discrimination abilities, but also provides evidence that a vertebrate lacking a neocortex and without an evolutionary prerogative to discriminate human faces can nonetheless do so to a high degree of accuracy.
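For readers unfamiliar with the procedure, the chance level in a two-alternative forced-choice task is 50%, so performance is assessed against p = 0.5. A minimal sketch of that logic follows; the trial counts are hypothetical, and the paper's actual analysis may differ.

```python
# Chance-level logic of a two-alternative forced-choice (2AFC) task:
# two options per trial means chance performance is 50%, so accuracy
# is tested against p = 0.5. Counts below are hypothetical.
from scipy.stats import binomtest

correct, trials = 81, 100  # hypothetical session outcome
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"{correct}/{trials} correct; p vs. 50% chance = {result.pvalue:.2e}")
```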
Episodes of Palaeolithic cannibalism have frequently been defined as ‘nutritional’ in nature, but with little empirical evidence to assess their dietary significance. This paper presents a nutritional template that offers a proxy calorie value for the human body. When applied to the Palaeolithic record, the template provides a framework for assessing the dietary value of prehistoric cannibalistic episodes compared to the faunal record. Results show that humans have a nutritional value comparable to that of faunal species matching our typical body weight, but significantly lower than that of a range of fauna often found in association with anthropogenically modified hominin remains. This could suggest that hominin anthropophagy may not have been purely nutritionally motivated. It is proposed here that the comparatively low nutritional value of hominin cannibalism episodes supports more socially or culturally driven narratives in the interpretation of Palaeolithic cannibalism.
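As a rough illustration of how such a proxy calorie value can be constructed, the sketch below applies the standard Atwater energy factors (about 9 kcal per gram of fat, 4 kcal per gram of protein) to an assumed body composition. The composition figures are illustrative assumptions, not the paper's template values.

```python
# Hedged sketch of a proxy calorie template: estimate a body's food-energy
# value from fat and protein mass via standard Atwater factors.
ATWATER_KCAL_PER_KG = {"fat": 9_000, "protein": 4_000}

def proxy_calories(fat_kg, protein_kg):
    return (fat_kg * ATWATER_KCAL_PER_KG["fat"]
            + protein_kg * ATWATER_KCAL_PER_KG["protein"])

# e.g. a 65 kg adult with ~10 kg fat and ~11 kg protein (illustrative figures):
print(f"{proxy_calories(10, 11):,} kcal")  # 134,000 kcal
```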
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
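One simple version of such a meta-analytic check can be sketched as follows: if researchers p-hack, significant p-values should pile up just below 0.05, so the bin (0.04, 0.05] can be compared against the adjacent bin (0.03, 0.04]. The bin edges, one-sided framing, and p-values below are illustrative assumptions, not necessarily the paper's exact procedure.

```python
# Sketch of a p-curve style test for p-hacking with made-up p-values:
# compare how many significant results land just below 0.05 versus
# the adjacent lower bin.
from scipy.stats import binomtest

p_values = [0.001, 0.012, 0.021, 0.032, 0.041, 0.044, 0.046, 0.048, 0.049]

lower = sum(0.03 < p <= 0.04 for p in p_values)  # should dominate for a real effect
upper = sum(0.04 < p <= 0.05 for p in p_values)  # inflated when results are p-hacked

# Under no p-hacking, the upper bin should not exceed the lower one;
# test whether it does.
result = binomtest(upper, upper + lower, p=0.5, alternative="greater")
print(f"{upper} of {upper + lower} p-values sit just below 0.05; p = {result.pvalue:.3f}")
```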
Society’s techno-social systems are becoming ever faster and more computer-orientated. However, far from simply generating faster versions of existing behaviour, we show that this speed-up can generate a new behavioural regime as humans lose the ability to intervene in real time. Analyzing millisecond-scale data for the world’s largest and most powerful techno-social system, the global financial market, we uncover an abrupt transition to a new all-machine phase characterized by large numbers of subsecond extreme events. The proliferation of these subsecond events shows an intriguing correlation with the onset of the system-wide financial collapse in 2008. Our findings are consistent with an emerging ecology of competitive machines featuring ‘crowds’ of predatory algorithms, and highlight the need for a new scientific theory of subsecond financial phenomena.
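As a hedged sketch of what detecting such events might involve, the function below flags price swings that exceed a relative threshold within a subsecond window in a time-ordered tick series. The 0.8% move and 1.0 s window are illustrative parameters, not the paper's operational definition.

```python
# Illustrative detector for subsecond "extreme events" in tick data:
# a relative price swing of >= threshold completed within the window.
def subsecond_extremes(ticks, threshold=0.008, window=1.0):
    """ticks: time-ordered list of (timestamp_seconds, price) pairs."""
    events = []
    for i, (t0, p0) in enumerate(ticks):
        for j in range(i + 1, len(ticks)):
            t1, p1 = ticks[j]
            if t1 - t0 > window:
                break
            if abs(p1 - p0) / p0 >= threshold:
                events.append((i, j))  # qualifying swing within the window
                break
    return events

ticks = [(0.0, 100.0), (0.2, 99.6), (0.5, 99.1), (2.0, 99.0)]
print(subsecond_extremes(ticks))  # [(0, 2)]: a 0.9% drop within 0.5 s
```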
Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed those of other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a “four-headed beast”: it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis. We discuss aspects of new technologies that will need to be developed to rise up and meet the computational challenges that genomics poses for the near future. Now is the time for concerted, community-wide planning for the “genomical” challenges of the next decade.
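The kind of projection involved can be illustrated with simple compound growth. The starting volume and doubling time below are assumptions for demonstration, not the paper's figures.

```python
# Back-of-envelope storage projection under exponential growth.
# Starting volume and doubling time are illustrative assumptions.
def projected_storage(current_pb, years, doubling_time_years):
    return current_pb * 2 ** (years / doubling_time_years)

# e.g. ~100 PB today, doubling every year, projected 10 years out:
print(f"{projected_storage(100, 10, 1.0):,.0f} PB")  # 102,400 PB, i.e. ~100 EB
```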
To investigate whether the language used in science abstracts has skewed towards the use of strikingly positive and negative words over time.
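A minimal sketch of the measurement this objective implies: the yearly rate of "positive" words in a corpus of abstracts. The word list and input format are illustrative assumptions, not the study's actual lexicon.

```python
# Yearly rate of "positive" words per 10,000 words in a corpus of abstracts.
# The word set here is a small illustrative sample, not the study's list.
import re
from collections import Counter

POSITIVE = {"novel", "robust", "innovative", "unprecedented", "promising"}

def positive_rate_by_year(abstracts):
    """abstracts: iterable of (year, text). Returns {year: hits per 10,000 words}."""
    hits, totals = Counter(), Counter()
    for year, text in abstracts:
        words = re.findall(r"[a-z]+", text.lower())
        totals[year] += len(words)
        hits[year] += sum(w in POSITIVE for w in words)
    return {y: 10_000 * hits[y] / totals[y] for y in sorted(totals)}

print(positive_rate_by_year([(1990, "A modest result."), (2014, "A novel, robust result.")]))
```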
Industry sponsors' financial interests might bias the conclusions of scientific research. We examined whether industry funding or the disclosure of potential conflicts of interest influenced the results of published systematic reviews (SRs) conducted in the field of sugar-sweetened beverages (SSBs) and weight gain or obesity.
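The abstract does not name a statistic; one standard way to examine such an association is an exact test on a 2x2 table of funding source against review conclusion. The counts below are hypothetical, chosen only to show the mechanics.

```python
# Exact test of association between industry funding and review conclusion.
# All counts are hypothetical illustrations, not the study's data.
from scipy.stats import fisher_exact

#        favours no link   favours link (SSBs -> weight gain)
table = [[8, 2],    # industry-funded reviews
         [3, 10]]   # reviews without industry funding

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p:.4f}")
```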
Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28 billion (US$28B) per year spent on preclinical research that is not reproducible, in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures.
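The arithmetic implied by those two figures, worked through: if at least half of preclinical research is irreproducible and that share costs about US$28B per year, the implied total annual US preclinical spend is roughly US$56B.

```python
# Arithmetic implied by the abstract's own figures.
irreproducible_share = 0.50     # "exceeds 50%"; the lower bound is used here
irreproducible_cost_usd = 28e9  # ~US$28B per year

total_spend = irreproducible_cost_usd / irreproducible_share
print(f"Implied total US preclinical spend: about US${total_spend / 1e9:.0f}B/year")
```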
Clarity and accuracy of reporting are fundamental to the scientific process. Readability formulas can estimate how difficult a text is to read. Here, in a corpus consisting of 709,577 abstracts published between 1881 and 2015 from 123 scientific journals, we show that the readability of science is steadily decreasing. Our analyses show that this trend is indicative of a growing use of general scientific jargon. These results are concerning for scientists and for the wider public, as they impact both the reproducibility and accessibility of research findings.
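One widely used readability formula of the kind such analyses rely on is the Flesch Reading Ease score (higher means easier to read). The sketch below uses the standard Flesch coefficients but a crude vowel-group syllable counter, an assumption that real readability tools replace with dictionary lookups; the paper itself may use additional or different formulas.

```python
# Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
# The syllable counter is a rough vowel-group heuristic for illustration only.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat."), 1))  # easy text scores high
print(round(flesch_reading_ease(
    "Intracellular phosphorylation cascades modulate transcription."), 1))  # jargon scores low
```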