Conspiratorial ideation is the tendency of individuals to believe that events and power relations are secretly manipulated by certain clandestine groups and organisations. Many of these ostensibly explanatory conjectures are non-falsifiable, lacking in evidence, or demonstrably false, yet public acceptance remains high. Efforts to convince the general public of the validity of medical and scientific findings can be hampered by such narratives, which can create the impression of doubt or disagreement in areas where the science is well established. Conversely, historical examples of exposed conspiracies do exist, and it may be difficult for people to differentiate between reasonable and dubious assertions. In this work, we establish a simple mathematical model for conspiracies involving multiple actors over time, which yields the probability of failure for any given conspiracy. Parameters for the model are estimated from literature examples of known scandals, and the factors influencing conspiracy success and failure are explored. The model is also used to estimate the likelihood of claims from some commonly held conspiratorial beliefs: namely, that the Moon landings were faked, that climate change is a hoax, that vaccination is dangerous, and that a cure for cancer is being suppressed by vested interests. Simulations of these claims predict that intrinsic failure would be imminent even with the most generous estimates for the secret-keeping ability of active participants; the results of this model suggest that large conspiracies (≥1000 agents) quickly become untenable and prone to failure. The theory presented here might be useful in counteracting the potentially deleterious consequences of bogus and anti-science narratives, and in examining the hypothetical conditions under which a sustainable conspiracy might be possible.
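The intuition behind the abstract's claim can be illustrated with a minimal leak model. Assuming, purely for illustration (the paper's actual model is more refined), that each of N agents independently exposes the secret with a fixed annual probability p, the chance of at least one exposure within t years is 1 − (1 − p)^(N·t):

```python
def failure_probability(n_agents, p_leak, years):
    """Probability that at least one of n_agents leaks within `years`,
    assuming independent leaks with a fixed annual per-agent probability
    p_leak. A simplified stand-in for the paper's model, not its exact form."""
    survive = (1.0 - p_leak) ** (n_agents * years)
    return 1.0 - survive

# Even a tiny per-agent leak rate makes large conspiracies fragile:
print(failure_probability(1000, 1e-4, 5))   # ≈ 0.39
```

The key qualitative behaviour matches the abstract: failure probability grows rapidly with the number of agents, so conspiracies with ≥1000 participants become untenable even for very small per-agent leak rates.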
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 4 years ago
The most widely used task functional magnetic resonance imaging (fMRI) analyses use parametric statistical methods that depend on a variety of assumptions. In this work, we use real resting-state data and a total of 3 million random task group analyses to compute empirical familywise error rates for the fMRI software packages SPM, FSL, and AFNI, as well as a nonparametric permutation method. For a nominal familywise error rate of 5%, the parametric statistical methods are shown to be conservative for voxelwise inference and invalid for clusterwise inference. Our results suggest that the principal cause of the invalid cluster inferences is spatial autocorrelation functions that do not follow the assumed Gaussian shape. By comparison, the nonparametric permutation test is found to produce nominal results for voxelwise as well as clusterwise inference. These findings speak to the need to validate the statistical methods used in the field of neuroimaging.
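The nonparametric approach the abstract favours can be sketched as a one-sample sign-flipping permutation test on subject-level contrast values. This is a toy, single-voxel version; the packages named above and the paper's analyses are far more involved:

```python
import numpy as np

def sign_flip_pvalue(contrasts, n_perm=10000, rng=None):
    """One-sample permutation test by random sign-flipping: under the null
    (zero mean effect), each subject's contrast is equally likely to be
    positive or negative, so flipping signs generates the null distribution
    of the group mean. Returns a two-sided p-value with the +1 correction."""
    rng = np.random.default_rng(rng)
    contrasts = np.asarray(contrasts, dtype=float)
    observed = contrasts.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=contrasts.size)
        null[i] = (signs * contrasts).mean()
    return (1 + np.sum(np.abs(null) >= abs(observed))) / (n_perm + 1)

# A strong, consistent effect across subjects yields a small p-value:
print(sign_flip_pvalue([2.0, 2.1, 1.9, 2.05, 1.95, 2.2, 1.8, 2.0], rng=0))
```

Because the null distribution is built from the data itself rather than from a parametric assumption about spatial autocorrelation, the test attains its nominal error rate, which is the property the paper verifies empirically.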
To estimate the effect of playing Pokémon GO on the number of steps taken daily up to six weeks after installation of the game.
The energy requirement of species at each trophic level in an ecological pyramid is a function of the number of organisms and their average mass. Regarding human populations, although considerable attention is given to estimating the number of people, much less is given to estimating average mass, despite evidence that average body mass is increasing. We estimate global human biomass, its distribution by region and the proportion of biomass due to overweight and obesity.
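The premise here, biomass as headcount times average body mass, is simple arithmetic. The figures below are round illustrative numbers, not the paper's estimates:

```python
def biomass_tonnes(population, avg_mass_kg):
    """Total biomass in tonnes: headcount times average body mass in kg,
    divided by 1000 kg/tonne. Illustrative only; the paper's regional
    estimates use measured mass distributions, not a single global average."""
    return population * avg_mass_kg / 1000.0

# e.g. 7 billion people at an average of 62 kg:
print(biomass_tonnes(7e9, 62))   # 434000000.0, i.e. ≈ 4.3×10^8 tonnes
```

The same function applied per region with region-specific average masses is what makes rising body mass matter: biomass can grow even where population is flat.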
Clarity and accuracy of reporting are fundamental to the scientific process. Readability formulas can estimate how difficult a text is to read. Here, in a corpus consisting of 709,577 abstracts published between 1881 and 2015 from 123 scientific journals, we show that the readability of science is steadily decreasing. Our analyses show that this trend is indicative of a growing use of general scientific jargon. These results are concerning for scientists and for the wider public, as they impact both the reproducibility and accessibility of research findings.
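Readability formulas of the kind the abstract mentions are simple functions of word, sentence, and syllable counts. The classic Flesch Reading Ease score is one example (the counts below are invented for illustration; the paper's corpus analysis is not reproduced here):

```python
def flesch_reading_ease(total_words, total_sentences, total_syllables):
    """Flesch Reading Ease: higher scores mean easier text. Penalises long
    sentences (words/sentence) and long words (syllables/word)."""
    return (206.835
            - 1.015 * (total_words / total_sentences)
            - 84.6 * (total_syllables / total_words))

# A plain passage vs. dense, jargon-heavy prose (illustrative counts):
print(flesch_reading_ease(100, 8, 130))   # ≈ 84.2 (easy)
print(flesch_reading_ease(100, 3, 210))   # ≈ -4.7 (very hard)
```

A steadily falling corpus-average score of this kind is what "readability of science is steadily decreasing" means operationally.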
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
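One common way to test for p-hacking, broadly in the spirit of what the abstract describes, is to compare how many reported p-values fall into two adjacent bins just below .05: an excess in the bin nearer .05 is suspicious. A simplified one-sided binomial version, assuming the two bins are equally likely under the null (the paper's evidential test is more careful than this):

```python
from math import comb

def pcurve_binomial_p(n_upper, n_lower):
    """One-sided exact binomial test. n_upper counts p-values in the bin
    closer to .05 (e.g. .045-.050), n_lower the adjacent lower bin
    (e.g. .040-.045). Returns P(X >= n_upper) for X ~ Binomial(n, 0.5);
    a small value suggests an excess just under .05, i.e. p-hacking."""
    n = n_upper + n_lower
    return sum(comb(n, k) for k in range(n_upper, n + 1)) / 2 ** n

# 8 of 10 significant p-values crowded just under .05:
print(pcurve_binomial_p(8, 2))   # ≈ 0.055
```

Applied across thousands of text-mined p-values per discipline, a test of this shape is what lets one claim p-hacking is widespread while its aggregate effect on meta-analytic estimates stays small.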
Reported values in the literature on the number of cells in the body differ by orders of magnitude and are very seldom supported by any measurements or calculations. Here, we integrate the most up-to-date information on the number of human and bacterial cells in the body. We estimate the total number of bacteria in the 70 kg “reference man” to be 3.8·10¹³. For human cells, we identify the dominant role of the hematopoietic lineage to the total count (≈90%) and revise past estimates to 3.0·10¹³ human cells. Our analysis also updates the widely-cited 10:1 ratio, showing that the number of bacteria in the body is actually of the same order as the number of human cells, and their total mass is about 0.2 kg.
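A quick check using the abstract's own figures confirms the revised ratio; the per-cell mass on the last line is our back-of-envelope inference from those figures, not a number taken from the paper:

```python
# Figures taken directly from the abstract:
bacteria = 3.8e13      # bacterial cells in the 70 kg "reference man"
human_cells = 3.0e13   # human cells (~90% from the hematopoietic lineage)

print(bacteria / human_cells)   # ≈ 1.27 — same order of magnitude, not 10:1
print(0.2 / bacteria)           # implied average bacterial mass ≈ 5e-15 kg
```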
Indoor dust is a reservoir for commercial consumer product chemicals, including many compounds with known or suspected health effects. However, most dust exposure studies measure few chemicals in small samples. We systematically searched the U.S. indoor dust literature on phthalates, replacement flame retardants (RFRs), perfluoroalkyl substances (PFASs), synthetic fragrances, and environmental phenols and estimated pooled geometric means (GMs) and 95% confidence intervals for 45 chemicals measured in ≥3 data sets. In order to rank and contextualize these results, we used the pooled GMs to calculate residential intake from dust ingestion, inhalation, and dermal uptake from air, and then identified hazard traits from the Safer Consumer Products Candidate Chemical List. Our results indicate that U.S. indoor dust consistently contains chemicals from multiple classes. Phthalates occurred in the highest concentrations, followed by phenols, RFRs, fragrance, and PFASs. Several phthalates and RFRs had the highest residential intakes. We also found that many chemicals in dust share hazard traits such as reproductive and endocrine toxicity. We offer recommendations to maximize comparability of studies and advance indoor exposure science. This information is critical in shaping future exposure and health studies, especially related to cumulative exposures, and in providing evidence for intervention development and public policy.
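Geometric means with confidence intervals, as pooled in this study, are computed on the log scale. A simplified single-sample version (the paper's pooling across ≥3 data sets would additionally weight each study) might look like:

```python
import math

def geometric_mean_ci(values, z=1.96):
    """Geometric mean and approximate 95% CI for positive concentration
    values: take logs, compute the mean and its standard error, then
    exponentiate back. Sketch only; not the paper's pooling procedure."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    m = sum(logs) / n
    var = sum((x - m) ** 2 for x in logs) / (n - 1)
    se = math.sqrt(var / n)
    return math.exp(m), (math.exp(m - z * se), math.exp(m + z * se))

# Concentrations spanning two orders of magnitude:
print(geometric_mean_ci([1.0, 10.0, 100.0]))   # GM = 10.0
```

Working on the log scale is what makes the geometric mean the natural summary for environmental concentrations, which are typically right-skewed across samples.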
Women comprise a minority of the Science, Technology, Engineering, Mathematics, and Medicine (STEMM) workforce. Quantifying the gender gap may identify fields that will not reach parity without intervention, reveal underappreciated biases, and inform benchmarks for gender balance among conference speakers, editors, and hiring committees. Using the PubMed and arXiv databases, we estimated the gender of 36 million authors from >100 countries publishing in >6000 journals, covering most STEMM disciplines over the last 15 years, and made a web app allowing easy access to the data (https://lukeholman.github.io/genderGap/). Despite recent progress, the gender gap appears likely to persist for generations, particularly in surgery, computer science, physics, and maths. The gap is especially large in authorship positions associated with seniority, and prestigious journals have fewer women authors. Additionally, we estimate that men are invited by journals to submit papers at approximately double the rate of women. Wealthy countries, notably Japan, Germany, and Switzerland, had fewer women authors than poorer ones. We conclude that the STEMM gender gap will not close without further reforms in education, mentoring, and academic publishing.
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions.
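One of the standard accuracy measures in the M-competitions is the symmetric mean absolute percentage error (sMAPE); a minimal implementation of its common form (the competition's exact conventions may differ in edge cases):

```python
def smape(actual, forecast):
    """Symmetric MAPE in percent: 200 * |F - A| / (|A| + |F|), averaged
    over the horizon. Lower is better; assumes no (|A| + |F|) term is zero."""
    terms = [
        200.0 * abs(f - a) / (abs(a) + abs(f))
        for a, f in zip(actual, forecast)
    ]
    return sum(terms) / len(terms)

print(smape([100, 200, 300], [110, 190, 330]))   # ≈ 8.06
```

Comparing methods by a scale-free measure like this across 1045 series and multiple horizons is what allows the blanket conclusion that the statistical methods dominated the ML ones.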