# SciCombinator

Discover the latest and most talked-about scientific content and concepts.

### Concept: Statistics

#### 256

De Winter and Happee [1] examined whether a science based on selective publishing of significant results can accurately estimate population effects, and whether it can even outperform a science in which all results are published (i.e., a science without publication bias). Based on their simulation study, they concluded that “selective publishing yields a more accurate meta-analytic estimation of the true effect than publishing everything, (and that) publishing nonreplicable results while placing null results in the file drawer can be beneficial for the scientific collective” (p. 4).
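The regime comparison at issue can be illustrated with a toy Monte Carlo (a minimal sketch, not the authors' simulation): generate many small two-group studies of the same true effect, then pool either every estimate or only the statistically significant ones. Under these deliberately simple assumptions, the significance filter inflates the naive pooled estimate; the paper's counterintuitive conclusion rests on a richer model than this.

```python
import math
import random
import statistics

def simulate_study(rng, true_d=0.2, n=30):
    """One two-group study: return the estimated effect size and significance."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(true_d, 1.0) for _ in range(n)]
    pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    d = (statistics.mean(b) - statistics.mean(a)) / pooled_sd
    z = d / math.sqrt(2.0 / n)          # rough z-test on the standardized effect
    return d, abs(z) > 1.96

rng = random.Random(42)
studies = [simulate_study(rng) for _ in range(5000)]

publish_all = statistics.mean(d for d, _ in studies)         # no file drawer
selective = statistics.mean(d for d, sig in studies if sig)  # significant only

print(f"true effect: 0.20  publish-all: {publish_all:.3f}  selective: {selective:.3f}")
```

With a naive unweighted mean, publishing everything recovers the true effect, while pooling only significant results overshoots it; whether a correction can reverse that ordering is exactly what the simulation literature debates.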

Concepts: Scientific method, Statistics, Mathematics

#### 243

In the USA, the relationship between the legal availability of guns and the firearm-related homicide rate has been debated. It has been argued that unrestricted gun availability promotes the occurrence of firearm-induced homicides. It has also been pointed out that gun possession can protect potential victims when attacked. This paper provides a first mathematical analysis of this tradeoff, with the goal of steering the debate towards arguing about assumptions, statistics, and scientific methods. The model is based on a set of clearly defined assumptions, which are supported by available statistical data, and is formulated axiomatically such that results do not depend on arbitrary mathematical expressions. According to this framework, two alternative scenarios can minimize the gun-related homicide rate: a ban on private firearm possession, or a policy allowing the general population to carry guns. Importantly, the model identifies the crucial parameters that determine which policy minimizes the death rate, and thus serves as a guide for the design of future epidemiological studies. The parameters that need to be measured include the fraction of offenders who illegally possess a gun, the degree of protection provided by gun ownership, and the fraction of the population who take up their right to own a gun and carry it when attacked. Limited data available in the literature were used to demonstrate how the model can be parameterized, and this preliminary analysis suggests that a ban on private firearm possession, or possibly a partial reduction in gun availability, might lower the rate of firearm-induced homicides. This, however, should not be seen as a policy recommendation, given the limited data available to inform and parameterize the model. The model does, however, clearly define what needs to be measured, and provides a basis for a scientific discussion about assumptions and data.
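The shape of the tradeoff can be caricatured in a few lines of code. This is a deliberately crude toy built only on the three parameters the abstract names (offender gun fraction, protection degree, carry fraction); it is not the paper's axiomatic model, and the parameter values are invented:

```python
def ban_rate(offender_gun_fraction):
    # under a ban, only the fraction of offenders who illegally keep a gun
    # can commit a firearm homicide, and victims are unarmed
    return offender_gun_fraction

def carry_rate(carry_fraction, protection):
    # under general availability, offenders are assumed armed, but armed
    # victims deter the attack with probability `protection`
    return 1.0 - protection * carry_fraction

# illustrative values only; the paper's point is that these quantities
# must be measured before either policy can be preferred
q, c, p = 0.7, 0.3, 0.8
print("ban better:", ban_rate(q) < carry_rate(c, p))
```

Even this caricature reproduces the qualitative conclusion: which regime yields fewer deaths flips depending on the measured values of the three parameters.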

#### 232

Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series from the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of the statistical methods. The paper discusses these results, explains why the accuracy of ML models falls below that of statistical ones, and proposes some possible ways forward. The empirical results of our research stress the need for objective and unbiased ways to test the performance of forecasting methods, which can be achieved through sizable, open competitions that allow meaningful comparisons and definite conclusions.
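The post-sample evaluation setup described can be sketched on a synthetic monthly series. This illustrates the comparison machinery only, not the M3 methods: the series, the seasonal-naive benchmark, and the linear-autoregression stand-in for an ML model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic monthly series: trend + yearly seasonality + noise
t = np.arange(120)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)
train, test = series[:-18], series[-18:]   # 18-step post-sample horizon, as in M3

def smape(actual, forecast):
    # symmetric mean absolute percentage error, a common M3 accuracy measure
    return 100 * np.mean(2 * np.abs(forecast - actual)
                         / (np.abs(actual) + np.abs(forecast)))

# statistical benchmark: seasonal naive (repeat the last observed year)
naive_fc = np.resize(train[-12:], 18)

# simple "ML" stand-in: linear autoregression on 12 lags, fit by least squares,
# forecasting recursively over the horizon
p = 12
X = np.array([train[i:i + p] for i in range(len(train) - p)])
y = train[p:]
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
history = list(train)
for _ in range(18):
    history.append(coef[0] + np.array(history[-p:]) @ coef[1:])
ar_fc = np.array(history[-18:])

print(f"sMAPE  seasonal naive: {smape(test, naive_fc):.2f}%"
      f"  linear AR: {smape(test, ar_fc):.2f}%")
```

The point of the evaluation design is that both methods are scored on the same held-out horizon with the same measure, which is what makes cross-method comparisons meaningful.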

#### 224

Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
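One of the replication criteria reported, whether the original effect size falls inside the 95% confidence interval of the replication effect, can be computed directly. A minimal sketch for standardized mean differences, using the usual large-sample approximation to the standard error of Cohen's d; the numbers are hypothetical:

```python
import math

def replication_ci_contains_original(d_rep, n1, n2, d_orig):
    """Does the original effect lie in the replication's approximate 95% CI?"""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d_rep**2 / (2 * (n1 + n2)))
    lo, hi = d_rep - 1.96 * se, d_rep + 1.96 * se
    return lo <= d_orig <= hi

# hypothetical pair: original d = 0.60, replication d = 0.25 with n = 80 per group
print(replication_ci_contains_original(0.25, 80, 80, 0.60))
```

With shrunken replication effects, as the project observed on average, the original estimate often lands outside the replication interval even at respectable sample sizes.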

#### 223

Received academic wisdom holds that human judgment is characterized by unrealistic optimism: the tendency to underestimate the likelihood of negative events and overestimate the likelihood of positive events. With recent questions being raised over the degree to which the majority of this research genuinely demonstrates optimism, attention to the possible mechanisms generating such a bias becomes ever more important. New studies have claimed that unrealistic optimism emerges as a result of biased belief updating with distinctive neural correlates in the brain. On a behavioral level, these studies suggest that, for negative events, desirable information is incorporated into personal risk estimates to a greater degree than undesirable information (resulting in a more optimistic outlook). However, using task analyses, simulations, and experiments, we demonstrate that this pattern of results is a statistical artifact. In contrast with previous work, we examined participants' use of new information with reference to the normative, Bayesian standard. Simulations reveal the fundamental difficulties that would need to be overcome by any robust test of optimistic updating. No such test presently exists, so the best one can do is perform analyses with a number of techniques, all of which have important weaknesses. Applying these analyses to five experiments shows no evidence of optimistic updating. These results clarify the difficulties involved in studying human ‘bias’ and cast additional doubt over the status of optimism as a fundamental characteristic of healthy cognition.
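The kind of scale artifact at issue can be shown in a few lines (a minimal sketch, not the authors' task analyses): an agent that updates with identical weight on the log-odds scale, one normative way to combine a personal estimate with a communicated base rate, nevertheless produces unequal shifts on the raw probability scale, which a naive analysis could misread as asymmetric (optimistic) updating.

```python
import math

def to_logodds(p):
    return math.log(p / (1 - p))

def from_logodds(x):
    return 1 / (1 + math.exp(-x))

def update(prior, base_rate, weight=0.5):
    # mix the prior and the communicated base rate on the log-odds scale
    # with a fixed weight -- the same rule for desirable and undesirable news
    return from_logodds((1 - weight) * to_logodds(prior)
                        + weight * to_logodds(base_rate))

prior = 0.40
good_news = update(prior, base_rate=0.20)   # risk lower than believed
bad_news = update(prior, base_rate=0.60)    # risk higher than believed

# the absolute shifts differ even though the updating rule is identical
print(abs(good_news - prior), abs(bad_news - prior))
```

Here the shift after desirable news is slightly larger than after undesirable news purely because probabilities compress nonlinearly, with no optimism anywhere in the rule.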

#### 222

What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with somewhat lower impact factors that have adopted editorial policies to reduce the impact of the limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles on psychological, neuropsychological, and medical issues published in 2011 in four journals with high impact factors (Science, Nature, The New England Journal of Medicine, and The Lancet) and three journals with relatively lower impact factors (Neuropsychology, Journal of Experimental Psychology: Applied, and the American Journal of Public Health). Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect sizes, prospective power, or model estimation is the prevalent statistical practice in articles published in Nature (89%), followed by Science (42%). By contrast, in all the other journals, whether with high or lower impact factors, most articles report confidence intervals and/or effect size measures. We interpret these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means of improving statistical practices in journals with high or low impact factors.
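The practices being contrasted, a bare significance test versus a report that also gives an effect size and interval, differ by only a few lines of arithmetic. A minimal sketch from summary statistics, using a normal approximation for brevity; the sample values are hypothetical:

```python
import math

def beyond_nhst(m1, s1, n1, m2, s2, n2):
    """Report a test statistic plus an effect size and a 95% CI."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    diff = m1 - m2
    z = diff / se                                   # the bare NHST ingredient
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = diff / sp                                   # Cohen's d (effect size)
    ci = (diff - 1.96 * se, diff + 1.96 * se)       # 95% CI, raw difference
    return {"z": z, "d": d, "ci": ci}

res = beyond_nhst(10.2, 2.0, 50, 9.0, 2.1, 50)
print(res)
```

The editorial policies discussed amount to requiring the `d` and `ci` lines, not just the `z` line, in published results.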

#### 221

Sole-source business models for genetic testing can create private databases containing information vital to interpreting the clinical significance of human genetic variations. But incomplete access to those databases threatens to impede the clinical interpretation of genomic medicine. National health systems and insurers, regulators, researchers, providers and patients all have a strong interest in ensuring broad access to information about the clinical significance of variants discovered through genetic testing. They can create incentives for sharing data and interpretive algorithms in several ways, including: promoting voluntary sharing; requiring laboratories to share as a condition of payment for or regulatory approval of laboratory services; establishing - and compelling participation in - resources that capture the information needed to interpret the data independent of company policies; and paying for sharing and interpretation in addition to paying for the test itself. US policies have failed to address the data-sharing issue. The entry of new and established firms into the European genetic testing market presents an opportunity to correct this failure. European Journal of Human Genetics advance online publication, 14 November 2012; doi:10.1038/ejhg.2012.217.

#### 215

Recent studies suggest that learning and using a second language (L2) can affect brain structure, including the structure of white matter (WM) tracts. This observation comes from research looking at early and older bilingual individuals who have been using both their first and second languages on an everyday basis for many years. This study investigated whether young, highly immersed late bilinguals would also show structural effects in the WM that can be attributed to everyday L2 use, irrespective of critical periods or the length of L2 learning. Our Tract-Based Spatial Statistics analysis revealed higher fractional anisotropy values for bilinguals vs. monolinguals in several WM tracts that have been linked to language processing and in a pattern closely resembling the results reported for older and early bilinguals. We propose that learning and actively using an L2 after childhood can have rapid dynamic effects on WM structure, which in turn may assist in preserving WM integrity in older age.

#### 211

Public confidence in genetically modified (GM) crop studies is tenuous at best in many countries, particularly those of the European Union. A lack of information about the effects of ties between academic research and industry might stretch this confidence to the breaking point. We therefore analyzed a large set of research articles (n = 672) focusing on the efficacy or durability of GM Bt crops, and the ties between the researchers carrying out these studies and the GM crop industry. We found that ties between researchers and the GM crop industry were common, with 40% of the articles considered displaying conflicts of interest (COI). In particular, we found that, compared to the absence of a COI, the presence of a COI was associated with a 50% higher frequency of outcomes favorable to the interests of the GM crop company. Using our large dataset, we were able to propose possible direct and indirect mechanisms behind this statistical association. These might notably include changes to authorship or funding statements after the results of a study have been obtained, and a choice of research topics driven by industrial priorities.
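The reported association, favorable outcomes roughly 50% more frequent under a COI, is a simple rate ratio. A sketch with hypothetical counts: only the total of 672 articles and the ~40% COI share come from the abstract; the favorable-outcome counts are invented for illustration.

```python
def rate_ratio(fav_coi, n_coi, fav_free, n_free):
    # frequency of favorable outcomes among COI articles relative to COI-free ones
    return (fav_coi / n_coi) / (fav_free / n_free)

# hypothetical split of the 672 articles: 269 with a COI (~40%), 403 without
rr = rate_ratio(fav_coi=150, n_coi=269, fav_free=150, n_free=403)
print(f"favorable-outcome rate ratio: {rr:.2f}")   # a value near 1.5 matches
                                                   # the reported 50% excess
```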