SciCombinator

Discover the latest and most talked-about scientific content & concepts.

Concept: Sampling

209

Studies examining the relation between information processing speed, as measured by reaction time, and mortality are scarce. We explored these associations in a representative sample of the US population.

Concepts: Sample, Mortality rate, Sample size, Sampling, Demography, Information

175

Modern metagenomic environmental DNA studies are almost completely reliant on next-generation sequencing, making evaluations of these methods critical. We compare two next-generation sequencing techniques, amplicon and shotgun, on water samples across four of Brazil's major river floodplain systems (Amazon, Araguaia, Paraná, and Pantanal). Less than 50% of the phyla identified via amplicon sequencing were recovered from shotgun sequencing, clearly challenging the dogma that mid-depth shotgun recovers more diversity than amplicon-based approaches. Amplicon sequencing also revealed ~27% more families. Overall, the amplicon data were more robust across both biodiversity and community ecology analyses at different taxonomic scales. Our work doubles the sample size of similar environmental studies and, unusually for this literature, integrates environmental data (e.g., pH, temperature, nutrients) from each site, revealing divergent correlations depending on which data are used. While myriad variants of NGS techniques and bioinformatic pipelines are available, our results point to core differences that have not been highlighted in any studies to date. Given the low number of taxa identified when coupling shotgun data with clade-based taxonomic algorithms, previous studies that quantified biodiversity using such bioinformatic tools should be viewed cautiously or re-analyzed. Nonetheless, shotgun sequencing has complementary advantages that should be weighed when designing projects.
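Set arithmetic is enough to reproduce this kind of recovery comparison. A minimal Python sketch with hypothetical phylum lists (a real analysis would start from taxonomy tables produced by pipelines such as QIIME 2 or Kraken):

```python
# Hypothetical taxa: compare phyla recovered by two sequencing methods
# as sets, in the spirit of the amplicon-vs-shotgun comparison above.
amplicon_phyla = {"Proteobacteria", "Cyanobacteria", "Actinobacteria",
                  "Bacteroidetes", "Firmicutes", "Verrucomicrobia"}
shotgun_phyla = {"Proteobacteria", "Actinobacteria", "Firmicutes"}

shared = amplicon_phyla & shotgun_phyla
recovery = len(shared) / len(amplicon_phyla)  # fraction of amplicon phyla also seen by shotgun
print(f"Shotgun recovered {recovery:.0%} of amplicon-identified phyla "
      f"({len(shared)}/{len(amplicon_phyla)})")
```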

Concepts: DNA, Molecular biology, Sampling, Ecology, Natural environment, Brazil, Floodplain

171

BACKGROUND: Lidar height data collected by the Geoscience Laser Altimeter System (GLAS) from 2002 to 2008 have the potential to form the basis of a globally consistent, sample-based inventory of forest biomass. GLAS lidar return data were collected globally in spatially discrete full-waveform “shots,” which have been shown to be strongly correlated with aboveground forest biomass. Relationships observed at spatially coincident field plots may be used to model biomass at all GLAS shots, and well-established methods of model-based inference may then be used to estimate biomass and its variance for specific spatial domains. However, the spatial pattern of GLAS acquisition is neither random across the surface of the earth nor identifiable with any particular systematic design. These undefined sample properties hinder the use of GLAS in global forest sampling.

RESULTS: We propose a method of identifying a subset of the GLAS data which can justifiably be treated as a simple random sample in model-based biomass estimation. The relatively uniform spatial distribution and locally arbitrary positioning of the resulting sample is similar to the design used by the US national forest inventory (NFI). We demonstrated model-based estimation using a sample of GLAS data in the US state of California, where our estimate of biomass (211 Mg/ha) was within the 1.4% standard error of the design-based estimate supplied by the US NFI. The standard error of the GLAS-based estimate was significantly higher than that of the NFI estimate, although the cost of the GLAS estimate (excluding costs for the satellite itself) was almost nothing, compared to at least US$10.5 million for the NFI estimate.

CONCLUSIONS: Global application of model-based estimation using GLAS, while demanding significant consolidation of training data, would improve the inter-comparability of international biomass estimates by imposing consistent methods and a globally coherent sample frame. The methods presented here constitute a globally extensible approach for generating a simple random sample from the global GLAS dataset, enabling its use in forest inventory activities.
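A hedged sketch of the model-based workflow, with synthetic numbers standing in for GLAS shots and field plots: fit a biomass model at plots co-located with lidar shots, apply it to the larger shot sample, and summarize the estimate. The standard error here ignores model-parameter uncertainty for brevity, which a real model-based estimator would include.

```python
# Synthetic illustration of model-based biomass estimation (not GLAS data).
import numpy as np

rng = np.random.default_rng(0)

# Field plots: a lidar height metric (m) and measured biomass (Mg/ha)
height_plots = rng.uniform(5, 40, 50)
biomass_plots = 6.0 * height_plots + rng.normal(0, 20, 50)

# Fit a simple linear model biomass ~ height at the field plots
slope, intercept = np.polyfit(height_plots, biomass_plots, 1)

# Apply the model to a (much larger) sample of lidar shots
height_shots = rng.uniform(5, 40, 5000)
biomass_pred = intercept + slope * height_shots

estimate = biomass_pred.mean()
se = biomass_pred.std(ddof=1) / np.sqrt(len(biomass_pred))  # omits model error for brevity
print(f"Mean biomass: {estimate:.1f} Mg/ha (SE {se:.2f})")
```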

Concepts: Statistics, Variance, Mathematics, Simple random sample, Sample size, Estimation theory, Estimator, Sampling

163

Independent of other cardiovascular (CV) risk factors, increased arterial stiffness has been established as a predictor of morbidity and mortality. The main aim of this study was to investigate the impact of diabetes on arterial stiffness in a representative sample of an urban Brazilian population together with an Amerindian group.

Concepts: Sample, Mortality rate, Sampling, Myocardial infarction, Diabetes mellitus, Diabetes, Brazil

151

There is a positive correlation between recall of tobacco-related television news and both perceived risks of smoking and thoughts about quitting. The authors used Cision US, Inc., to create a sampling frame (N = 61,027) of local and national television news coverage of tobacco from October 1, 2008, to September 30, 2009, and to draw a nationally representative sample (N = 730) for content analysis. The authors conducted a descriptive study to determine the frequency and proportion of stories containing specified tobacco topics, frames, sources, and action messages, and the valence of the coverage. Valence was generally neutral: 68% of stories took a balanced stance, 26% had a tenor supportive of tobacco control, and 6% opposed tobacco control. The most frequently covered topics included smoking bans (n = 195) and cessation (n = 156); the least covered included hookah (n = 1) and menthol (n = 0). Most stories quoted no source at all (n = 345); government officials (n = 144) were the most frequently quoted sources. Coverage also lacked action messages and resources: 29 stories (<4%) included a message about cessation or advocacy, and 8 stories (1%) contained a resource such as a quitline. Television news can be leveraged by health communication professionals to increase awareness of underrepresented topics in tobacco control.
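The abstract does not say how the N = 730 sample was drawn from the frame; assuming a simple random draw for illustration, the selection step looks like this in Python (story IDs are hypothetical):

```python
# Hypothetical illustration: draw n = 730 stories from a sampling frame
# of 61,027, assuming simple random sampling without replacement.
import random

frame = [f"story_{i:05d}" for i in range(61_027)]  # one ID per story in the frame
random.seed(42)                                    # reproducible draw
sample = random.sample(frame, k=730)               # without replacement
print(len(sample), sample[:3])
```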

Concepts: Sample size, Sampling, United States, Tobacco, Communication, Hookah, Frame

141

Despite calls to incorporate population science into neuroimaging research, most studies recruit small, non-representative samples. Here, we examine whether sample composition influences age-related variation in global measurements of gray matter volume, thickness, and surface area. We apply sample weights to structural brain imaging data from a community-based sample of children aged 3-18 (N = 1162) to create a “weighted sample” that approximates the distribution of socioeconomic status, race/ethnicity, and sex in the U.S. Census. We compare associations between age and brain structure in this weighted sample to estimates from the original sample with no sample weights applied (i.e., unweighted). Compared to the unweighted sample, the weighted sample shows earlier maturation of cortical and sub-cortical structures and patterns of brain maturation that better reflect known developmental trajectories. Our empirical demonstration of the bias introduced by non-representative sampling in this neuroimaging cohort suggests that sample composition may influence understanding of fundamental neural processes.
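The weighting idea is post-stratification: each observation is weighted by its stratum's population share divided by its sample share. A minimal sketch with invented strata and outcomes:

```python
# Hedged sketch of post-stratification weighting (invented data):
# reweight a non-representative sample so stratum proportions match
# population targets, then compare weighted vs unweighted means.
import numpy as np

rng = np.random.default_rng(1)

# A non-representative sample: stratum A is over-represented (80% vs 50%)
strata = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])
outcome = np.where(strata == "A", 100.0, 90.0) + rng.normal(0, 5, 1000)

population_share = {"A": 0.5, "B": 0.5}  # Census-like target proportions
sample_share = {s: np.mean(strata == s) for s in ("A", "B")}
weights = np.array([population_share[s] / sample_share[s] for s in strata])

print(f"Unweighted mean: {outcome.mean():.2f}")  # pulled toward stratum A
print(f"Weighted mean:   {np.average(outcome, weights=weights):.2f}")
```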

Concepts: Scientific method, Brain, Sampling, Structure, Neuroimaging, Sociology, Science, Weight

137

Many randomized controlled trials (RCTs) employ mortality at a given time point as a primary outcome. There are at least three common ways to measure 90-day mortality: first, all-location mortality, that is, all-cause mortality within 90 days of randomization at any location; second, ARDSnet mortality, that is, death in a healthcare facility of greater care intensity than the one the patient was in prior to the hospitalization during which they were randomized; and finally, in-hospital mortality, that is, death prior to discharge from the primary hospitalization during which the patient was randomized. Data comparing the impact of these different measurements on sample size are lacking. We evaluated the extent to which event rates vary by mortality definition.
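Why the definition matters for sample size: a lower baseline event rate generally demands more patients to detect the same absolute effect. A sketch using the standard two-proportion formula, with illustrative event rates not taken from the study:

```python
# Per-arm sample size for a two-arm RCT comparing two proportions,
# under a two-sided z-test. Event rates below are invented to show how
# the mortality definition (via its baseline rate) shifts required N.
from math import sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Per-arm N to detect p1 vs p2 with a two-sided z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Same absolute 5-point reduction, different baseline rates by definition
for label, p_control in [("all-location", 0.30),
                         ("ARDSnet", 0.27),
                         ("in-hospital", 0.24)]:
    print(f"{label:>12}: n/arm ~ {n_per_arm(p_control, p_control - 0.05):.0f}")
```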

Concepts: Cohort study, Experimental design, Epidemiology, Clinical trial, Sampling, Patient, Randomized controlled trial, Measurement

123

The reliability of experimental findings depends on the rigour of experimental design. Here we show limited reporting of measures to reduce the risk of bias in a random sample of life sciences publications, significantly lower reporting of randomisation in work published in journals of high impact, and very limited reporting of measures to reduce the risk of bias in publications from leading United Kingdom institutions. Ascertainment of differences between institutions might serve both as a measure of research quality and as a tool for institutional efforts to improve research quality.

Concepts: Scientific method, Sampling, Life, In vivo, Academic publishing, Probability, Randomness, Design of experiments

88

Social media (SM) use is increasing among U.S. young adults, but its association with mental well-being remains unclear. This study assessed the association between SM use and depression in a nationally representative sample of young adults.

Concepts: Sample, Sample size, Sampling

78

Biologists determine experimental effects by perturbing biological entities or units. When done appropriately, independent replication of the entity-intervention pair contributes to the sample size (N) and forms the basis of statistical inference. If the wrong entity-intervention pair is chosen, an experiment cannot address the question of interest. We surveyed a random sample of published animal experiments from 2011 to 2016 where interventions were applied to parents and effects examined in the offspring, as regulatory authorities provide clear guidelines on replication with such designs. We found that only 22% of studies (95% CI = 17%-29%) replicated the correct entity-intervention pair and thus made valid statistical inferences. Nearly half of the studies (46%, 95% CI = 38%-53%) had pseudoreplication while 32% (95% CI = 26%-39%) provided insufficient information to make a judgement. Pseudoreplication artificially inflates the sample size, and thus the evidence for a scientific claim, resulting in false positives. We argue that distinguishing between biological units, experimental units, and observational units clarifies where replication should occur, describe the criteria for genuine replication, and provide concrete examples of in vitro, ex vivo, and in vivo experimental designs.
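A synthetic illustration of the distinction: when a treatment is applied to dams (the experimental unit) and outcomes are measured on pups (observational units), the valid N is the number of dams; treating pups as independent replicates is pseudoreplication.

```python
# Invented data: pups within a litter share a dam-level effect, so they
# are not independent. Averaging to one value per dam recovers the
# correct experimental-unit N for analysis.
import numpy as np

rng = np.random.default_rng(2)

n_dams, pups_per_dam = 6, 8
litter_effect = rng.normal(0, 2, n_dams)  # makes littermates correlated
pup_scores = litter_effect[:, None] + rng.normal(0, 1, (n_dams, pups_per_dam))

print(f"Pseudoreplicated N (pups):  {n_dams * pups_per_dam}")
print(f"Valid N (dams):             {n_dams}")
print(f"Per-dam means for analysis: {pup_scores.mean(axis=1).round(2)}")
```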

Concepts: Experimental design, Statistics, Sampling, In vivo, Experiment, In vitro, Design of experiments, Statistical inference