Discover the most talked-about and latest scientific content & concepts.

Concept: Relative risk


A geographically resolved, multi-level Bayesian model is used to analyze the data in the U.S. Police-Shooting Database (USPSD) to investigate the extent of racial bias in the shooting of American civilians by police officers in recent years. In contrast to previous work that relied on the FBI’s Supplemental Homicide Reports, which were constructed from self-reported cases of police-involved homicide, this data set is less likely to be biased by police reporting practices. County-specific relative risks of being shot by police are estimated as a function of the interaction of two factors: 1) whether suspects/civilians were armed or unarmed, and 2) the race/ethnicity of the suspects/civilians. The results provide evidence of a significant bias in the killing of unarmed black Americans relative to unarmed white Americans: the probability of being {black, unarmed, and shot by police} is about 3.49 times the probability of being {white, unarmed, and shot by police} on average. Furthermore, the multi-level modeling shows significant heterogeneity across counties in the extent of racial bias in police shootings, with some counties showing relative risk ratios of 20 to 1 or more. Finally, analysis of police shooting data as a function of county-level predictors suggests that racial bias in police shootings is most likely to emerge in police departments in larger metropolitan counties with low median incomes and a sizable black population, especially when financial inequality in that county is high. There is no relationship between county-level racial bias in police shootings and crime rates (even race-specific crime rates), meaning that the racial bias observed in police shootings in this data set cannot be explained as a response to local crime rates.
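The headline quantity here is a relative risk: a ratio of two probabilities. A minimal sketch of that arithmetic, using hypothetical counts (the abstract does not report the underlying counts, and the paper's Bayesian model additionally pools estimates across counties):

```python
# Sketch of the relative-risk quantity the study estimates, computed from
# hypothetical counts; these numbers are illustrative only, not USPSD data.

def relative_risk(shot_a, pop_a, shot_b, pop_b):
    """Ratio of P(shot | group A) to P(shot | group B)."""
    return (shot_a / pop_a) / (shot_b / pop_b)

# Hypothetical county: 14 unarmed victims among a population of 200,000 in
# group A vs 9 unarmed victims among a population of 450,000 in group B.
rr = relative_risk(14, 200_000, 9, 450_000)
print(round(rr, 2))  # 3.5 with these illustrative numbers
```

The paper's multi-level model produces a posterior distribution over such ratios per county rather than a single point estimate.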

Concepts: Scientific method, United States, Relative risk, Crime, White American, Police, Bayes factor, Constable


The evidence base for the health effects of spice consumption is insufficient, with only one large population-based study and no reports from Europe or North America. Our objective was to analyze the association between consumption of hot red chili peppers and mortality, using a population-based prospective cohort from the National Health and Nutritional Examination Survey (NHANES) III, a representative sample of US noninstitutionalized adults in which participants were surveyed from 1988 to 1994. The frequency of hot red chili pepper consumption was measured in 16,179 participants at least 18 years of age. Total and cause-specific mortality were the main outcome measures. During 273,877 person-years of follow-up (median 18.9 years), a total of 4,946 deaths were observed. Total mortality for participants who consumed hot red chili peppers was 21.6%, compared with 33.6% for those who did not (absolute risk reduction of 12%; relative risk of 0.64). Adjusted for demographic, lifestyle, and clinical characteristics, the hazard ratio was 0.87 (P = 0.01; 95% confidence interval 0.77, 0.97); that is, consumption of hot red chili peppers was associated with a 13% reduction in the instantaneous hazard of death. Similar but statistically nonsignificant trends were seen for deaths from vascular disease, though not for deaths from other causes. In this large population-based prospective study, the consumption of hot red chili peppers was associated with reduced mortality. Hot red chili peppers may be a beneficial component of the diet.
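The crude figures in the abstract can be reproduced directly; note that the adjusted hazard ratio (0.87) differs from the crude relative risk (0.64) because it accounts for demographic, lifestyle, and clinical covariates:

```python
# Crude mortality comparison from the abstract:
# 21.6% among chili consumers vs 33.6% among non-consumers.
p_consumers, p_nonconsumers = 0.216, 0.336

arr = p_nonconsumers - p_consumers   # absolute risk reduction
rr = p_consumers / p_nonconsumers    # unadjusted relative risk

print(f"ARR = {arr:.0%}, RR = {rr:.2f}")  # ARR = 12%, RR = 0.64
```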

Concepts: Cohort study, Epidemiology, Medical statistics, Relative risk, Chili pepper, Black pepper, Cayenne pepper


Background Targeted temperature management is recommended for comatose adults and children after out-of-hospital cardiac arrest; however, data on temperature management after in-hospital cardiac arrest are limited. Methods In a trial conducted at 37 children’s hospitals, we compared two temperature interventions in children who had had in-hospital cardiac arrest. Within 6 hours after the return of circulation, comatose children older than 48 hours and younger than 18 years of age were randomly assigned to therapeutic hypothermia (target temperature, 33.0°C) or therapeutic normothermia (target temperature, 36.8°C). The primary efficacy outcome, survival at 12 months after cardiac arrest with a score of 70 or higher on the Vineland Adaptive Behavior Scales, second edition (VABS-II, on which scores range from 20 to 160, with higher scores indicating better function), was evaluated among patients who had had a VABS-II score of at least 70 before the cardiac arrest. Results The trial was terminated because of futility after 329 patients had undergone randomization. Among the 257 patients who had a VABS-II score of at least 70 before cardiac arrest and who could be evaluated, the rate of the primary efficacy outcome did not differ significantly between the hypothermia group and the normothermia group (36% [48 of 133 patients] and 39% [48 of 124 patients], respectively; relative risk, 0.92; 95% confidence interval [CI], 0.67 to 1.27; P=0.63). Among 317 patients who could be evaluated for change in neurobehavioral function, the change in VABS-II score from baseline to 12 months did not differ significantly between the groups (P=0.70). Among 327 patients who could be evaluated for 1-year survival, the rate of 1-year survival did not differ significantly between the hypothermia group and the normothermia group (49% [81 of 166 patients] and 46% [74 of 161 patients], respectively; relative risk, 1.07; 95% CI, 0.85 to 1.34; P=0.56). 
The incidences of blood-product use, infection, and serious adverse events, as well as 28-day mortality, did not differ significantly between the groups. Conclusions Among comatose children who survived in-hospital cardiac arrest, therapeutic hypothermia, as compared with therapeutic normothermia, did not confer a significant benefit in survival with a favorable functional outcome at 1 year. (Funded by the National Heart, Lung, and Blood Institute; THAPCA-IH ClinicalTrials.gov number, NCT00880087.)
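The primary-outcome relative risk can be checked from the reported counts; the crude ratio computed here (≈0.93) is in line with the trial's reported 0.92, with small differences attributable to rounding and the trial's analysis:

```python
# Crude relative risk of the primary outcome from the reported THAPCA-IH counts:
# 48/133 in the hypothermia group vs 48/124 in the normothermia group.
def risk_ratio(events_t, n_t, events_c, n_c):
    return (events_t / n_t) / (events_c / n_c)

rr = risk_ratio(48, 133, 48, 124)
print(round(rr, 2))  # 0.93
```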

Concepts: Heart, Relative risk, Cardiac arrest, Asystole, Therapeutic hypothermia, Drowning


Background Chronic lymphocytic leukemia (CLL) primarily affects older persons who often have coexisting conditions in addition to disease-related immunosuppression and myelosuppression. We conducted an international, open-label, randomized phase 3 trial to compare two oral agents, ibrutinib and chlorambucil, in previously untreated older patients with CLL or small lymphocytic lymphoma. Methods We randomly assigned 269 previously untreated patients who were 65 years of age or older and had CLL or small lymphocytic lymphoma to receive ibrutinib or chlorambucil. The primary end point was progression-free survival as assessed by an independent review committee. Results The median age of the patients was 73 years. During a median follow-up period of 18.4 months, ibrutinib resulted in significantly longer progression-free survival than did chlorambucil (median, not reached vs. 18.9 months), with a risk of progression or death that was 84% lower with ibrutinib than with chlorambucil (hazard ratio, 0.16; P<0.001). Ibrutinib significantly prolonged overall survival; the estimated survival rate at 24 months was 98% with ibrutinib versus 85% with chlorambucil, with a relative risk of death that was 84% lower in the ibrutinib group than in the chlorambucil group (hazard ratio, 0.16; P=0.001). The overall response rate was higher with ibrutinib than with chlorambucil (86% vs. 35%, P<0.001). The rates of sustained increases from baseline values in the hemoglobin and platelet levels were higher with ibrutinib. Adverse events of any grade that occurred in at least 20% of the patients receiving ibrutinib included diarrhea, fatigue, cough, and nausea; adverse events occurring in at least 20% of those receiving chlorambucil included nausea, fatigue, neutropenia, anemia, and vomiting. In the ibrutinib group, four patients had a grade 3 hemorrhage and one had a grade 4 hemorrhage. A total of 87% of the patients in the ibrutinib group are continuing to take ibrutinib.
Conclusions Ibrutinib was superior to chlorambucil in previously untreated patients with CLL or small lymphocytic lymphoma, as assessed by progression-free survival, overall survival, response rate, and improvement in hematologic variables. (Funded by Pharmacyclics and others; RESONATE-2 ClinicalTrials.gov number, NCT01722487.)
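The abstract describes the hazard ratio of 0.16 as a relative risk "84% lower", but a hazard ratio is not the same quantity as a crude risk ratio of death, as this check on the reported 24-month survival rates shows:

```python
# 24-month survival from the abstract: 98% (ibrutinib) vs 85% (chlorambucil).
surv_ibr, surv_chl = 0.98, 0.85

# Crude risk ratio of death at 24 months, distinct from the HR of 0.16.
risk_ratio_death = (1 - surv_ibr) / (1 - surv_chl)
print(round(risk_ratio_death, 2))  # 0.13
```

The two values are close here because events are rare and follow-up is short, but they need not be in general.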

Concepts: Cancer, Chemotherapy, Relative risk, Hematology, Leukemia, Lymphoma, Blood disorders, Small lymphocytic lymphoma


OBJECTIVE: Low-carbohydrate diets and their combination with high-protein diets have gained widespread popularity for weight control. In addition to weight loss, they may have favorable short-term effects on risk factors for cardiovascular disease (CVD). Our objective was to elucidate their long-term effects on mortality and CVD incidence. DATA SOURCES: MEDLINE, EMBASE, ISI Web of Science, and the Cochrane Library were searched for relevant articles published as of September 2012. Cohort studies with a follow-up period of at least one year were included. REVIEW METHODS: Identified articles were systematically reviewed, and those with pertinent data were selected for meta-analysis. Pooled risk ratios (RRs) with 95% confidence intervals (CIs) for all-cause mortality, CVD mortality, and CVD incidence were calculated using the random-effects model with inverse-variance weighting. RESULTS: We included 17 studies in the systematic review, followed by a meta-analysis using pertinent data. Of the 272,216 people in 4 cohort studies using the low-carbohydrate score, 15,981 (5.9%) deaths from all causes were reported. The risk of all-cause mortality among those with a high low-carbohydrate score was significantly elevated: the pooled RR (95% CI) was 1.31 (1.07-1.59). A total of 3,214 (1.3%) CVD deaths among 249,272 subjects in 3 cohort studies and 5,081 (2.3%) incident CVD cases among 220,691 people in 4 different cohort studies were reported. The risks of CVD mortality and incidence were not statistically significantly increased: the pooled RRs (95% CIs) were 1.10 (0.98-1.24) and 0.98 (0.78-1.24), respectively. Analyses using the low-carbohydrate/high-protein score yielded similar results. CONCLUSION: Low-carbohydrate diets were associated with a significantly higher risk of all-cause mortality, but they were not significantly associated with CVD mortality or incidence.
However, this analysis is based on a limited number of observational studies, and large-scale trials on the complex interactions between low-carbohydrate diets and long-term outcomes are needed.
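The pooling step described in the review methods (inverse-variance weighting of log risk ratios) can be sketched as follows. The study-level RRs and CIs below are hypothetical, and for brevity this is the fixed-effect version; the paper's random-effects model additionally adds a between-study variance to each weight:

```python
import math

def pool_fixed(rrs_with_cis):
    """Fixed-effect inverse-variance pooling of risk ratios on the log scale.

    rrs_with_cis: list of (rr, ci_low, ci_high) tuples, each with a 95% CI.
    """
    num = den = 0.0
    for rr, lo, hi in rrs_with_cis:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the 95% CI
        w = 1 / se**2                                    # inverse-variance weight
        num += w * math.log(rr)
        den += w
    return math.exp(num / den)

# Hypothetical study-level results, not the studies in this meta-analysis.
studies = [(1.4, 1.1, 1.8), (1.2, 0.9, 1.6), (1.5, 1.0, 2.2)]
print(round(pool_fixed(studies), 2))
```

Narrower confidence intervals yield larger weights, so precise studies dominate the pooled estimate.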

Concepts: Scientific method, Epidemiology, Clinical trial, Medical statistics, Risk, Evidence-based medicine, Actuarial science, Relative risk


Background The frequency of planned out-of-hospital birth in the United States has increased in recent years. The value of studies assessing the perinatal risks of planned out-of-hospital birth versus hospital birth has been limited by cases in which transfer to a hospital is required and a birth that was initially planned as an out-of-hospital birth is misclassified as a hospital birth. Methods We performed a population-based, retrospective cohort study of all births that occurred in Oregon during 2012 and 2013, using data from newly revised Oregon birth certificates that allowed hospital births to be disaggregated into planned in-hospital births and planned out-of-hospital births that took place in the hospital after a woman’s intrapartum transfer. We assessed perinatal morbidity and mortality, maternal morbidity, and obstetrical procedures according to the planned birth setting (out of hospital vs. hospital). Results Planned out-of-hospital birth was associated with a higher rate of perinatal death than was planned in-hospital birth (3.9 vs. 1.8 deaths per 1000 deliveries, P=0.003; odds ratio after adjustment for maternal characteristics and medical conditions, 2.43; 95% confidence interval [CI], 1.37 to 4.30; adjusted risk difference, 1.52 deaths per 1000 births; 95% CI, 0.51 to 2.54). The odds for neonatal seizure were higher, and the odds for admission to a neonatal intensive care unit were lower, with planned out-of-hospital birth than with planned in-hospital birth. Planned out-of-hospital birth was also strongly associated with unassisted vaginal delivery (93.8% vs. 71.9% with planned in-hospital birth; P<0.001) and with decreased odds for obstetrical procedures. Conclusions Perinatal mortality was higher with planned out-of-hospital birth than with planned in-hospital birth, but the absolute risk of death was low in both settings.
(Funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development.).
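The crude contrasts implied by the reported death rates can be computed directly; the paper's adjusted odds ratio (2.43) and adjusted risk difference (1.52 per 1000) differ because they account for maternal characteristics and medical conditions:

```python
# Crude perinatal death rates from the abstract, per 1000 deliveries.
p_out, p_in = 3.9 / 1000, 1.8 / 1000

crude_rd = (p_out - p_in) * 1000                          # deaths per 1000
crude_or = (p_out / (1 - p_out)) / (p_in / (1 - p_in))    # unadjusted odds ratio

print(f"crude RD = {crude_rd:.1f} per 1000, crude OR = {crude_or:.2f}")
```

With outcomes this rare, the crude odds ratio (≈2.17) is nearly identical to the crude risk ratio.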

Concepts: Cohort study, Childbirth, Epidemiology, Death, Mortality rate, Relative risk, Eunice Kennedy Shriver, National Institute of Child Health and Human Development


Background: Tibiotalocalcaneal (TTC) arthrodesis using a nail has been shown to be an effective salvage technique; however, there is a risk of major amputation. A better understanding of the relative risk of amputation after TTC fusion, and the factors that influence it, could inform the preoperative consultation and guide discussion on the economics of limb salvage. Methods: One hundred seventy-nine limbs were treated with TTC fusion with an intramedullary nail. A comprehensive chart and radiographic review was performed using our intramedullary nail database. Patients were divided into those who went on to eventual amputation and those with successful salvage of their limb. Variables from the database were used to build a statistical model with a biostatistician. Final results were presented, and a formula to determine the probability of amputation was created. Results: There were 21 limbs that were eventually treated with major amputation, representing an overall salvage rate of 88.2% (158/179 patients). Age was a factor in amputation risk, and the highest risk factor for amputation was diabetes, with an odds ratio of 7.01 (P = .0019). The odds of amputation were 6.2 times and 3 times greater for patients undergoing revisions and those with preoperative ulcers, respectively. The probability of amputation could be found preoperatively using the derived equation e^x/(1 + e^x), where x is a function of age, diabetes, revision, and ulceration. Conclusion: TTC arthrodesis with a retrograde intramedullary nail has a high rate of limb salvage across a wide range of indications and medical comorbidities. In this patient cohort, diabetes was the most notable risk factor for amputation, followed by revision surgery, preoperative ulceration, and age. A model has been built to help predict the risk of amputation. Level of Evidence: Level II, prognostic.
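The paper's amputation formula has the standard logistic form. The abstract does not give the fitted coefficients, so the intercept below is hypothetical; the coefficients are back-derived from the reported odds ratios (diabetes 7.01, revision 6.2, ulcer 3.0) purely for illustration:

```python
import math

def amputation_probability(x):
    """Logistic transform e^x/(1 + e^x): maps a log-odds score to a probability."""
    return math.exp(x) / (1 + math.exp(x))

INTERCEPT = -4.0             # hypothetical baseline log odds, not from the paper
B_DIABETES = math.log(7.01)  # ~1.95, from the reported OR
B_REVISION = math.log(6.2)   # ~1.82
B_ULCER = math.log(3.0)      # ~1.10

# Hypothetical patient: diabetic, primary (non-revision) fusion, no ulcer.
x = INTERCEPT + B_DIABETES
print(round(amputation_probability(x), 3))
```

A log-odds score of 0 corresponds to a 50% probability; each risk factor shifts the score by the log of its odds ratio.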

Concepts: Epidemiology, Statistics, Medical statistics, Risk, Surgery, Relative risk, Odds ratio, Biostatistics


Objective To test the hypotheses that physical activity in midlife is not associated with a reduced risk of dementia and that the preclinical phase of dementia is characterised by a decline in physical activity. Design Prospective cohort study with a mean follow-up of 27 years. Setting Civil service departments in London (Whitehall II study). Participants 10 308 participants aged 35-55 years at study inception (1985-88). Exposures included time spent in mild, moderate to vigorous, and total physical activity, assessed seven times between 1985 and 2013 and categorised as “recommended” if the duration of moderate to vigorous physical activity was 2.5 hours/week or more. Main outcome measures A battery of cognitive tests was administered up to four times from 1997 to 2013, and incident dementia cases (n=329) were identified through linkage to hospital, mental health services, and mortality registers until 2015. Results Mixed effects models showed no association between physical activity and subsequent 15 year cognitive decline. Similarly, Cox regression showed no association between physical activity and risk of dementia over an average 27 year follow-up (hazard ratio in the “recommended” physical activity category 1.00, 95% confidence interval 0.80 to 1.24). For trajectories of hours/week of total, mild, and moderate to vigorous physical activity in people with dementia compared with those without dementia (all others), no differences were observed between 28 and 10 years before diagnosis of dementia. However, physical activity in people with dementia began to decline up to nine years before diagnosis (difference in moderate to vigorous physical activity -0.39 hours/week; P=0.05), and the difference became more pronounced (-1.03 hours/week; P=0.005) at diagnosis. Conclusion This study found no evidence of a neuroprotective effect of physical activity.
Previous findings showing a lower risk of dementia in physically active people may be attributable to reverse causation; that is, to a decline in physical activity levels in the preclinical phase of dementia.

Concepts: Cohort study, Cohort, Epidemiology, Proportional hazards models, Actuarial science, Relative risk, Physical exercise, Cognition


Objective To determine the attributable risk of community acquired pneumonia on the incidence of heart failure throughout the age range of affected patients and the severity of the infection. Design Cohort study. Setting Six hospitals and seven emergency departments in Edmonton, Alberta, Canada, 2000-02. Participants 4988 adults with community acquired pneumonia and no history of heart failure were prospectively recruited and matched on age, sex, and setting of treatment (inpatient or outpatient) with up to five adults without pneumonia (controls) or prevalent heart failure (n=23 060). Main outcome measures Risk of hospital admission for incident heart failure or a combined endpoint of heart failure or death up to 2012, evaluated using multivariable Cox proportional hazards analyses. Results The average age of participants was 55 years, 2649 (53.1%) were men, and 63.4% were managed as outpatients. Over a median of 9.9 years (interquartile range 5.9-10.6), 11.9% (n=592) of patients with pneumonia had incident heart failure compared with 7.4% (n=1712) of controls (adjusted hazard ratio 1.61, 95% confidence interval 1.44 to 1.81). Patients with pneumonia aged 65 or less had the lowest absolute increase (but greatest relative risk) of heart failure compared with controls (4.8% v 2.2%; adjusted hazard ratio 1.98, 95% confidence interval 1.5 to 2.53), whereas patients with pneumonia aged more than 65 years had the highest absolute increase (but lowest relative risk) of heart failure (24.8% v 18.9%; adjusted hazard ratio 1.55, 1.36 to 1.77). Results were consistent in the short term (90 days) and intermediate term (one year) and whether patients were treated in hospital or as outpatients. Conclusion Our results show that community acquired pneumonia substantially increases the risk of heart failure across the age and severity range of cases. This should be considered when formulating post-discharge care plans and preventive strategies, and when assessing downstream episodes of dyspnoea.
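The age contrast in the results (larger relative risk in younger patients, larger absolute increase in older patients) follows directly from the reported crude percentages:

```python
# Crude heart-failure risks from the abstract, pneumonia vs matched controls.
def contrasts(p_pneumonia, p_control):
    """Return the (relative risk, absolute risk difference) pair."""
    return p_pneumonia / p_control, p_pneumonia - p_control

rr_young, ard_young = contrasts(0.048, 0.022)  # aged 65 or less
rr_old, ard_old = contrasts(0.248, 0.189)      # aged more than 65

print(f"<=65: RR {rr_young:.2f}, absolute increase {ard_young:.1%}")
print(f">65:  RR {rr_old:.2f}, absolute increase {ard_old:.1%}")
```

A low baseline risk inflates the ratio even when the absolute difference is modest, which is why both measures are worth reporting.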

Concepts: Epidemiology, Survival analysis, Hospital, Medical statistics, Risk, Relative risk, Incidence, Hazard ratio


Objectives To evaluate the risk of all cause mortality associated with initiating compared with not initiating benzodiazepines in adults, and to address potential treatment barriers and confounding related to the use of a non-active comparator group. Design Retrospective cohort study. Setting Large de-identified US commercial healthcare database (Optum Clinformatics Datamart). Participants 1:1 high dimensional propensity score matched cohort of benzodiazepine initiators, and randomly selected benzodiazepine non-initiators with a medical visit within 14 days of the start of benzodiazepine treatment (n=1 252 988), between July 2004 and December 2013. To address treatment barriers and confounding, patients were required to have filled one or more prescriptions for any medication in the 90 days and 91-180 days before the index date (ie, the date of starting benzodiazepine treatment for initiators and the date of the selected medical visit for benzodiazepine non-initiators), and the high dimensional propensity score was estimated on the basis of more than 300 covariates. Main outcome measure All cause mortality, determined by linkage with the Social Security Administration Death Master File. Results Over a six month follow-up period, 5061 and 4691 deaths occurred among high dimensional propensity score matched benzodiazepine initiators versus non-initiators (9.3 v 9.4 events per 1000 person years; hazard ratio 1.00, 95% confidence interval 0.96 to 1.04). A 4% (95% confidence interval 1% to 8%) to 9% (2% to 7%) increase in mortality risk was observed associated with the start of benzodiazepine treatment for follow-ups of 12 and 48 months and in subgroups of younger patients and patients initiating short acting agents.
In secondary analyses comparing 1:1 high dimensional propensity score matched patients initiating benzodiazepines with an active comparator, ie, patients starting treatment with selective serotonin reuptake inhibitor antidepressants, benzodiazepine use was associated with a 9% (95% confidence interval 3% to 16%) increased risk. Conclusions This large population based cohort study suggests either no increase or at most a minor increase in the risk of all cause mortality associated with benzodiazepine initiation. If a detrimental effect exists, it is likely to be much smaller than previously stated and to have uncertain clinical relevance. Residual confounding likely explains at least part of the small increase in mortality risk observed in selected analyses.
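The crude incidence-rate ratio implied by the reported event rates is consistent with the null hazard ratio of 1.00; a rate ratio and a hazard ratio are different estimands, but they agree closely here because the rates are nearly identical:

```python
# Event rates from the abstract: 9.3 vs 9.4 events per 1000 person-years,
# benzodiazepine initiators vs matched non-initiators over six months.
rate_initiators, rate_noninitiators = 9.3, 9.4

rate_ratio = rate_initiators / rate_noninitiators
print(round(rate_ratio, 2))  # 0.99
```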

Concepts: Experimental design, Epidemiology, Actuarial science, Relative risk, Selective serotonin reuptake inhibitor, Statistical terminology, Benzodiazepine, Reuptake inhibitor