Concept: Receiver operating characteristic
CCL11, a protein previously associated with age-associated cognitive decline, is increased in the brain and cerebrospinal fluid (CSF) in chronic traumatic encephalopathy (CTE) compared with Alzheimer’s disease (AD). In a cohort of 23 deceased American football players with neuropathologically verified CTE, 50 subjects with neuropathologically diagnosed AD, and 18 non-athlete controls, CCL11 was measured by ELISA in the dorsolateral frontal cortex (DLFC) and CSF. CCL11 levels were significantly increased in the DLFC of subjects with CTE (fold change = 1.234, p < 0.050) compared with non-athlete controls and AD subjects without a history of head trauma. This increase correlated with years of exposure to American football (β = 0.426, p = 0.048) independent of age (β = -0.046, p = 0.824). Preliminary analyses of a subset of subjects with available post-mortem CSF showed a trend toward increased CCL11 among individuals with CTE (p = 0.069), mirroring the increase in the DLFC. Furthermore, an association between CSF CCL11 levels and the number of years of exposure to football (β = 0.685, p = 0.040) was observed independent of age (β = -0.103, p = 0.716). Finally, a receiver operating characteristic (ROC) curve analysis demonstrated that CSF CCL11 accurately distinguished CTE subjects from non-athlete controls and AD subjects (AUC = 0.839, 95% CI 0.62-1.058, p = 0.028). Overall, the current findings provide preliminary evidence that CCL11 may be a novel target for future CTE biomarker studies.
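ROC analyses like this one reduce to a rank comparison: the AUC equals the probability that a randomly chosen CTE subject shows a higher CSF CCL11 level than a randomly chosen comparison subject (ties counted as half). A minimal pure-Python sketch of that equivalence; the concentration values below are invented for illustration, not the study's data:

```python
def auc(cases, controls):
    """AUC = P(randomly chosen case outranks a randomly chosen control),
    with ties counted as half a win (the Mann-Whitney formulation)."""
    wins = ties = 0
    for x in cases:
        for y in controls:
            if x > y:
                wins += 1
            elif x == y:
                ties += 1
    return (wins + 0.5 * ties) / (len(cases) * len(controls))

# Hypothetical CSF CCL11 concentrations (pg/mL), for illustration only
cte = [48.0, 52.5, 61.0, 55.2, 70.1]
comparison = [40.3, 44.8, 51.0, 38.9, 47.6]
print(round(auc(cte, comparison), 3))  # → 0.96
```

An AUC of 0.5 means the marker ranks cases and controls no better than chance; 1.0 means perfect separation.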
Steps/day translation of the moderate-to-vigorous physical activity guideline for children and adolescents
- The International Journal of Behavioral Nutrition and Physical Activity
BACKGROUND: An evidence-based steps/day translation of U.S. federal guidelines for youth to engage in >=60 minutes/day of moderate-to-vigorous physical activity (MVPA) would help health researchers, practitioners, and lay professionals charged with increasing youth’s physical activity (PA). The purpose of this study was to determine the number of free-living steps/day (both raw and adjusted to a pedometer scale) that correctly classified children (6–11 years) and adolescents (12–17 years) as meeting the 60-minute MVPA guideline using the 2005–2006 National Health and Nutrition Examination Survey (NHANES) accelerometer data, and to evaluate the 12,000 steps/day recommendation recently adopted by the President’s Challenge Physical Activity and Fitness Awards Program. METHODS: Analyses were conducted among children (n = 915) and adolescents (n = 1,302) in 2011 and 2012. Receiver Operating Characteristic (ROC) curve plots and classification statistics revealed candidate steps/day cut points that discriminated meeting/not meeting the MVPA threshold by age group, gender, and different accelerometer activity cut points. The Evenson and two Freedson age-specific (3 and 4 METs) cut points were used to define minimum MVPA, and optimal steps/day values were examined for raw steps and adjusted to a pedometer scale to facilitate translation to lay populations. RESULTS: For boys and girls (6–11 years) with >=60 minutes/day of MVPA, a range of 11,500–13,500 uncensored steps/day was the optimal range that balanced classification errors. For adolescent boys and girls (12–17 years) with >=60 minutes/day of MVPA, 11,500–14,000 uncensored steps/day was optimal. Translation to a pedometer scale reduced these minimum values by 2,500 steps/day, to 9,000 steps/day. The area under the curve was >=84% in all analyses. CONCLUSIONS: No single study has definitively identified a precise and unyielding steps/day value for youth.
Considering the other evidence to date, we propose a reasonable ‘rule of thumb’ value of >= 11,500 accelerometer-determined steps/day for both children and adolescents (and both genders), accepting that more is better. For practical applications, 9,000 steps/day appears to be a more pedometer-friendly value.
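Cut points like these are typically read off the ROC curve by balancing sensitivity against specificity, for example by maximizing Youden's J. A minimal sketch of that selection, assuming a simple "at least this many steps/day" classification rule; the steps/day values and guideline labels below are invented for illustration:

```python
def roc_cutpoints(scores, meets):
    """For each observed score as a candidate '>= cut point' rule,
    return (cut point, sensitivity, specificity)."""
    pos = sum(meets)
    neg = len(meets) - pos
    out = []
    for t in sorted(set(scores)):
        sens = sum(1 for s, m in zip(scores, meets) if s >= t and m) / pos
        spec = sum(1 for s, m in zip(scores, meets) if s < t and not m) / neg
        out.append((t, sens, spec))
    return out

def youden_optimal(scores, meets):
    """Cut point maximizing Youden's J = sensitivity + specificity - 1
    (equivalently, maximizing sensitivity + specificity)."""
    return max(roc_cutpoints(scores, meets), key=lambda p: p[1] + p[2])[0]

# Hypothetical steps/day; 1 = meets the 60-min/day MVPA guideline
steps = [7500, 8200, 9100, 10400, 11600, 12300, 13500, 14200]
meets = [0,    0,    0,    1,     1,     1,     1,     1]
print(youden_optimal(steps, meets))  # → 10400
```

Real cut-point selection on NHANES data also weighs the asymmetric costs of false positives and false negatives, which a plain Youden maximization ignores.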
BACKGROUND: The symptom of tongue deviation is observed in stroke and transient ischemic attack. Nevertheless, there is much room for interpretation in the tongue deviation test; the crucial missing element is an effective method for quantifying tongue deviation. If the features of tongue deviation can be quantified and the relationship between the deviation angle and stroke scientifically verified, the information provided by the tongue will be helpful in recognizing a warning sign of a stroke. METHODS: In this study, a quantification method for the tongue deviation angle was proposed for the first time to characterize stroke patients. We captured tongue images of stroke patients (15 males and 10 females, ranging between 55 and 82 years of age), transient ischemic attack (TIA) patients (16 males and 9 females, ranging between 53 and 79 years of age), and normal subjects (14 males and 11 females, ranging between 52 and 80 years of age) to analyze whether the method is effective. In addition, we used the receiver operating characteristic (ROC) curve for sensitivity analysis and determined the threshold value of the tongue deviation angle that serves as a warning sign of a stroke. RESULTS: The means and standard deviations of the tongue deviation angles of the stroke, TIA, and normal groups were 6.9 ± 3.1, 4.9 ± 2.1, and 1.4 ± 0.8 degrees, respectively. By the unpaired Student’s t-test, the p-value between the stroke group and the TIA group was 0.015 (>0.01), indicating no significant difference in the tongue deviation angle between these two groups at the 0.01 level. The p-values between the stroke group and the normal group, and between the TIA group and the normal group, were both less than 0.01. These results show significant differences in the tongue deviation angle between the patient groups (stroke and TIA patients) and the normal group.
These results also imply that the tongue deviation angle can effectively separate the patient groups (stroke and TIA patients) from the normal group. By visual examination, 40% and 32% of stroke patients, 24% and 16% of TIA patients, and 4% and 0% of normal subjects were judged to have tongue deviation when examined by physicians “A” and “B”, respectively. This inter-observer variation underscores the need for a quantification method in the clinical setting. In the ROC analysis, the area under the curve (AUC = 0.96) indicates good discrimination, and a tongue deviation angle greater than the optimal threshold value (3.2°) predicts a risk of stroke. CONCLUSIONS: In summary, we developed an effective quantification method to characterize the tongue deviation angle, and we confirmed the feasibility of using the tongue deviation angle as an early warning sign of an impending stroke.
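The group comparisons above use the unpaired Student's t-test, whose statistic is the difference in group means divided by a pooled standard error. A minimal pure-Python sketch; the angle samples below are hypothetical, not the study's measurements, and no p-value lookup against the t distribution is included:

```python
from statistics import mean, variance

def unpaired_t(a, b):
    """Pooled-variance (Student's) unpaired t statistic."""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical deviation angles (degrees), for illustration only
stroke = [6.1, 9.8, 4.2, 7.5, 3.9]
normal = [1.2, 0.6, 2.1, 1.5, 0.9]
print(round(unpaired_t(stroke, normal), 2))
```

The resulting statistic would be compared against the t distribution with na + nb - 2 degrees of freedom to obtain the p-values reported above.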
How reliable are results on the spatial distribution of biodiversity that are based on databases? Many studies have documented the uncertainty that sampling-effort bias introduces into this kind of analysis and the need to quantify it. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a newly proposed one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness, using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed excellent discrimination between well-sampled and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in the simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship, owing to their null discrimination capability. Quantifying sampling effort is necessary to account for uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator in all scenarios of sampling exhaustiveness, and it can therefore be applied to most databases to enhance the reliability of biodiversity analyses.
Although a large number of microRNAs (miRNAs) have been validated to play crucial roles in human biology and disease, there is little systematic insight into the nature and scale of the potential synergistic interactions among miRNAs themselves. Here we established an integrated parameter, the synergy score, to determine miRNA synergy by combining the two mechanisms of miRNA-miRNA interaction, miRNA-mediated gene co-regulation and functional association between target gene products, into a single parameter. Receiver operating characteristic (ROC) analysis indicated that the synergy score accurately identified gene ontology-defined miRNA synergy (AUC = 0.9415, p < 0.001). Only a very small portion of random miRNA-miRNA combinations generated potent synergy, implying that widespread synergy is unlikely. However, two miRNAs were more likely to act synergistically when they targeted more key genes. Compared with other miRNAs, miR-21 was a highly exceptional case owing to its frequent appearance in the top synergistic miRNA pairs. This result highlighted its essential role in coordinating or strengthening the physiological and pathological functions of other miRNAs. The synergistic effect of miR-21 and miR-1 was functionally validated through their significant influence on myocardial apoptosis, cardiac hypertrophy, and fibrosis. The novel approach established in this study enables easy and effective identification of condition-restricted potent miRNA synergy simply by concentrating the available protein interactomics and miRNA-target interaction data into a single synergy score. Our results may be important for understanding synergistic gene regulation by miRNAs and may have significant implications for miRNA combination therapy of cardiovascular disease.
Earlier detection of colorectal cancer greatly improves prognosis, largely through surgical excision of neoplastic polyps. These include benign adenomas, which can transform over time into malignant adenocarcinomas. This progression may be associated with changes in full blood count indices. An existing risk algorithm derived in Israel stratifies individuals according to colorectal cancer risk using full blood count data, but it has not been validated in the UK. We undertook a retrospective analysis using the Clinical Practice Research Datalink. Patients aged over 40 with full blood count data were risk-stratified and followed up for a diagnosis of colorectal cancer over a range of time intervals. The primary outcome was the area under the receiver operating characteristic curve for the 18-24-month interval. We also undertook a case-control analysis (matching for age, sex, and year of risk score) and a cohort study of patients undergoing full blood count testing during 2012 to estimate predictive values. We included 2,550,119 patients. The area under the curve for the 18-24-month interval was 0.776 [95% confidence interval (CI): 0.771, 0.781]; performance improved as the time interval shortened. The area under the curve for the age-matched case-control analysis was 0.583 [0.574, 0.591]. For the population risk-scored in 2012, the positive predictive value at 99.5% specificity was 8.8%, with a negative predictive value of 99.6%. The algorithm offers an additional means of identifying risk of colorectal cancer and could support other approaches to early detection, including screening and active case finding.
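Predictive values like those reported above depend on disease prevalence as well as the chosen operating point: given a sensitivity and specificity, PPV and NPV follow from Bayes' rule. A minimal sketch of that relationship; the sensitivity, specificity, and prevalence below are hypothetical values for illustration, not the study's figures:

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV from a test's operating point and disease prevalence
    (expected cell proportions of the 2x2 confusion table)."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    fn = (1 - sens) * prevalence
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical operating point: 50% sensitive, 99.5% specific,
# with a 0.5% colorectal cancer prevalence over the follow-up window
ppv, npv = predictive_values(0.50, 0.995, 0.005)
print(round(ppv, 3), round(npv, 3))  # → 0.334 0.997
```

This is why even a highly specific test yields a modest PPV when the condition is rare: false positives come from the much larger disease-free population.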
Purpose To determine if patient survival and mechanisms of right ventricular failure in pulmonary hypertension could be predicted by using supervised machine learning of three-dimensional patterns of systolic cardiac motion. Materials and Methods The study was approved by a research ethics committee, and participants gave written informed consent. Two hundred fifty-six patients (143 women; mean age ± standard deviation, 63 years ± 17) with newly diagnosed pulmonary hypertension underwent cardiac magnetic resonance (MR) imaging, right-sided heart catheterization, and 6-minute walk testing with a median follow-up of 4.0 years. Semiautomated segmentation of short-axis cine images was used to create a three-dimensional model of right ventricular motion. Supervised principal components analysis was used to identify patterns of systolic motion that were most strongly predictive of survival. Survival prediction was assessed by using difference in median survival time and area under the curve with time-dependent receiver operating characteristic analysis for 1-year survival. Results At the end of follow-up, 36% of patients (93 of 256) died, and one underwent lung transplantation. Poor outcome was predicted by a loss of effective contraction in the septum and free wall, coupled with reduced basal longitudinal motion. When added to conventional imaging and hemodynamic, functional, and clinical markers, three-dimensional cardiac motion improved survival prediction (area under the receiver operating characteristic curve, 0.73 vs 0.60, respectively; P < .001) and provided greater differentiation according to difference in median survival time between high- and low-risk groups (13.8 vs 10.7 years, respectively; P < .001). Conclusion A machine-learning survival model that uses three-dimensional cardiac motion predicts outcome independent of conventional risk factors in patients with newly diagnosed pulmonary hypertension. Online supplemental material is available for this article.
Purpose To evaluate the efficacy of deep convolutional neural networks (DCNNs) for detecting tuberculosis (TB) on chest radiographs. Materials and Methods Four deidentified HIPAA-compliant datasets, exempted from review by the institutional review board, were used in this study, consisting of 1007 posteroanterior chest radiographs. The datasets were split into training (68.0%), validation (17.1%), and test (14.9%) sets. Two different DCNNs, AlexNet and GoogLeNet, were used to classify the images as having manifestations of pulmonary TB or as healthy. Both untrained networks and networks pretrained on ImageNet were used, along with dataset augmentation using multiple preprocessing techniques. Ensembles were formed from the best-performing algorithms. For cases where the classifiers were in disagreement, an independent board-certified cardiothoracic radiologist blindly interpreted the images to evaluate a potential radiologist-augmented workflow. Receiver operating characteristic curves and areas under the curve (AUCs) were used to assess model performance, with the DeLong method used for statistical comparison of receiver operating characteristic curves. Results The best-performing classifier, an ensemble of the AlexNet and GoogLeNet DCNNs, had an AUC of 0.99. The AUCs of the pretrained models were greater than those of the untrained models (P < .001). Augmenting the dataset further increased accuracy (P = .03 for AlexNet and P = .02 for GoogLeNet). The DCNNs disagreed on 13 of the 150 test cases, which were blindly reviewed by a cardiothoracic radiologist, who correctly interpreted all 13 cases (100%). This radiologist-augmented approach resulted in a sensitivity of 97.3% and a specificity of 100%. Conclusion Deep learning with DCNNs can accurately classify TB at chest radiography with an AUC of 0.99. A radiologist-augmented approach for cases of classifier disagreement further improved accuracy. (©) RSNA, 2017.
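One common way to ensemble two classifiers, and a plausible reading of the AlexNet + GoogLeNet combination above, is to average their per-image predicted probabilities and score the averaged output with ROC AUC. A minimal sketch under that assumption (the abstract does not specify the ensembling rule, the probabilities below are invented, and the DeLong comparison is not reproduced):

```python
def auc(scores, labels):
    """Rank-based AUC: P(positive outranks negative), ties = 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble(p1, p2):
    """Average the two networks' predicted TB probabilities."""
    return [(a + b) / 2 for a, b in zip(p1, p2)]

# Hypothetical per-image TB probabilities from two classifiers
labels = [1, 1, 1, 0, 0, 0]
alex   = [0.9, 0.4, 0.8, 0.3, 0.2, 0.6]
goog   = [0.7, 0.8, 0.35, 0.4, 0.1, 0.5]
print(auc(ensemble(alex, goog), labels))  # → 1.0 (each alone scores lower)
```

Averaging helps when the two models make different mistakes: an image one network misranks can be rescued by the other's confident, correct score.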
Purpose To investigate whether multivariate pattern recognition analysis of arterial spin labeling (ASL) perfusion maps can be used for classification and single-subject prediction of patients with Alzheimer disease (AD) and mild cognitive impairment (MCI) and subjects with subjective cognitive decline (SCD) after using the W score method to remove confounding effects of sex and age. Materials and Methods Pseudocontinuous 3.0-T ASL images were acquired in 100 patients with probable AD; 60 patients with MCI, of whom 12 remained stable, 12 were converted to a diagnosis of AD, and 36 had no follow-up; 100 subjects with SCD; and 26 healthy control subjects. The AD, MCI, and SCD groups were divided into a sex- and age-matched training set (n = 130) and an independent prediction set (n = 130). Standardized perfusion scores adjusted for age and sex (W scores) were computed per voxel for each participant. Training of a support vector machine classifier was performed with diagnostic status and perfusion maps. Discrimination maps were extracted and used for single-subject classification in the prediction set. Prediction performance was assessed with receiver operating characteristic (ROC) analysis to generate an area under the ROC curve (AUC) and sensitivity and specificity distribution. Results Single-subject diagnosis in the prediction set by using the discrimination maps yielded excellent performance for AD versus SCD (AUC, 0.96; P < .01), good performance for AD versus MCI (AUC, 0.89; P < .01), and poor performance for MCI versus SCD (AUC, 0.63; P = .06). Application of the AD versus SCD discrimination map for prediction of MCI subgroups resulted in good performance for patients with MCI diagnosis converted to AD versus subjects with SCD (AUC, 0.84; P < .01) and fair performance for patients with MCI diagnosis converted to AD versus those with stable MCI (AUC, 0.71; P > .05). 
Conclusion With automated methods, age- and sex-adjusted ASL perfusion maps can be used to classify and predict diagnosis of AD, conversion of MCI to AD, stable MCI, and SCD with good to excellent accuracy and AUC values. (©) RSNA, 2016.
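The W score used in the study above is, in essence, a standardized residual: perfusion is regressed on confounders (age and sex) in a reference group, and each subject's value is expressed in units of the control residual standard deviation. A minimal one-covariate sketch of that idea, adjusting for age only (the sex term is omitted, and all numbers below are invented for illustration):

```python
from statistics import mean, pstdev

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    mx, my = mean(x), mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def w_score(value, age, ctrl_ages, ctrl_values):
    """Standardized residual after regressing out age in controls."""
    slope, intercept = fit_line(ctrl_ages, ctrl_values)
    residuals = [v - (slope * a + intercept)
                 for a, v in zip(ctrl_ages, ctrl_values)]
    return (value - (slope * age + intercept)) / pstdev(residuals)

# Hypothetical control ages and gray-matter perfusion (mL/100 g/min)
ctrl_ages = [60, 70, 80, 60, 70, 80]
ctrl_cbf  = [50, 45, 40, 52, 47, 42]
print(w_score(44, 70, ctrl_ages, ctrl_cbf))  # → -2.0
```

A strongly negative W score flags hypoperfusion beyond what the subject's age would predict, which is the signal the classifier operates on once age and sex effects are removed.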
Sepsis is a leading cause of death and is the most expensive condition to treat in U.S. hospitals. Despite targeted efforts to automate earlier detection of sepsis, current techniques rely exclusively on either standard clinical data or novel biomarker measurements. In this study, we apply machine learning techniques to assess the predictive power of combining multiple biomarker measurements from a single blood sample with electronic medical record (EMR) data for the identification of patients in the early-to-peak phase of sepsis in a large community hospital setting. Combining biomarkers and EMR data achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.81, while EMR data alone achieved an AUC of 0.75. Furthermore, a single measurement of six biomarkers (IL-6, nCD64, IL-1ra, PCT, MCP1, and G-CSF) yielded the same predictive power as collecting an additional 16 hours of EMR data (AUC of 0.80), suggesting that the biomarkers may be useful for identifying these patients earlier. Ultimately, supervised learning using a subset of biomarker and EMR data as features may be capable of identifying patients in the early-to-peak phase of sepsis in a diverse population and may provide a tool for more timely identification and intervention.