Concept: Likelihood-ratio test
Objective: A meta-analysis was performed to summarize the accumulated data on the screening performance of second-trimester sonographic markers for fetal trisomy 21. Methods: We conducted a literature search to identify 47 studies, published between 1995 and September 2012, that provided data on the incidence of sonographic markers in trisomy 21 and euploid fetuses at 14-24 weeks' gestation. Weighted independent estimations of detection rate, false positive rate, and positive and negative likelihood ratios (LR) of markers were calculated. Results: The pooled estimates of positive and negative LR were 5.85 (95% CI 5.04-6.80) and 0.80 (95% CI 0.75-0.86) for intracardiac echogenic focus, 25.78 (95% CI 12.85-51.73) and 0.94 (95% CI 0.91-0.98) for ventriculomegaly, 19.18 (95% CI 11.55-31.84) and 0.80 (95% CI 0.75-0.86) for increased nuchal fold, 10.82 (95% CI 8.43-13.72) and 0.90 (95% CI 0.86-0.94) for hyperechogenic bowel, 7.77 (95% CI 6.22-9.71) and 0.92 (95% CI 0.89-0.96) for mild hydronephrosis, 3.72 (95% CI 2.79-4.97) and 0.80 (95% CI 0.73-0.88) for short femur, 4.81 (95% CI 3.49-6.62) and 0.74 (95% CI 0.63-0.88) for short humerus, 21.48 (95% CI 11.48-40.19) and 0.71 (95% CI 0.57-0.88) for aberrant right subclavian artery (ARSA), and 23.26 (95% CI 14.23-38.03) and 0.46 (95% CI 0.36-0.58) for absent or hypoplastic nasal bone. The combined negative LR, obtained by multiplying the values of individual markers, was 0.13 (95% CI 0.05-0.29) when short femur but not short humerus was included and 0.12 (95% CI 0.06-0.29) when short humerus but not short femur was included. Conclusion: Presence of sonographic markers increases, and absence decreases, the risk for trisomy 21. Most isolated markers have only a small effect on the pre-test odds for trisomy 21, but ventriculomegaly, increased nuchal fold thickness, and ARSA confer a 3-4-fold increase in risk, and hypoplastic nasal bone a 6-7-fold increase. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
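The combination rule used above (multiplying the individual marker LRs and applying the product to the pre-test odds) can be sketched as follows. The marker values are the pooled negative LRs reported in the abstract (short femur included, short humerus excluded); the 1-in-250 pre-test risk is a hypothetical illustration, not from the study.

```python
# Sketch of combining sonographic-marker likelihood ratios via the
# multiplication rule described in the abstract. Marker values are the
# pooled negative LRs reported above; the 1-in-250 pre-test risk is a
# hypothetical illustration.

def post_test_risk(pre_test_risk, likelihood_ratio):
    """Convert risk to odds, apply the LR, and convert back to risk."""
    pre_odds = pre_test_risk / (1 - pre_test_risk)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

negative_lrs = {
    "intracardiac echogenic focus": 0.80,
    "ventriculomegaly": 0.94,
    "increased nuchal fold": 0.80,
    "hyperechogenic bowel": 0.90,
    "mild hydronephrosis": 0.92,
    "short femur": 0.80,
    "ARSA": 0.71,
    "absent/hypoplastic nasal bone": 0.46,
}

# Combined negative LR: the product of the individual values.
combined_nlr = 1.0
for lr in negative_lrs.values():
    combined_nlr *= lr
# combined_nlr rounds to 0.13, matching the reported combined negative LR.

risk_if_all_absent = post_test_risk(1 / 250, combined_nlr)
```

Multiplying LRs in this way assumes the markers are conditionally independent given trisomy status; the abstract's combined estimate carries the same assumption.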
Clinimetric Analysis of Pressure Biofeedback and Transversus Abdominis Function in Individuals With Stabilization Classification Low Back Pain.
- The Journal of orthopaedic and sports physical therapy
STUDY DESIGN: Descriptive laboratory study. OBJECTIVE: To determine if a proposed clinical test (pressure biofeedback) could detect changes in transversus abdominis (TrA) muscle thickness during an abdominal draw-in maneuver (ADIM). BACKGROUND: Pressure biofeedback may be used to assess abdominal muscle function and TrA activation during an ADIM, but has not been validated. METHODS: Forty-nine individuals (18 male, 31 female) with low back pain who met stabilization classification criteria underwent ultrasound imaging to quantify changes in TrA muscle thickness while a pressure transducer was used to measure pelvic and spine position during an ADIM. A paired t-test was used to compare differences in TrA activation ratio between groups (able or unable to maintain pressure at 40 ± 5 mmHg). Groups were further dichotomized based on TrA activation ratio (high, >1.5, or low, <1.5). Sensitivity, specificity, and likelihood ratios were calculated. RESULTS: There was no significant difference (P = .57) in TrA activation ratios between groups (able to maintain pressure, 1.59 ± 0.28; unable to maintain pressure, 1.54 ± 0.24). The pressure biofeedback test had low sensitivity of 0.22 (95% CI: 0.10, 0.42) but moderate specificity of 0.77 (95% CI: 0.58, 0.89), with a positive likelihood ratio of 0.94 (95% CI: 0.33, 2.68) and a negative likelihood ratio of 1.02 (95% CI: 0.75, 1.38). CONCLUSIONS: Successful completion of pressure biofeedback does not indicate high TrA activation. Unsuccessful completion of pressure biofeedback may be more indicative of low TrA activation, but the correlation and likelihood coefficients indicate the pressure test is likely of minimal value for detecting TrA activation. J Orthop Sports Phys Ther, Epub 16 November 2012. doi:10.2519/jospt.2013.4397.
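The PLR and NLR above follow directly from sensitivity and specificity. A minimal sketch of those definitions; recomputing from the rounded values in the abstract gives 0.96 and 1.01, slightly off the reported 0.94 and 1.02, which were presumably derived from the raw counts.

```python
# Likelihood ratios from sensitivity and specificity. Inputs here are the
# rounded values from the abstract (sens 0.22, spec 0.77), so the results
# differ slightly from the published 0.94 and 1.02.

def positive_lr(sensitivity, specificity):
    """How much a positive result multiplies the pre-test odds."""
    return sensitivity / (1 - specificity)

def negative_lr(sensitivity, specificity):
    """How much a negative result multiplies the pre-test odds."""
    return (1 - sensitivity) / specificity

plr = positive_lr(0.22, 0.77)
nlr = negative_lr(0.22, 0.77)
```

Both ratios sit near 1, meaning a result in either direction barely moves the pre-test odds, which is consistent with the authors' conclusion that the test is of minimal value.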
OBJECTIVE: To investigate diagnostic test accuracy (DTA) of different tests for obstructive sleep apnea (OSA) compared to polysomnography (PSG) in children. METHODS: We performed a systematic review according to DTA criteria published by the Cochrane Collaboration. Studies that compared any possible diagnostic test with PSG for diagnosing OSA were considered. Study quality assessment was conducted in each selected study and DTA measures recalculated by hand whenever possible. Excellent DTA was defined as positive likelihood ratio (PLR) > 10 and negative likelihood ratio (NLR) < 0.1. RESULTS: We identified 1064 potentially relevant studies, of which 33 met inclusion criteria. Study quality was generally low; 5 studies fulfilled all quality criteria and 11 studies included >100 subjects. Included studies compared 40 different tests to PSG. Only 13 studies used the currently accepted definition for OSA (i.e., apnea hypopnea index ≥1). In these studies, PLR ranged from 1.017 to ∞, NLR from 0 to 1.089. Sleep lab-based polygraphy, urinary biomarkers, and rhinomanometry (one study each) showed excellent DTA. CONCLUSION: There is limited evidence concerning diagnostic alternatives to PSG for identifying OSA in children. However, polygraphy, urinary biomarkers, and rhinomanometry may be valid tests if their apparently high DTA is confirmed by subsequent studies.
The study objective was to characterize the prognostic performance of a novel Breast Cancer Index model (BCIN+), an integration of BCI gene expression, tumor size, and grade, developed specifically to assess distant recurrence (DR) risk in HR+ breast cancer patients with 1-3 positive lymph nodes (pN1).
Experimental Design: Analysis was conducted in a well-annotated retrospective series of pN1 patients (N=402) treated with adjuvant endocrine therapy with or without chemotherapy using a pre-specified model. The primary endpoint was time-to-DR. Results were determined blinded to clinical outcome. Kaplan-Meier estimates of overall (0-15y) and late (≥5y) DR, hazard ratios and 95% CIs were estimated. Likelihood ratio statistics assessed relative contributions of prognostic information.
Results: BCIN+ classified 81 patients (20%) as low risk with a 15-year DR rate of 1.3% (95% CI: 0.0-3.7%) vs 321 patients as high risk with a DR rate of 29.0% (23.2-34.4%). In patients DR-free for ≥5y (n=349), the late DR rate was 1.3% (95% CI: 0.0-3.7%) and 16.1% (10.6-21.3%) in low- and high-risk groups, respectively. BCI gene expression alone was significantly prognostic (∆LR-χ2=20.12, P<0.0001). Addition of tumor size (∆LR-χ2=13.29, P=0.0003) and grade (∆LR-χ2=12.72, P=0.0004) significantly improved prognostic performance. BCI added significant prognostic information to tumor size (∆LR-χ2=17.55, P<0.0001); addition to tumor grade was incremental (∆LR-χ2=2.38, P=0.1) with considerable overlap between prognostic values (∆LR-χ2=17.74).
Conclusions: The integrated BCIN+ identified 20% of pN1 patients with limited risk of recurrence over 15y, in whom extended endocrine treatment may be spared. Ongoing studies will characterize combined clinical-genomic risk assessment in node-positive patients.
Biliary atresia (BA) is the leading cause of pediatric end-stage liver disease in the United States. Education of parents in the perinatal period with stool cards depicting acholic and normal stools has been associated with improved time-to-diagnosis and survival in BA. PoopMD is a mobile application that utilizes a smartphone's camera and color recognition software to analyze an infant's stool and determine if additional follow-up is indicated. PoopMD was developed using custom HTML5/CSS3 and wrapped to work on iOS and Android platforms. In order to define the gold standard regarding stool color, seven pediatricians were asked to review 45 photographs of infant stool and rate them as acholic, normal, or indeterminate. Samples for which 6+ pediatricians demonstrated agreement defined the gold standard, and only these samples were included in the analysis. Accuracy of PoopMD was assessed using an iPhone 5s with incandescent lighting. Variability in analysis of stool photographs as acholic versus normal, with an indeterminate rating weighted as 50% agreement (kappa), was compared between three laypeople and one expert user. Variability in output was also assessed between an iPhone 5s and a Samsung Galaxy S4, as well as between incandescent lighting and compact fluorescent lighting. Six-plus pediatricians agreed on 27 normal and 7 acholic photographs; no photographs were defined as indeterminate. The sensitivity was 7/7 (100%). The specificity was 24/27 (89%) with 3/27 labeled as indeterminate; no photos of normal stool were labeled as acholic. The Laplace-smoothed positive likelihood ratio was 6.44 (95% CI 2.52 to 16.48) and the negative likelihood ratio was 0.13 (95% CI 0.02 to 0.83). κ_user was 0.68, κ_phone was 0.88, and κ_light was 0.81. Therefore, in this pilot study, PoopMD accurately differentiates acholic from normal color with substantial agreement across users, and almost perfect agreement across two popular smartphones and ambient light settings.
PoopMD may be a valuable tool to help parents identify acholic stools in the perinatal period, and provide guidance as to whether additional evaluation with their pediatrician is indicated. PoopMD may improve outcomes for children with BA.
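The Laplace-smoothed likelihood ratios above can be reproduced from the reported counts (7/7 acholic detected; 24 of 27 normals called normal, 3 indeterminate) if the indeterminate calls are treated as test-positive. That treatment is an assumption on our part, but it recovers the published point estimates.

```python
# Laplace-smoothed likelihood ratios from a 2x2 table: one pseudo-count is
# added to each cell so that zero counts (here, zero false negatives) do not
# produce degenerate estimates. Counting PoopMD's 3 indeterminate calls as
# test-positive is our assumption; it reproduces the reported 6.44 and 0.13.

def laplace_lrs(tp, fn, fp, tn):
    sens = (tp + 1) / (tp + fn + 2)   # smoothed sensitivity
    fpr = (fp + 1) / (fp + tn + 2)    # smoothed false-positive rate
    return sens / fpr, (1 - sens) / (1 - fpr)

plr, nlr = laplace_lrs(tp=7, fn=0, fp=3, tn=24)
```

Without smoothing, the 0/7 false-negative cell would force the negative LR to exactly 0, an implausibly strong claim from seven acholic samples; the pseudo-counts pull both estimates toward 1.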
Background. Despite guideline recommendations, chest radiography (CR) for the diagnosis of community-acquired pneumonia (CAP) in children is commonly used even in mild and/or uncomplicated cases. The aim of this study is to assess the reliability of lung ultrasonography (LUS) as an alternative test in these cases and to suggest a new diagnostic algorithm. Methods. We reviewed the medical records of all patients admitted to the pediatric ward from February 1, 2013 to December 31, 2014 with respiratory signs and symptoms. We selected only cases with a mild/uncomplicated clinical course in which CR and LUS were performed within 24 h of each other. The LUS was not part of the required exams recorded in medical records but was performed independently. The discharge diagnosis, made only on the basis of history and physical examination, laboratory and instrumental tests, including CR (without LUS), was used as a reference test to compare CR and LUS findings. Results. Of 52 selected medical records, CAP diagnosis was confirmed in 29 (55.7%). CR was positive in 25 cases, whereas LUS detected pneumonia in 28 cases. Four patients with negative CR had positive ultrasound findings; conversely, one patient with negative LUS had positive radiographic findings. For diagnosing pneumonia, LUS had a sensitivity of 96.5% (95% CI [82.2%-99.9%]), a specificity of 95.6% (95% CI [78.0%-99.9%]), a positive likelihood ratio of 22.2 (95% CI [3.2-151.2]), and a negative likelihood ratio of 0.04 (95% CI [0.01-0.25]). Conclusion. LUS can be considered a valid alternative diagnostic tool for CAP in children, and its use should be promoted as a first approach in accordance with our new diagnostic algorithm.
BACKGROUND: Data on the combined effect of lifestyles on mortality in older people have generally been collected from highly selected populations and have been limited to traditional health behaviors. In this study, we examined the combined impact of three traditional (smoking, physical activity and diet) and three non-traditional health behaviors (sleep duration, sedentary time and social interaction) on mortality among older adults. METHODS: A cohort of 3,465 individuals, representative of the Spanish population aged ≥60 years, was established in 2000/2001 and followed-up prospectively through 2011. At baseline, the following positive behaviors were self-reported: never smoking or quitting tobacco >15 years earlier, being very or moderately physically active, having a healthy diet score ≥ median in the cohort, sleeping 7 to 8 h/d, spending <8 h/d in sitting time, and seeing friends daily. Analyses were performed with Cox regression and adjusted for the main confounders. RESULTS: During an average nine-year follow-up, 1,244 persons died. Hazard ratios (95% confidence interval) for all-cause mortality among participants with two, three, four, five and six compared to those with zero to one positive behaviors were, respectively, 0.63 (0.46 to 0.85), 0.41 (0.31 to 0.55), 0.32 (0.24 to 0.42), 0.26 (0.20 to 0.35) and 0.20 (0.15 to 0.28) (P for trend <0.001). The results were similar regardless of age, sex and health status at baseline. Those with six vs. zero to one positive health behaviors had an all-cause mortality risk equivalent to being 14 years younger. Adding the three non-traditional to the three traditional behaviors improved the model fit (likelihood ratio test, P <0.001) and the accuracy of mortality prediction (c-statistic: +0.0031, P = 0.040). CONCLUSIONS: Adherence to some traditional and non-traditional health behaviors may substantially reduce mortality risk in older adults.
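The likelihood ratio test used to compare the nested survival models works by doubling the gap in log-likelihood between the two fits and referring that statistic to a χ² distribution with degrees of freedom equal to the number of added parameters. A stdlib-only sketch follows; the log-likelihood values are hypothetical, since the study does not report them.

```python
import math

def chi2_sf(x, df):
    """Chi-square survival function via the regularized upper incomplete
    gamma Q(df/2, x/2), using Q(a+1, t) = Q(a, t) + t**a * exp(-t) / gamma(a+1)."""
    a, t = df / 2.0, x / 2.0
    if df % 2 == 0:
        q, b = math.exp(-t), 1.0             # Q(1, t) = exp(-t)
    else:
        q, b = math.erfc(math.sqrt(t)), 0.5  # Q(1/2, t) = erfc(sqrt(t))
    while b < a - 1e-12:
        q += t**b * math.exp(-t) / math.gamma(b + 1)
        b += 1.0
    return q

def likelihood_ratio_test(loglik_reduced, loglik_full, df):
    """Test statistic and p-value for comparing nested models."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2_sf(stat, df)

# Hypothetical log-likelihoods for Cox models without / with the three
# non-traditional behaviors (df = 3 added parameters).
stat, p = likelihood_ratio_test(-1520.4, -1511.9, df=3)
```

A small p-value, as in the study's P < 0.001, indicates the extra parameters improve fit beyond what chance alone would give, which is the sense in which the non-traditional behaviors "improved the model fit".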
Screening for red flags in individuals with low back pain (LBP) has been a historical hallmark of musculoskeletal management. Red flag screening is endorsed by most LBP clinical practice guidelines, despite a lack of support for its diagnostic capacity. We share four major reasons why red flag screening is not consistent with best practice in LBP management: (1) clinicians do not actually screen for red flags, they manage the findings; (2) red flag symptomology negates the utility of clinical findings; (3) the tests lack the negative likelihood ratio needed to serve as a screen; and (4) clinical practice guidelines do not include specific processes that aid decision-making. Based on these findings, we propose that clinicians consider: (1) the importance of watchful waiting; (2) that value-based care does not support a clinical examination driven by red flag symptoms; and (3) that red flag symptoms may have a stronger relationship with prognosis than with diagnosis.
Populations continually incur new mutations with fitness effects ranging from lethal to adaptive. While the distribution of fitness effects (DFE) of new mutations is not directly observable, many mutations likely have either no effect on organismal fitness or are deleterious. Historically, it has been hypothesized that a population may carry many mildly deleterious variants as segregating variation, which reduces the mean absolute fitness of the population. Recent advances in sequencing technology and sequence conservation-based metrics for inferring the functional effect of a variant permit examination of the persistence of deleterious variants in populations. The issue of segregating deleterious variation is particularly important for crop improvement, because the demographic history of domestication and breeding allows deleterious variants to persist and reach moderate frequency, potentially reducing crop productivity. In this study, we use exome resequencing of fifteen barley accessions and genome resequencing of eight soybean accessions to investigate the prevalence of deleterious SNPs in the protein-coding regions of the genomes of two crops. We conclude that individual cultivars carry hundreds of deleterious SNPs on average, and that nonsense variants make up a minority of deleterious SNPs. Our approach annotates known phenotype-altering variants as deleterious more frequently than the genome-wide average, suggesting that putatively deleterious variants are likely to affect phenotypic variation. We also report the implementation of a SNP annotation tool (BAD_Mutations) that makes use of a likelihood ratio test based on alignment of all currently publicly available Angiosperm genomes.
BACKGROUND: Diagnosing serious infections in children is challenging, because of the low incidence of such infections and their non-specific presentation early in the course of illness. Prediction rules are promoted as a means to improve recognition of serious infections. A recent systematic review identified seven clinical prediction rules, of which only one had been prospectively validated, calling into question their appropriateness for clinical practice. We aimed to examine the diagnostic accuracy of these rules in multiple ambulatory care populations in Europe. METHODS: Four clinical prediction rules and two national guidelines, based on signs and symptoms, were validated retrospectively in seven individual patient datasets from primary care and emergency departments, comprising 11,023 children from the UK, the Netherlands, and Belgium. The accuracy of each rule was tested, with pre-test and post-test probabilities displayed using dumbbell plots, and serious infection settings stratified as low prevalence (LP; <5%), intermediate prevalence (IP; 5 to 20%), and high prevalence (HP; >20%). In LP and IP settings, sensitivity should be >90% to effectively rule out infection. RESULTS: In LP settings, a five-stage decision tree and a pneumonia rule had sensitivities of >90% (with a negative likelihood ratio (NLR) of <0.2) for ruling out serious infections, whereas the sensitivities of a meningitis rule and the Yale Observation Scale (YOS) varied widely, between 33 and 100%. In IP settings, the five-stage decision tree, the pneumonia rule, and YOS had sensitivities between 22 and 88%, with NLR ranging from 0.3 to 0.8. In an HP setting, the five-stage decision tree provided a sensitivity of 23%. In LP or IP settings, the sensitivities of the National Institute for Clinical Excellence guideline for feverish illness and the Dutch College of General Practitioners alarm symptoms ranged from 81 to 100%.
CONCLUSIONS: None of the clinical prediction rules examined in this study provided perfect diagnostic accuracy. In LP or IP settings, prediction rules and evidence-based guidelines had high sensitivity, providing promising rule-out value for serious infections in these datasets, although all had a percentage of residual uncertainty. Additional clinical assessment or testing such as point-of-care laboratory tests may be needed to increase clinical certainty. None of the prediction rules identified seemed to be valuable for HP settings such as emergency departments.
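The >90% sensitivity and NLR <0.2 rule-out thresholds described above can be made concrete by converting prevalence into the post-test probability that remains after a negative rule result. The prevalence values in this sketch mirror the strata defined in the study; the computation itself is standard Bayes-in-odds-form.

```python
# What an NLR of 0.2 buys at the prevalence strata used in the study:
# the residual probability of serious infection after a negative result.

def prob_after_negative(prevalence, nlr):
    """Pre-test odds times NLR, converted back to a probability."""
    post_odds = prevalence / (1 - prevalence) * nlr
    return post_odds / (1 + post_odds)

low = prob_after_negative(0.05, 0.2)   # LP upper bound: about 1% residual risk
mid = prob_after_negative(0.20, 0.2)   # IP upper bound: about 5% residual risk
```

Even a rule meeting the threshold leaves roughly 1 in 20 children with a serious infection undetected at the top of the IP range, which is the residual uncertainty the authors highlight when recommending additional clinical assessment or point-of-care testing.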