Concept: Type I and type II errors
The timely detection of viremia in HIV-infected patients receiving antiviral treatment is key to ensuring effective therapy and preventing the emergence of drug resistance. In high HIV-burden settings, the cost and complexity of diagnostics limit their availability. We have developed a novel complementary metal-oxide semiconductor (CMOS) chip-based, pH-mediated, point-of-care HIV-1 viral load monitoring assay that simultaneously amplifies and detects HIV-1 RNA. A novel low-buffer HIV-1 pH-LAMP (loop-mediated isothermal amplification) assay was optimised and incorporated into a pH-sensitive CMOS chip. Screening of 991 clinical samples (164 on the chip) yielded a sensitivity of 95% (in vitro) and 88.8% (on-chip) at >1000 RNA copies/reaction across a broad spectrum of HIV-1 viral clades. Median time to detection was 20.8 minutes in samples with >1000 copies of RNA. The sensitivity, specificity and reproducibility are close to those required for a point-of-care device, which would be of benefit in resource-poor regions and could be run on a USB stick or similar low-power device.
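The sensitivity and specificity reported above map directly onto the Type I and Type II error rates named in the concept heading: a false positive is a Type I error (rate = 1 − specificity) and a missed detection is a Type II error (rate = 1 − sensitivity). A minimal sketch with illustrative counts, not data from the study:

```python
def sensitivity(tp, fn):
    """True positive rate; equals 1 - beta, the Type II error rate."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate; equals 1 - alpha, the Type I error rate."""
    return tn / (tn + fp)

# Hypothetical screen: 95 of 100 true positives detected,
# 90 of 100 true negatives correctly ruled out.
print(sensitivity(95, 5))   # -> 0.95
print(specificity(90, 10))  # -> 0.9
```

The same two ratios underlie most of the diagnostic accuracy figures quoted throughout this collection.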
The technology for evaluating patient-provider interactions in psychotherapy, observational coding, has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods, with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets were used to train the speech processing tasks, including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally derived empathy ratings was evaluated against human ratings for each provider. Computationally derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and an F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracy, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.
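The reported 0.65 agreement between automatic and human empathy ratings is a Pearson correlation coefficient. A minimal self-contained sketch is below; the two rating lists are invented for illustration and are unrelated to the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical human vs. automatic empathy ratings for five providers
human = [3.0, 4.5, 2.0, 5.0, 3.5]
auto  = [2.8, 4.0, 2.5, 4.8, 3.0]
print(round(pearson_r(human, auto), 3))
```

In the study this correlation is computed per provider against observer codes; here the toy lists merely show the mechanics.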
Background. Current malaria diagnostics, including microscopy and antigen-detecting rapid tests, cannot reliably detect low-density infections. Molecular methods such as PCR are highly sensitive, but remain too complex for field deployment. A new commercial molecular assay based on loop-mediated isothermal amplification (LAMP) was assessed for field use. Methods. Malaria LAMP (Eiken Chemical Co., Ltd., Japan) was evaluated in 272 outpatients at a rural Ugandan clinic, and compared with expert microscopy, nested PCR (nPCR) and quantitative PCR (qPCR). Two technicians performed the assay after three days of training, using two alternative blood sample preparation methods and visual interpretation of results by fluorescence. Results. Compared with three-well nPCR, the sensitivity of both LAMP and single-well nPCR was 90%; microscopy sensitivity was 51%. For samples with P. falciparum qPCR titer ≥2 parasites/µL, LAMP sensitivity was 97.8% (95% CI 93.7%-99.5%). Most false-negative LAMP results occurred in samples with parasitemia detectable by three-well nPCR but very low or undetectable by qPCR. Conclusions. Malaria LAMP in a remote Ugandan clinic achieved sensitivity similar to single-well nPCR in a UK reference laboratory. LAMP dramatically lowers the detection threshold achievable in endemic settings, providing a new tool for diagnosis, surveillance, and screening in elimination strategies.
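The abstract reports sensitivity with a 95% confidence interval but does not name the interval method. One common choice for binomial proportions is the Wilson score interval, sketched here with hypothetical counts (178 of 182 positives detected, ≈97.8%); the study's actual counts and interval method may differ:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> ~95%)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical: 178/182 low-density samples detected by LAMP
lo, hi = wilson_ci(178, 182)
print(round(lo, 3), round(hi, 3))
```

Unlike the naive Wald interval, the Wilson interval stays inside [0, 1] and behaves sensibly when the proportion is near 1, as it is for a highly sensitive assay.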
There is controversy on the proposed benefits of publishing mortality rates for individual surgeons. In some procedures, analysis at the level of an individual surgeon may lack statistical power. The aim was to determine the likelihood that variation in surgeon performance will be detected using published outcome data.
BACKGROUND: Laboratory tests to assess novel oral anticoagulants (NOACs) are under evaluation. Routine monitoring is unnecessary, but under special circumstances bioactivity assessment becomes crucial. We analyzed the effects of NOACs on coagulation tests and the availability of specific assays at different laboratories. METHODS: Plasma samples spiked with dabigatran (Dabi; 120 and 300 μg/L) or rivaroxaban (Riva; 60, 146, and 305 μg/L) were sent to 115 and 38 European laboratories, respectively. International normalized ratio (INR) and activated partial thromboplastin time (APTT) were analyzed for all samples; thrombin time (TT) was analyzed specifically for Dabi and calibrated anti-activated factor X (anti-Xa) activity for Riva. We compared the results with patient samples. RESULTS: Results of Dabi samples were reported by 73 laboratories (13 INR and 9 APTT reagents) and Riva samples by 22 laboratories (5 INR and 4 APTT reagents). Both NOACs increased INR values; the increase was modest, albeit larger for Dabi, with higher CV, especially with Quick (vs Owren) methods. Both NOACs dose-dependently prolonged the APTT. Again, the prolongation and CVs were larger for Dabi. The INR and APTT results varied reagent-dependently (P < 0.005), with less prolongation in patient samples. TT results (Dabi) and calibrated anti-Xa results (Riva) were reported by only 11 and 8 laboratories, respectively. CONCLUSIONS: The screening tests INR and APTT are suboptimal for assessing NOACs, having high reagent dependence and low sensitivity and specificity. They may provide information if laboratories recognize their limitations. The variation will likely increase, and the sensitivity will differ, in clinical samples. Specific assays measure NOACs accurately; however, few laboratories applied them.
BACKGROUND: The Hamilton Depression Rating Scale (HAM-D) is commonly used as a screening instrument, as a continuous measure of change in depressive symptoms over time, and as a means to compare the relative efficacy of treatments. Among several abridged versions, the 6-item HAM-D6 is used most widely, in large part because of its good psychometric properties. The current study compares the self-report and clinician-rated versions of the Hebrew version of this scale. METHODS: A total of 153 Israelis, 75 years of age on average, participated in this study. The HAM-D6 was examined using confirmatory factor analytic (CFA) models separately for patient and clinician responses. RESULTS: Responses to the HAM-D6 suggest that this instrument measures a unidimensional construct, with each of the scale's six items contributing significantly to the measurement. Comparisons between the self-report and clinician versions indicate that responses do not significantly differ for 4 of the 6 items. Moreover, 100% sensitivity (and 91% specificity) was found between patient HAM-D6 responses and clinician diagnoses of depression. CONCLUSION: These results indicate that the Hebrew HAM-D6 can be used to measure and screen for depressive symptoms among elderly patients.
BACKGROUND: Better knowledge of the suprascapular notch anatomy may help to prevent, and to assess more accurately, suprascapular nerve entrapment syndrome. Our purposes were to verify the reliability of the existing data, to assess differences between the two genders, to verify the correlation between the dimensions of the scapula and the suprascapular notch, and to investigate the relationship between the suprascapular notch and the postero-superior limit of the safe zone for the suprascapular nerve. METHODS: We examined 500 dried scapulae, measuring seven distances related to the scapular body and suprascapular notch; they were also catalogued according to gender, age and side. The suprascapular notch was classified in accordance with Rengachary’s method. For each class, we also took into consideration the width/depth ratio. Furthermore, Pearson’s correlation was calculated. RESULTS: The frequencies were: Type I 12.4%, Type II 19.8%, Type III 22.8%, Type IV 31.1%, Type V 10.2%, Type VI 3.6%. Width and depth did not demonstrate a statistically significant difference when analyzed according to gender and side; however, a significant difference was found between the depth means elaborated according to median age (73 y.o.). Correlation indexes were weak or not statistically significant. The differences among the postero-superior limits of the safe zone in the six types of notches were not statistically significant. CONCLUSIONS: Patients’ characteristics (gender, age and scapular dimensions) are not related to the characteristics of the suprascapular notch (dimensions and Type); our data suggest that the entrapment syndrome is more likely to be associated with a Type III notch because of its specific features.
Background. Copy number variation (CNV) is an important structural variation (SV) in the human genome. Various studies have shown that CNVs are associated with complex diseases. Traditional CNV detection methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution. The next-generation sequencing (NGS) technique promises higher-resolution detection of CNVs, and several methods were recently proposed for realizing such a promise. However, the performance of these methods is not robust under some conditions; e.g., some of them may fail to detect CNVs of short sizes. There has been a strong demand for reliable detection of CNVs from high-resolution NGS data. Results. A novel and robust method to detect CNVs from short sequencing reads is proposed in this study. The detection of CNVs is modeled as change-point detection in the read depth (RD) signal derived from the NGS data, which is fitted with a total variation (TV) penalized least squares model. The performance (e.g., sensitivity and specificity) of the proposed approach is evaluated by comparison with several recently published methods on both simulated and real data from the 1000 Genomes Project. Conclusions. The experimental results showed that both the true positive rate and the false positive rate of the proposed detection method do not change significantly for CNVs with different copy numbers and lengths, when compared with several existing methods. Therefore, our proposed approach results in more reliable detection of CNVs than the existing methods.
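The read-depth change-point idea can be sketched with a toy segmentation. Note this sketch uses an ℓ0-penalized (Potts-style) piecewise-constant fit solved by dynamic programming, a simpler relative of the paper's TV-penalized least squares model; the function name, penalty value, and simulated signal are all illustrative assumptions:

```python
def segment_read_depth(rd, penalty):
    """Fit a piecewise-constant signal to read depths rd, minimizing
    (sum of squared errors) + penalty * (number of segments).
    Returns the sorted list of change-point positions."""
    n = len(rd)
    # Prefix sums give O(1) squared-error cost for any segment.
    s, s2 = [0.0], [0.0]
    for x in rd:
        s.append(s[-1] + x)
        s2.append(s2[-1] + x * x)

    def cost(i, j):  # squared error of rd[i:j] around its own mean
        m = j - i
        mean = (s[j] - s[i]) / m
        return (s2[j] - s2[i]) - m * mean * mean

    best = [0.0] * (n + 1)   # best[j]: optimal objective for rd[:j]
    prev = [0] * (n + 1)     # prev[j]: start of the last segment
    for j in range(1, n + 1):
        best[j], prev[j] = min(
            (best[i] + cost(i, j) + penalty, i) for i in range(j))

    cps, j = [], n           # backtrack the change-points
    while j > 0:
        i = prev[j]
        if i > 0:
            cps.append(i)
        j = i
    return sorted(cps)

# Simulated read depth: a duplication doubles-plus coverage in the middle.
rd = [2] * 20 + [6] * 10 + [2] * 20
print(segment_read_depth(rd, penalty=5.0))  # -> [20, 30]
```

Raising the penalty trades sensitivity to short CNVs for fewer false change-points, which mirrors the sensitivity/specificity trade-off the paper evaluates.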
INTRODUCTION: We previously derived and validated the AIMS65 score, a mortality prognostic scale for upper GI bleeding (UGIB). OBJECTIVE: To validate the AIMS65 score in a different patient population and compare it with the Glasgow-Blatchford risk score (GBRS). DESIGN: Retrospective cohort study. PATIENTS: Adults with a primary diagnosis of UGIB. MAIN OUTCOME MEASUREMENTS: Primary outcome: inpatient mortality. Secondary outcomes: a composite clinical endpoint of inpatient mortality, rebleeding, and endoscopic, radiologic or surgical intervention; blood transfusion; intensive care unit admission; rebleeding; length of stay; timing of endoscopy. The area under the receiver-operating characteristic curve (AUROC) was calculated for each score. RESULTS: Of the 278 study patients, 6.5% died and 35% experienced the composite clinical endpoint. The AIMS65 score was superior in predicting inpatient mortality (AUROC, 0.93 vs 0.68; P < .001), whereas the GBRS was superior in predicting blood transfusions (AUROC, 0.85 vs 0.65; P < .01). The 2 scores were similar in predicting the composite clinical endpoint (AUROC, 0.62 vs 0.68; P = .13) as well as the secondary outcomes. GBRS cutoffs of 10 or more and 12 or more maximized the sum of the sensitivity and specificity for inpatient mortality and rebleeding, respectively; for the AIMS65 score, the cutoff was 2 or more for both outcomes. LIMITATIONS: Retrospective, single-center study. CONCLUSION: The AIMS65 score is superior to the GBRS in predicting inpatient mortality from UGIB, whereas the GBRS is superior for predicting blood transfusion. Both scores are similar in predicting the composite clinical endpoint and other outcomes in clinical care and resource use.
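Choosing the cutoff that maximizes the sum of sensitivity and specificity, as done for the GBRS and AIMS65 thresholds above, is equivalent to maximizing Youden's J statistic (J = sensitivity + specificity − 1). A minimal sketch with invented risk scores and outcomes, not the study's data:

```python
def youden_cutoff(scores, labels):
    """Return the threshold t (predict positive when score >= t)
    that maximizes Youden's J = sensitivity + specificity - 1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)
        spec = sum(s < t for s in neg) / len(neg)
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t

# Hypothetical risk scores; deaths (label 1) tend to score higher.
scores = [0, 1, 1, 2, 2, 3, 4, 5]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
print(youden_cutoff(scores, labels))  # -> 2
```

Geometrically, the Youden-optimal cutoff is the ROC point farthest above the chance diagonal, which is why it is a natural companion to the AUROC comparisons in the abstract.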
Auditory neurons that exhibit stimulus-specific adaptation (SSA) decrease their response to common tones while retaining responsiveness to rare ones. We recorded single-unit responses from the inferior colliculus (IC), where SSA is known to occur, and explored SSA for the first time in the cochlear nucleus (CN) of rats. We assessed an important functional outcome of SSA: the extent to which frequency discriminability depends on sensory context. For this purpose, pure tones were presented in an oddball sequence as standard (high probability of occurrence) or deviant (low probability of occurrence) stimuli. To study frequency discriminability under different probability contexts, we varied the probability of occurrence and the frequency separation between tones. Neuronal sensitivity was estimated in terms of spike-count probability using signal detection theory. We reproduced the finding that many neurons in the IC exhibit SSA, but we did not observe significant SSA in our CN sample. We concluded that strong SSA is not a ubiquitous phenomenon in the CN. As predicted, frequency discriminability was enhanced in the IC when stimuli were presented in an oddball context, and this enhancement was correlated with the degree of SSA shown by the neurons. In contrast, frequency discrimination by CN neurons was independent of stimulus context. Our results demonstrate that SSA is not widespread along the entire auditory pathway, and they suggest that SSA increases the frequency discriminability of single neurons beyond that expected from their tuning curves.
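Signal-detection estimates of discriminability from spike-count distributions can be sketched as the area under the neurometric ROC curve: the probability that a randomly chosen deviant trial yields a higher spike count than a randomly chosen standard trial (ties counting one half). The spike counts below are invented, and this is one common nonparametric estimator rather than necessarily the authors' exact analysis:

```python
def spike_count_auc(standard_counts, deviant_counts):
    """ROC area between two spike-count distributions: P(deviant count >
    standard count), with ties scored 0.5. 0.5 = chance, 1.0 = perfect."""
    wins = 0.0
    for d in deviant_counts:
        for s in standard_counts:
            wins += 1.0 if d > s else 0.5 if d == s else 0.0
    return wins / (len(deviant_counts) * len(standard_counts))

# Hypothetical per-trial spike counts for one neuron in an oddball block
standard = [2, 3, 3, 4]
deviant = [4, 5, 6, 6]
print(spike_count_auc(standard, deviant))  # -> 0.96875
```

An SSA neuron that suppresses its standard responses pushes the two count distributions apart, raising this AUC; that is the mechanism by which SSA could enhance context-dependent frequency discriminability.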