BACKGROUND: The clinical course of cystic fibrosis (CF) is usually measured using percent predicted FEV1 and BMI Z-score referenced against a healthy population, since achieving normality is the ultimate goal of CF care. Referencing against age- and sex-matched CF peers may provide valuable information for patients and for comparisons between CF centers or populations. Here, we used a large database of European CF patients to compute CF-specific reference equations for FEV1 and BMI, derive CF-specific percentile charts, and compare these European data with their nearest international equivalents. METHODS: 34859 FEV1 and 40947 BMI observations were used to compute European CF-specific percentiles. Quantile regression was applied to raw measurements as a function of sex, age, and height. Results were compared with the North American equivalent for FEV1 and with the WHO 2007 normative values for BMI. RESULTS: The FEV1 and BMI percentiles illustrated the large variability among CF patients receiving the best current care. The European CF-specific percentiles for FEV1 differed significantly from those in the USA from an earlier era, with higher lung function in Europe. The CF-specific percentiles for BMI declined relative to the WHO standard in older children. Lung function and BMI were similar in the two largest contributing European countries (France and Germany). CONCLUSION: The CF-specific percentile approach applied to FEV1 and BMI allows referencing patients with respect to their peers. These data allow peer-to-peer and population comparisons in CF patients.
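The percentile-chart approach described above can be sketched with quantile regression on synthetic data. This is a minimal illustration, not the study's actual model: the linear formula, the simulated registry, and all variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Synthetic stand-in for registry data: age (years), sex, height (cm), FEV1 (L).
n = 2000
df = pd.DataFrame({
    "age": rng.uniform(6, 40, n),
    "sex": rng.integers(0, 2, n),
})
df["height"] = 110 + 2.2 * df["age"] + 5 * df["sex"] + rng.normal(0, 6, n)
df["fev1"] = 0.03 * df["height"] - 0.02 * df["age"] + rng.normal(0, 0.4, n)

# Fit one quantile-regression model per percentile of interest,
# with FEV1 modeled as a function of sex, age, and height.
percentiles = [0.10, 0.25, 0.50, 0.75, 0.90]
fits = {q: smf.quantreg("fev1 ~ age + height + C(sex)", df).fit(q=q)
        for q in percentiles}

# Predicted FEV1 percentile values for one covariate profile.
profile = pd.DataFrame({"age": [15.0], "height": [160.0], "sex": [1]})
curve = {q: float(fits[q].predict(profile).iloc[0]) for q in percentiles}
```

Fitting one model per target quantile and predicting at a fixed covariate profile yields the percentile values for that profile; repeating over a grid of ages would trace out a chart.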
Impact of Adjusting for the Reciprocal Relationship between Maternal Weight and Free Thyroxine during Early Pregnancy.
- Thyroid : official journal of the American Thyroid Association
- Published over 5 years ago
Background: Among euthyroid pregnant women in a large clinical trial, free thyroxine (FT4) measurements below the 2.5th centile were associated with a 17 lb higher weight (2.9 kg/m2) than in the overall study population. We explore this relationship further. Methods: Among 9351 women with second trimester thyrotropin (TSH) measurements between the 1st and 98th centiles, we examine: 1) the weight/FT4 relationship; 2) the percentages of women in three weight categories at each FT4 decile; 3) FT4 concentrations in three weight categories at each TSH decile; and 4) the impact of adjusting FT4 for weight, both in the reference group and in 190 subjects with elevated TSH measurements. Results: FT4 values decrease steadily as weight increases (p < 0.0001 by ANOVA) among women in the reference group (TSH 0.05 to 3.8 IU/L). TSH follows no consistent pattern with weight. When stratified into weight tertiles, 48% of women at the lowest FT4 decile are heavy; the percentage decreases steadily to 22% at the highest FT4 decile. Median FT4 is lowest in the heaviest women regardless of TSH level. In the reference group, weight adjustment reduces overall variance by 2.96%. Fewer FT4 measurements are at either extreme (below the 5th FT4 centile: 4.8% before adjustment vs 4.7% after; above the 95th FT4 centile: 5.0% vs 4.7%). Adjustment places more light-weight women and fewer heavy women below the 5th FT4 centile, and the converse above the 95th centile. Between TSH 3.8 and 5 IU/L, the percentage of FT4 measurements below the 5th FT4 centile is not elevated (3.8% before adjustment, 3.1% after). The percentage of FT4 values above the 95th centile, however, is lower (1.5% before adjustment, 0.8% after). Above TSH 5 IU/L, 25% of women have FT4 values below the 5th FT4 centile; weight adjustment raises this to 30%, and no FT4 values remain above the 95th FT4 centile. Conclusions: During early pregnancy, and unlike in non-pregnant adults, TSH values are not associated with weight.
Lower average FT4 values among heavy women at all TSH deciles partially explain inter-individual differences in FT4 reference ranges. This continuous reciprocal relationship between weight and FT4 accounts for the lower FT4 observed at higher weights. Weight adjustment refines FT4 interpretation.
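The weight adjustment of FT4 can be sketched as removing a fitted linear weight effect and comparing variances before and after. The synthetic data and the linear form are assumptions; the study's actual adjustment procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic FT4 (arbitrary units) with a mild negative dependence on
# weight (kg), mimicking the reported reciprocal relationship.
n = 5000
weight = rng.normal(70, 12, n)
ft4 = 14.0 - 0.03 * (weight - 70) + rng.normal(0, 1.5, n)

# Adjust FT4 for weight: subtract the fitted linear weight effect,
# re-centered at the cohort mean weight so the overall mean is unchanged.
slope, intercept = np.polyfit(weight, ft4, 1)
ft4_adj = ft4 - slope * (weight - weight.mean())

# Fraction of overall variance removed by the adjustment
# (the abstract reports about 3% in the reference group).
reduction = 1 - ft4_adj.var() / ft4.var()
```

Because the adjustment only removes the weight-predicted component, the cohort mean FT4 is preserved while the spread attributable to weight shrinks, which is why fewer adjusted values land beyond the extreme centiles.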
Previous studies suggested that lower vitamin D might be a risk factor for autism spectrum disorders (ASDs). The aim of this study was to estimate the prevalence of ASDs in 3-year-old Chinese children and to examine the association between neonatal vitamin D status and the risk of ASDs. We conducted a study of live births who had taken part in expanded newborn screening (NBS), with outpatient follow-up when the children were 3 years old. ASD diagnoses were confirmed at outpatient follow-up using the Autism Diagnostic Interview-Revised and Diagnostic and Statistical Manual of Mental Disorders (DSM)-5 criteria. Intellectual disability (ID) was defined by intelligence quotient (IQ < 80) for all participants. A 1:4 case-control design was used. Concentrations of 25-hydroxyvitamin D3 [25(OH)D3] were assessed from neonatal dried blood samples. A total of 310 children were diagnosed with ASDs, giving a prevalence of 1.11% (95% CI, 0.99% to 1.23%). 25(OH)D3 concentrations were assessed in the 310 children with ASD and 1240 controls. The median 25(OH)D3 level was significantly lower in children with ASD than in controls (p < 0.0001). Compared with the fourth (highest) quartile, the relative risk (RR) of ASDs was increased for neonates in each of the three lower quartiles of the 25(OH)D3 distribution: by 260% in the lowest quartile (RR 3.6; 95% CI, 1.8 to 7.2; p < 0.001), 150% in the second quartile (RR 2.5; 95% CI, 1.4 to 3.5; p = 0.024), and 90% in the third quartile (RR 1.9; 95% CI, 1.1 to 3.3; p = 0.08). Furthermore, the nonlinear nature of the ID-risk relationship was more prominent when the data were assessed in deciles. This model predicted the lowest relative risk of ID at the 72nd percentile (corresponding to 48.1 nmol/L of 25(OH)D3). Neonatal vitamin D status was significantly associated with the risk of ASDs and intellectual disability.
The nature of those relationships was nonlinear. © 2017 American Society for Bone and Mineral Research.
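The quartile-based risk comparison can be sketched on synthetic case-control data. The distributions and effect sizes below are illustrative assumptions; note also that in a 1:4 case-control sample these are risk ratios within the sampled data, not population relative risks.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Synthetic neonatal 25(OH)D3 levels (nmol/L); cases drawn from a
# lower distribution than controls, loosely mimicking the study.
cases = pd.DataFrame({"vitd": rng.normal(40, 12, 310), "asd": 1})
controls = pd.DataFrame({"vitd": rng.normal(48, 12, 1240), "asd": 0})
df = pd.concat([cases, controls], ignore_index=True)

# Quartiles defined on the pooled distribution; the fourth
# (highest) quartile serves as the reference group.
df["quartile"] = pd.qcut(df["vitd"], 4, labels=[1, 2, 3, 4])

risk = df.groupby("quartile", observed=True)["asd"].mean()
rr = risk / risk.loc[4]   # risk ratio vs the highest quartile
```

With a lower vitamin D distribution among cases, the lowest quartile is enriched for cases and its risk ratio exceeds 1, reproducing the qualitative pattern reported in the abstract.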
Chronic kidney disease (CKD) is a worldwide growing epidemic associated with an increased risk of cardiovascular morbidity and mortality. Left ventricular (LV) global longitudinal strain (GLS) is a measure of LV systolic function associated with prognosis in the general population. However, little is known about the association between LV GLS and survival in patients with CKD. The aim of the present study was to investigate the prognostic implications of LV GLS in predialysis and dialysis patients specifically. LV GLS was measured in a retrospective cohort of predialysis and dialysis patients (CKD stage 3b to 5) who underwent clinically indicated echocardiography between 2004 and 2015. Patients were divided into 4 groups according to quartiles of LV GLS: first quartile (LV GLS ≤10.6%, worst function), second quartile (LV GLS 10.7% to 15.1%), third quartile (LV GLS 15.2% to 17.8%), and fourth quartile (LV GLS ≥17.9%, best function). The primary end point was all-cause mortality. Of 304 patients (62 ± 14 years, 66% male), 65% were in predialysis and 35% in dialysis. During a median follow-up of 29 months (interquartile range 16 to 58 months), 34% of patients underwent renal transplantation and 36% died. Patients with LV GLS ≤10.6% showed significantly worse prognosis compared with the other groups (log-rank test, p <0.001). LV GLS ≤10.6% was significantly associated with increased risk of all-cause mortality (hazard ratio 2.18, 95% CI 1.17 to 4.06, p = 0.014) after correcting for age, gender, albumin levels, atrial fibrillation, and renal transplantation. In conclusion, in predialysis and dialysis patients, severely impaired LV GLS is independently associated with an increased risk of mortality.
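The quartile stratification of LV GLS can be sketched as follows. This simplified example compares crude mortality proportions across quartiles rather than fitting the study's Cox model, and the synthetic strain values and risk function are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Synthetic LV GLS (%, absolute value; higher = better function) and
# a death indicator with higher mortality at worse (lower) GLS.
n = 304
gls = rng.normal(14, 4, n).clip(3, 25)
died = rng.random(n) < (0.65 - 0.025 * gls)   # crude monotone risk model

df = pd.DataFrame({"gls": gls, "died": died})
# Quartile grouping analogous to the paper's 10.6 / 15.1 / 17.8 cut points.
df["quartile"] = pd.qcut(df["gls"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
mortality = df.groupby("quartile", observed=True)["died"].mean()
```

In the study itself the comparison used Kaplan-Meier curves with a log-rank test and multivariable Cox regression; the crude proportions above only illustrate the quartile construction and the direction of the association.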
- JAMA : the journal of the American Medical Association
- Published over 4 years ago
IMPORTANCE Postoperative venous thromboembolism (VTE) rates are widely reported quality metrics soon to be used in pay-for-performance programs. Surveillance bias occurs when some clinicians use imaging studies to detect VTE more frequently than other clinicians. Because they look more, they find more VTE events, paradoxically worsening their hospital's VTE quality measure performance. A surveillance bias may influence VTE measurement if (1) greater hospital VTE prophylaxis adherence fails to result in lower measured VTE rates, (2) hospitals with characteristics suggestive of higher quality (eg, more accreditations) have greater VTE prophylaxis adherence rates but worse VTE event rates, and (3) higher hospital VTE imaging use rates are associated with higher measured VTE event rates. OBJECTIVE To examine whether a surveillance bias influences the validity of reported VTE rates. DESIGN, SETTING, AND PARTICIPANTS 2010 Hospital Compare and American Hospital Association data from 2838 hospitals were merged. Next, 2009-2010 Medicare claims data for 954,926 surgical patient discharges from 2786 hospitals, for patients undergoing 1 of 11 major operations, were used to calculate VTE imaging (duplex ultrasonography, chest computed tomography/magnetic resonance imaging, and ventilation-perfusion scans) and VTE event rates. MAIN OUTCOMES AND MEASURES The association between hospital VTE prophylaxis adherence and risk-adjusted VTE event rates was examined. The relationship between a summary score of hospital structural characteristics reflecting quality (hospital size, numbers of accreditations/quality initiatives) and performance on VTE prophylaxis and risk-adjusted VTE measures was examined. Hospital-level VTE event rates were compared across VTE diagnostic imaging rate quartiles and with quantile regression. RESULTS Greater hospital VTE prophylaxis adherence rates were weakly associated with worse risk-adjusted VTE event rates (r2 = 4.2%; P = .03).
Hospitals with increasing structural quality scores had higher VTE prophylaxis adherence rates (93.3% vs 95.5%, lowest vs highest quality quartile; P < .001) but worse risk-adjusted VTE rates (4.8 vs 6.4 per 1000, lowest vs highest quality quartile; P < .001). Mean VTE diagnostic imaging rates ranged from 32 studies per 1000 in the lowest imaging use quartile to 167 per 1000 in the highest quartile (P < .001). Risk-adjusted VTE rates increased significantly with VTE imaging use rates in a stepwise fashion, from 5.0 per 1000 in the lowest quartile to 13.5 per 1000 in the highest quartile (P < .001). CONCLUSIONS AND RELEVANCE Hospitals with higher quality scores had higher VTE prophylaxis rates but worse risk-adjusted VTE rates. Increased hospital VTE event rates were associated with increasing hospital VTE imaging use rates. Surveillance bias limits the usefulness of the VTE quality measure for hospitals working to improve quality and patients seeking to identify a high-quality hospital.
- Journal of clinical oncology : official journal of the American Society of Clinical Oncology
- Published about 1 year ago
Purpose To determine the association between the number of patients with multiple myeloma (MM) treated annually at a treatment facility (volume) and all-cause mortality (outcome). Methods Using the National Cancer Database, we identified patients diagnosed with MM between 2003 and 2011. We classified the facilities into quartiles (Q) of mean patients with MM treated per year: Q1, < 3.6; Q2, 3.6 to 6.1; Q3, 6.1 to 10.3; and Q4, > 10.3. We used random intercepts to account for clustering of patients within facilities and Cox regression to determine the volume-outcome relationship, adjusting for demographic (sex, age, race, ethnicity), socioeconomic (income, education, insurance type), geographic (area of residence, treatment facility location, travel distance), and comorbid (Charlson-Deyo score) factors and year of diagnosis. Results There were 94,722 patients with MM treated at 1,333 facilities. The median age at diagnosis was 67 years, and 54.7% were men. The median annual facility volume was 6.1 patients per year (range, 0.2 to 109.9). The distribution of patients according to facility volume was: Q1, 5.2%; Q2, 12.6%; Q3, 21.9%; and Q4, 60.3%. The unadjusted median overall survival by facility volume was: Q1, 26.9 months; Q2, 29.1 months; Q3, 31.9 months; and Q4, 49.1 months (P < .001). Multivariable analysis showed that facility volume was independently associated with all-cause mortality. Compared with patients treated at Q4 facilities, patients treated at lower-quartile facilities had a higher risk of death (Q3 hazard ratio [HR], 1.12 [95% CI, 1.08 to 1.16]; Q2 HR, 1.17 [95% CI, 1.12 to 1.21]; Q1 HR, 1.22 [95% CI, 1.17 to 1.28]). Conclusion Patients who were treated for MM at higher-volume facilities had a lower risk of mortality compared with those who were treated at lower-volume facilities.
The Effectiveness of Drinking and Driving Policies for Different Alcohol-Related Fatalities: A Quantile Regression Analysis
- International journal of environmental research and public health
- Published over 4 years ago
To understand the impact of drinking and driving laws on drinking and driving fatality rates, this study explored the different effects these laws have in areas with varying severity of drinking and driving fatalities. Unlike previous studies, this study employed quantile regression analysis. Empirical results showed that policies based on local conditions must be used to effectively reduce drinking and driving fatality rates; that is, different measures should be adopted to target the specific conditions in various regions. For areas with low fatality rates (low quantiles), people's habits and attitudes toward alcohol should be emphasized instead of transportation safety laws, because "preemptive regulations" are more effective there. For areas with high fatality rates (high quantiles), "ex-post regulations" are more effective, with effects approximately 0.01% to 0.05% larger than in areas with low fatality rates.
- International journal of environmental research and public health
- Published 22 days ago
Background: Previous studies have demonstrated that high levels of physician empathy may be correlated with improved patient health outcomes and high physician job satisfaction. Knowledge about variation in empathy and related general practitioner (GP) characteristics may allow for a more informed approach to improving empathy among GPs. Objective: Our objective is to measure and analyze variation in physician empathy and its association with GP demographic, professional, and job satisfaction characteristics. Methods: 464 Danish GPs responded to a survey containing the Danish version of the Jefferson Scale of Empathy for Health Professionals (JSE-HP) and questions related to their demographic, professional, and job satisfaction characteristics. Descriptive statistics and a quantile plot of the ordered empathy scores were used to describe empathy variation. In addition, random-effects logistic regression analysis was performed to explore the association between empathy levels and the included GP characteristics. Results: Empathy scores were negatively skewed, with a mean of 117.9 and a standard deviation of 10.1, ranging from 99 (5th percentile) to 135 (95th percentile). GPs aged 45-54 years and GPs not employed outside of their practice were less likely to have high empathy scores (≥120). Gender, length of time since specialization, length of time in current practice, practice type, practice location, and job satisfaction were not associated with the odds of having high physician empathy. However, the odds of a high empathy score were higher for GPs who stated that the physician-patient relationship and interaction with colleagues contribute highly to job satisfaction, compared with the reference groups (low and medium contribution of these factors). The same trend was seen for GPs who reported a high contribution to job satisfaction from intellectual stimulation.
In contrast, a high contribution of economic profit and prestige did not increase the odds of a high empathy score. Conclusions: Although empathy levels were generally high, we observed substantial variation among this population of Danish GPs. This variation is positively associated with valuing interpersonal relationships and interaction with colleagues, and negatively associated with middle age (45-54 years) and lack of outside employment. There is room to increase GP empathy via educational and organizational interventions and, consequently, to improve healthcare quality and outcomes.
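The association between a dichotomized empathy score (≥120) and a job-satisfaction item can be sketched with ordinary logistic regression on synthetic survey data. The study used a random-effects variant; the variable names, threshold placement, and simulated effect below are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
# Synthetic GP survey: JSE-HP-like empathy score and one satisfaction
# item ("relationships contribute highly to job satisfaction").
n = 464
rel_high = rng.integers(0, 2, n)
score = rng.normal(116, 10, n) + 4 * rel_high   # simulated positive effect
df = pd.DataFrame({
    "high_empathy": (score >= 120).astype(int),  # dichotomize at 120
    "rel_high": rel_high,
})

# Logistic regression of high-empathy status on the satisfaction item.
fit = smf.logit("high_empathy ~ rel_high", df).fit(disp=0)
odds_ratio = float(np.exp(fit.params["rel_high"]))
```

An odds ratio above 1 for the satisfaction item corresponds to the abstract's finding that GPs valuing the physician-patient relationship had higher odds of a high empathy score.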
This study compares the impact of sugar-sweetened beverages (SSBs) tax between moderate and high consumers in Australia. The key methodological contribution is that price response heterogeneity is identified while controlling for censoring of consumption at zero and endogeneity of expenditure by using a finite mixture instrumental variable Tobit model. The SSB price elasticity estimates show a decreasing trend across increasing consumption quantiles, from -2.3 at the median to -0.2 at the 95th quantile. Although high consumers of SSBs have a less elastic demand for SSBs, their very high consumption levels imply that a tax would achieve higher reduction in consumption and higher health gains. Our results also suggest that an SSB tax would represent a small fiscal burden for consumers whatever their pre-policy level of consumption, and that an excise tax should be preferred to an ad valorem tax. Copyright © 2015 John Wiley & Sons, Ltd.
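The abstract's central point, that a smaller elasticity among high consumers can still yield a larger absolute reduction, follows from simple arithmetic on the reported elasticities. The 20% price rise and the baseline consumption levels below are hypothetical.

```python
def consumption_change(elasticity: float, price_rise: float) -> float:
    """Approximate proportional change in consumption for a given
    price elasticity and proportional price increase."""
    return elasticity * price_rise

# Reported elasticities: -2.3 at the median, -0.2 at the 95th quantile.
median_change = consumption_change(-2.3, 0.20)   # proportional change
p95_change = consumption_change(-0.2, 0.20)

# Hypothetical baselines (liters/week): high consumers drink far more,
# so a small proportional cut can exceed the median consumer's cut
# in absolute terms.
abs_cut_median = 1.0 * abs(median_change)
abs_cut_p95 = 12.0 * abs(p95_change)
```

Under these illustrative baselines the 95th-quantile consumer's absolute reduction exceeds the median consumer's despite the far smaller elasticity, which is why the tax is projected to deliver its largest health gains among heavy consumers.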
The hospital at which liver transplantation (LT) is performed has a substantial impact on post-LT outcomes. Center-specific outcome data are closely monitored not only by the centers themselves but also by patients and government regulatory agencies. However, the true magnitude of this center effect, apart from the effects of the region and donor service area (DSA) as well as recipient and donor determinants of graft survival, has not been examined. We analyzed data submitted to the Organ Procurement and Transplantation Network for all adult (age ≥ 18 years) primary LT recipients (2005-2008). Using a mixed effects, proportional hazards regression analysis, we modeled graft failure within 1 year after LT on the basis of center (de-identified), region, DSA, and donor and recipient characteristics. At 115 unique centers, 14,654 recipients underwent transplantation. Rates of graft loss within a year varied from 5.9% for the lowest quartile of centers to 20.2% for the highest quartile. Gauged by a comparison of the 75th and 25th percentiles of the data, the magnitude of the center effect on graft survival (1.49-fold change) was similar to that of the recipient Model for End-Stage Liver Disease (MELD) score (1.47) and the donor risk index (DRI; 1.45). The center effect was similar across the DRI and MELD score quartiles and was not associated with a center’s annual LT volume. After stratification by region and DSA, the magnitude of the center effect, though decreased, remained significant and substantial (1.30-fold interquartile difference). In conclusion, the LT center is a significant predictor of graft failure that is independent of region and DSA as well as donor and recipient characteristics. Liver Transpl, 2013. © 2013 AASLD.
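The interquartile fold-change used above to gauge the center effect can be sketched as the exponentiated difference between the 75th and 25th percentiles of center effects on the log-hazard scale. The simulated effects below are assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical per-center random effects on the log-hazard scale
# for 115 transplant centers.
center_log_hazard = rng.normal(0, 0.3, 115)

# Interquartile fold-change: the hazard ratio between a center at the
# 75th percentile and one at the 25th percentile (the paper reports
# 1.49 overall and 1.30 after stratifying by region and DSA).
q25, q75 = np.percentile(center_log_hazard, [25, 75])
iqr_fold_change = float(np.exp(q75 - q25))
```

Expressing the spread of center effects as a 75th-vs-25th percentile hazard ratio is what allows the paper's direct comparison with the MELD score (1.47) and the donor risk index (1.45) on a common scale.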