OBJECTIVE: To investigate whether biologic-free remission can be achieved in patients with early, active axial spondyloarthritis (SpA) who were in partial remission after 28 weeks of infliximab (IFX)+naproxen (NPX) or placebo (PBO)+NPX treatment and whether treatment with NPX was superior to no treatment to maintain disease control. METHOD: Infliximab as First-Line Therapy in Patients with Early Active Axial Spondyloarthritis Trial (INFAST) Part 1 was a double-blind, randomised, controlled trial in biologic-naïve patients with early, active, moderate-to-severe axial SpA treated with either IFX 5 mg/kg+NPX 1000 mg/d or PBO+NPX 1000 mg/d for 28 weeks. Patients achieving Assessment of SpondyloArthritis international Society (ASAS) partial remission at week 28 continued to Part 2 and were randomised (1:1) to NPX or no treatment until week 52. Treatment group differences in ASAS partial remission and other efficacy variables were assessed through week 52 with Fisher exact tests. RESULTS: At week 52, similar percentages of patients in the NPX group (47.5%, 19/40) and the no-treatment group (40.0%, 16/40) maintained partial remission (p=0.65). Median duration of partial remission was 23 weeks in the NPX group and 12.6 weeks in the no-treatment group (p=0.38). Mean Bath Ankylosing Spondylitis Disease Activity Index scores were low at week 28, the start of follow-up treatment (NPX, 0.7; no treatment, 0.6), and remained low at week 52 (NPX, 1.2; no treatment, 1.7). CONCLUSIONS: In patients with axial SpA who reached partial remission after treatment with either IFX+NPX or NPX alone, disease activity remained low, and about half of the patients remained in remission during the 6 months in which NPX was continued or all treatment was stopped.
Tornadoes cause loss of life and damage to property each year in the United States and around the world. The largest impacts come from ‘outbreaks’ consisting of multiple tornadoes closely spaced in time. Here we find an upward trend in the annual mean number of tornadoes per US tornado outbreak for the period 1954-2014. Moreover, the variance of this quantity is increasing more than four times as fast as the mean. The mean and variance of the number of tornadoes per outbreak vary according to Taylor’s power law of fluctuation scaling (TL), with parameters that are consistent with multiplicative growth. Tornado-related atmospheric proxies show similar power-law scaling and multiplicative growth. Path-length-integrated tornado outbreak intensity also follows TL, but with parameters consistent with sampling variability. The observed TL power-law scaling of outbreak severity means that extreme outbreaks are more frequent than would be expected if mean and variance were independent or linearly related.
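Taylor's power law can be checked with an ordinary least-squares fit on log-transformed (mean, variance) pairs; a slope near 2 is the signature of multiplicative growth. The sketch below uses made-up yearly values purely for illustration (none of the numbers come from the tornado record):

```python
import math

def fit_taylor_law(means, variances):
    """Fit Taylor's power law, variance = a * mean^b, by ordinary
    least squares on the log-log transformed data:
    log(var) = log(a) + b * log(mean)."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    log_a = my - b * mx
    return math.exp(log_a), b

# Hypothetical yearly (mean, variance) pairs for tornadoes per outbreak;
# variance growing roughly as mean^2 is the multiplicative-growth signature.
means = [5.0, 6.0, 7.5, 9.0, 11.0]
variances = [25.0, 36.0, 56.25, 81.0, 121.0]  # exactly mean^2 here
a, b = fit_taylor_law(means, variances)
print(round(a, 3), round(b, 3))  # a slope b near 2 indicates multiplicative growth
```

On real outbreak data the fit is noisy, but the same log-log regression recovers the TL exponent.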
- CMAJ: Canadian Medical Association Journal (Journal de l'Association médicale canadienne)
Meta-analyses of continuous outcomes typically provide enough information for decision-makers to evaluate the extent to which chance can explain apparent differences between interventions. The interpretation of the magnitude of these differences - from trivial to large - can, however, be challenging. We investigated clinicians' understanding and perceptions of usefulness of 6 statistical formats for presenting continuous outcomes from meta-analyses (standardized mean difference, minimal important difference units, mean difference in natural units, ratio of means, relative risk and risk difference).
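For readers unfamiliar with these formats, the sketch below computes four of them from two-group summary statistics (the relative risk and risk difference additionally require dichotomizing the outcome, so they are omitted here). All numbers are hypothetical, and the SMD uses the pooled-SD (Cohen's d) convention:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    # Pooled standard deviation across the two groups
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def continuous_effect_formats(m1, sd1, n1, m2, sd2, n2, mid=None):
    """Return several presentation formats for a continuous outcome:
    mean difference (MD), standardized mean difference (SMD),
    ratio of means (RoM), and the MD expressed in minimal-important-
    difference (MID) units when an MID is supplied."""
    md = m1 - m2
    smd = md / pooled_sd(sd1, n1, sd2, n2)
    rom = m1 / m2
    out = {"MD": md, "SMD": smd, "RoM": rom}
    if mid is not None:
        out["MID units"] = md / mid
    return out

# Hypothetical pain-score summary data (0-10 scale, MID assumed to be 1.0)
res = continuous_effect_formats(m1=4.0, sd1=2.0, n1=50, m2=5.0, sd2=2.0, n2=50, mid=1.0)
print(res)
```

The same underlying difference thus appears as -1.0 points, -0.5 SD units, a ratio of 0.8, or -1.0 MIDs, which is precisely why clinicians may interpret the formats differently.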
Initial studies of heartworm preventive drugs all yielded an observed efficacy of 100% with a single dose, and based on these data the US Food and Drug Administration (FDA) required all products to meet this standard for approval. Those initial studies, however, were based on just a few strains of parasites and therefore were not representative of the full assortment of circulating biotypes. This issue has come to light in recent years, as it has become common for studies to yield less than 100% efficacy. This has changed the landscape for the testing of new products, because heartworm efficacy studies lack the statistical power to conclude that finding zero worms is different from finding a few worms.
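The power problem can be seen with a simple binomial sketch: under the usual assumption that each implanted worm survives treatment independently, even a product that is 99.9% (rather than 100%) effective would often produce an all-zero worm count in a small trial. The trial size and worm burden below are made-up illustrative values, not figures from any actual regulatory study:

```python
def prob_all_zero(n_dogs, worms_per_dog, efficacy):
    """Probability that every treated dog yields zero adult worms,
    assuming each implanted worm independently survives treatment
    with probability (1 - efficacy)."""
    kill = efficacy  # probability that a given worm is cleared
    return kill ** (n_dogs * worms_per_dog)

# Hypothetical trial: 8 treated dogs, 50 worms implanted per dog.
for eff in (1.0, 0.999, 0.995, 0.99):
    print(eff, round(prob_all_zero(8, 50, eff), 3))
```

With 400 worms at risk in total, a 99.9%-effective product still gives an all-zero result about two-thirds of the time, so observing zero worms cannot distinguish 100% from 99.9% efficacy at this sample size.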
Handgrip strength is an important biomarker of healthy ageing and a powerful predictor of future morbidity and mortality in both younger and older populations. Therefore, the measurement of handgrip strength is increasingly used as a simple but efficient screening tool for health vulnerability. This study presents normative reference values for handgrip strength in Germany for use in research and clinical practice. It is the first study to provide normative data across the life course that is stratified by sex, age, and body height. The study used a nationally representative sample of participants aged 17-90. It was based on pooled data from five waves of the German Socio-Economic Panel (2006-2014) and involved a total of 11,790 persons living in Germany (providing 25,285 observations). Handgrip strength was measured with a Smedley dynamometer. Results showed that peak mean values of handgrip strength are reached in men’s and women’s 30s and 40s, after which handgrip strength declines in a linear fashion with age. Following published recommendations, the study used a cut-off at 2 SD below the sex-specific peak mean value across the life course to define a ‘weak grip’. Less than 10% of women and men aged 65-69 were classified as weak according to this definition, a share that increases to about half of the population aged 80-90. Based on survival analysis linking handgrip strength to a relevant outcome, however, a ‘critically weak grip’ that warrants further examination was estimated to begin already at 1 SD below the group-specific mean value.
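The two cut-offs reduce to a one-line calculation; the peak mean and SD below are placeholder values, not the study's German norms:

```python
def grip_cutoffs(peak_mean, peak_sd):
    """Cut-offs of the kind used in the study: a 'weak grip' at
    2 SD below the sex-specific peak mean, and a 'critically weak
    grip' at 1 SD below the (group-specific) mean."""
    return {
        "weak": peak_mean - 2 * peak_sd,
        "critically_weak": peak_mean - 1 * peak_sd,
    }

# Hypothetical peak values in kg, for illustration only
c = grip_cutoffs(peak_mean=48.0, peak_sd=8.0)
print(c)
```

Note that in the study the two thresholds are anchored to slightly different references (sex-specific peak mean vs. group-specific mean); the sketch uses one reference for simplicity.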
After radioactive incidents, the exposure risk from daily activities among children is a major public concern. However, limited methods are available for evaluating this risk, which is essential to future health risk management. To this end, this study assessed the relationship between the behavioral patterns of school children and radiation exposure over a period of 18-20 months following the 2011 Fukushima nuclear incident. The assessed population comprised 520 school children from Minamisoma city, located 20 km north of the nuclear plant. Dose data were obtained using individual dosimeters and from the results of a behavior survey administered by the City Office. The mean dose over the study period was 0.34 mSv, with a standard deviation of 0.14 mSv, indicating an annual dose of ∼1.36 mSv, which includes doses from natural sources. Our results showed that outdoor activity had no statistically significant relationship to the dose. A 0.1 μSv/h increase in the air dose rate at home was associated with a 10% increase in the dose, whereas a 0.01 μSv/h increase in the air dose rate on the school grounds was associated with a 2% increase in the dose. This study indicates that air contamination levels at the places where children spend most of their day are the significant predictors of the dose, as opposed to the levels at outdoor locations where only short periods of time are spent.
BACKGROUND: We sought to characterise the frequency, health outcomes and economic consequences of diagnostic errors in the USA through analysis of closed, paid malpractice claims. METHODS: We analysed diagnosis-related claims from the National Practitioner Data Bank (1986-2010). We describe error type, outcome severity and payments (in 2011 US dollars), comparing diagnostic errors to other malpractice allegation groups and inpatient to outpatient within diagnostic errors. RESULTS: We analysed 350 706 paid claims. Diagnostic errors (n=100 249) were the leading type (28.6%) and accounted for the highest proportion of total payments (35.2%). The most frequent outcomes were death, significant permanent injury, major permanent injury and minor permanent injury. Diagnostic errors more often resulted in death than other allegation groups (40.9% vs 23.9%, p<0.001) and were the leading cause of claims-associated death and disability. More diagnostic error claims were outpatient than inpatient (68.8% vs 31.2%, p<0.001), but inpatient diagnostic errors were more likely to be lethal (48.4% vs 36.9%, p<0.001). The inflation-adjusted, 25-year sum of diagnosis-related payments was US$38.8 billion (mean per-claim payout US$386 849; median US$213 250; IQR US$74 545-484 500). Per-claim payments for permanent, serious morbidity that was 'quadriplegic, brain damage, lifelong care' (4.5%; mean US$808 591; median US$564 300), 'major' (13.3%; mean US$568 599; median US$355 350), or 'significant' (16.9%; mean US$419 711; median US$269 255) exceeded those where the outcome was death (40.9%; mean US$390 186; median US$251 745). CONCLUSIONS: Among malpractice claims, diagnostic errors appear to be the most common, most costly and most dangerous of medical mistakes. We found roughly equal numbers of lethal and non-lethal errors in our analysis, suggesting that the public health burden of diagnostic errors could be twice that previously estimated. 
Healthcare stakeholders should consider diagnostic safety a critical health policy issue.
Low vitamin B-12 concentrations are frequently observed among older adults. Malabsorption is hypothesized to be an important cause of vitamin B-12 inadequacy, but serum vitamin B-12 may also be differently affected by vitamin B-12 intake depending on the food source. We examined associations between dietary sources of vitamin B-12 (meat, fish and shellfish, eggs, dairy) and serum vitamin B-12, using cross-sectional data on 600 Dutch community-dwelling adults (≥65 years). Dietary intake was assessed with a validated food frequency questionnaire. Vitamin B-12 concentrations were measured in serum. Associations were studied over tertiles of vitamin B-12 intake using P for trend, prevalence ratios (PRs), and splines. Whereas men had significantly higher vitamin B-12 intakes than women (median (25th-75th percentile): 4.18 (3.29-5.38) versus 3.47 (2.64-4.40) μg/day), serum vitamin B-12 did not differ between the two sexes (mean ± standard deviation (SD): 275 ± 104 pmol/L versus 290 ± 113 pmol/L). Higher intakes of dairy, meat, and fish and shellfish were significantly associated with higher serum vitamin B-12 concentrations, with meat and dairy (predominantly milk) being the most potent sources. Egg intake did not contribute significantly to higher serum vitamin B-12 concentrations. Thus, dairy and meat were the most important contributors to serum vitamin B-12, followed by fish and shellfish.
Underwater noise from human activities appears to be rising, with ramifications for acoustically sensitive marine organisms and the functioning of marine ecosystems. Policymakers are beginning to address the risk of ecological impact, but are constrained by a lack of data on current and historic noise levels. Here, we present the first nationally coordinated effort to quantify underwater noise levels, in support of UK policy objectives under the EU Marine Strategy Framework Directive (MSFD). Field measurements were made during 2013-2014 at twelve sites around the UK. Median noise levels ranged from 81.5 to 95.5 dB re 1 μPa for one-third octave bands from 63 to 500 Hz. Noise exposure varied considerably, ranging from little anthropogenic influence at the Celtic Sea site to persistent vessel noise at several North Sea sites. Comparison of acoustic metrics found that the RMS level (conventionally used to represent the mean) was highly skewed by outliers, exceeding the 97th percentile at some frequencies. We conclude that environmental indicators of anthropogenic noise should instead use percentiles, to ensure statistical robustness. Power analysis indicated that at least three decades of continuous monitoring would be required to detect trends of similar magnitude to the historic rises in noise levels observed in the Northeast Pacific.
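The skewness argument is easy to demonstrate: because the RMS level energy-averages in the linear (mean-square pressure) domain, even a single loud vessel pass can pull it above nearly all of the individual measurements, while the median is unmoved. The sketch below uses fabricated band levels and a simple nearest-rank percentile, not the study's data or metric definitions:

```python
import math

def rms_level_db(levels_db):
    """Energy-average (RMS) level: convert dB values to mean-square
    pressure, average linearly, then convert back to dB."""
    lin = [10 ** (l / 10.0) for l in levels_db]
    return 10.0 * math.log10(sum(lin) / len(lin))

def percentile(levels_db, q):
    """Simple nearest-rank percentile (no interpolation)."""
    s = sorted(levels_db)
    idx = min(len(s) - 1, int(math.ceil(q / 100.0 * len(s))) - 1)
    return s[max(idx, 0)]

# Fabricated band levels: quiet ambient around 85 dB with a single
# loud vessel pass at 125 dB.
levels = [85.0] * 99 + [125.0]
print(round(rms_level_db(levels), 1))  # pulled far above the bulk of the data
print(percentile(levels, 50))          # median, robust to the outlier
print(percentile(levels, 97))          # 97th percentile, also unaffected here
```

In this toy example the energy average ends up roughly 20 dB above the 97th percentile, mirroring the skew the authors report.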
An important motivation for the construction of biobanks is to discover biomarkers that identify diseases at early, potentially curable stages. This will require biobanks from large numbers of individuals, preferably sampled repeatedly, where the samples are collected and stored under conditions that preserve potential biomarkers. Dried blood samples are attractive for biobanking because of the ease and low cost of collection and storage. Here we have investigated their suitability for protein measurements. Ninety-two proteins with relevance for oncology were analyzed using multiplex proximity extension assays (PEA) in dried blood spots collected on paper and stored for up to 30 years at either +4°C or -24°C.
Our main findings were that 1) the act of drying only slightly influenced detection of blood proteins (average correlation of 0.970), and in a reproducible manner (correlation of 0.999), 2) detection of some proteins was not significantly affected by storage over the full range of three decades (34% and 76% of the analyzed proteins at +4°C and -24°C, respectively), while levels of others decreased slowly during storage, with half-lives in the range of 10 to 50 years, and 3) detectability of proteins was less affected in dried samples stored at -24°C than at +4°C, as the median protein abundance had decreased to 80% and 93% of starting levels after 10 years of storage at +4°C and -24°C, respectively. The results of our study are encouraging, as they suggest an inexpensive means to collect large numbers of blood samples, even by the donors themselves, and to transport and store biobanked samples as spots of whole blood dried on paper. Combined with emerging means to measure hundreds or thousands of proteins, such biobanks could prove of great medical value by greatly enhancing discovery as well as routine analysis of blood biomarkers.
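As a back-of-envelope check (not part of the study's own analysis), assuming simple first-order exponential decay, a residual fraction f after t years implies a half-life T = t·ln(0.5)/ln(f). The reported median residuals of 80% and 93% after 10 years then correspond to half-lives of roughly 31 and 96 years:

```python
import math

def half_life_from_residual(t_years, fraction_remaining):
    """Infer an exponential-decay half-life from the fraction of
    signal remaining after t_years of storage:
    fraction = 0.5 ** (t / T)  =>  T = t * ln(0.5) / ln(fraction)."""
    return t_years * math.log(0.5) / math.log(fraction_remaining)

# Median residual abundances reported after 10 years of storage
print(round(half_life_from_residual(10, 0.80), 1))  # stored at +4 °C
print(round(half_life_from_residual(10, 0.93), 1))  # stored at -24 °C
```

These median-based figures sit at or above the 10-50 year per-protein range because the median pools the stable proteins with the slowly decaying ones.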