Concept: Publication bias
Previous meta-analyses comparing the efficacy of psychotherapeutic interventions for depression were clouded by a limited number of within-study treatment comparisons. This study used network meta-analysis, a novel methodological approach that integrates direct and indirect evidence from randomised controlled trials, to re-examine the comparative efficacy of seven psychotherapeutic interventions for adult depression.
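To illustrate what "integrating direct and indirect evidence" means in the simplest case, the sketch below shows an adjusted indirect comparison (the Bucher method) and inverse-variance pooling of direct and indirect estimates. All numbers and treatment labels are hypothetical, and this is a minimal fixed-effect sketch, not the full network meta-analysis model used in the study.

```python
import math

def indirect_effect(d_ab, se_ab, d_cb, se_cb):
    """Bucher adjusted indirect comparison of A vs C via a common comparator B.

    d_ab: effect estimate of A vs B (e.g. a standardised mean difference)
    d_cb: effect estimate of C vs B
    The indirect A-vs-C estimate is d_ab - d_cb; the variances add.
    """
    d_ac = d_ab - d_cb
    se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)
    return d_ac, se_ac

def combine(d_direct, se_direct, d_indirect, se_indirect):
    """Fixed-effect inverse-variance pooling of direct and indirect evidence."""
    w_d = 1 / se_direct ** 2
    w_i = 1 / se_indirect ** 2
    pooled = (w_d * d_direct + w_i * d_indirect) / (w_d + w_i)
    se = math.sqrt(1 / (w_d + w_i))
    return pooled, se

# Hypothetical summary estimates: therapy A vs waitlist, therapy C vs waitlist.
d_ac, se_ac = indirect_effect(-0.60, 0.15, -0.45, 0.20)
# Combine with a (hypothetical) head-to-head A-vs-C trial.
pooled, pooled_se = combine(-0.20, 0.10, d_ac, se_ac)
print(d_ac, se_ac, pooled, pooled_se)
```

Note that the indirect estimate has a larger standard error than either input, which is why networks with many direct comparisons yield more precise rankings.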
Individual participant data (IPD) meta-analyses, which obtain “raw” data from studies rather than summary data, typically adopt a “two-stage” approach to analysis whereby the IPD within each trial are first reduced to summary measures, which are then combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches, which combine all individual participant data in a single model, have been suggested as providing a more powerful and flexible alternative. However, they are more complex to implement and require statistical support. This study uses an IPD dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way.
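The two-stage approach described above can be sketched in a few lines: stage one reduces each trial's participant-level outcomes to a mean difference and standard error, and stage two pools those summaries by standard inverse-variance weighting. This is a minimal fixed-effect sketch on a hypothetical continuous outcome, not the specific models compared in the study.

```python
import math

def trial_summary(treatment, control):
    """Stage 1: reduce one trial's participant-level outcomes to a
    mean difference and its standard error (hypothetical continuous outcome)."""
    mt = sum(treatment) / len(treatment)
    mc = sum(control) / len(control)
    vt = sum((x - mt) ** 2 for x in treatment) / (len(treatment) - 1)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    md = mt - mc
    se = math.sqrt(vt / len(treatment) + vc / len(control))
    return md, se

def pool(summaries):
    """Stage 2: fixed-effect inverse-variance pooling of (md, se) trial summaries."""
    weights = [1 / se ** 2 for _, se in summaries]
    pooled = sum(w * md for (md, _), w in zip(summaries, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

# Hypothetical participant-level data from two small trials.
trials = [
    ([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]),
    ([2.5, 3.0, 3.5, 2.0], [3.0, 4.0, 3.5, 4.5]),
]
summaries = [trial_summary(t, c) for t, c in trials]
print(pool(summaries))
```

A one-stage analysis would instead fit a single (typically mixed-effects) model to all participants at once, stratifying by trial, which is what makes it more flexible but harder to implement.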
The evidence that many of the findings in the published literature may be unreliable is compelling. There is an excess of positive results, often from studies with small sample sizes, or other methodological limitations, and the conspicuous absence of null findings from studies of a similar quality. This distorts the evidence base, leading to false conclusions and undermining scientific progress. Central to this problem is a peer-review system where the decisions of authors, reviewers, and editors are more influenced by impressive results than they are by the validity of the study design. To address this, BMC Psychology is launching a pilot to trial a new ‘results-free’ peer-review process, whereby editors and reviewers are blinded to the study’s results, initially assessing manuscripts on the scientific merits of the rationale and methods alone. The aim is to improve the reliability and quality of published research, by focusing editorial decisions on the rigour of the methods, and preventing impressive ends justifying poor means.
The aim of this systematic review was to examine the effect of Contrast Water Therapy (CWT) on recovery following exercise-induced muscle damage. Controlled trials were identified from computerized literature searching and citation tracking performed up to February 2013. Eighteen trials met the inclusion criteria; all had a high risk of bias. Pooled data from 13 studies showed that CWT resulted in significantly greater improvements in muscle soreness at the five follow-up time points (<6, 24, 48, 72 and 96 hours) in comparison to passive recovery. Pooled data also showed that CWT significantly reduced muscle strength loss at each follow-up time (<6, 24, 48, 72 and 96 hours) in comparison to passive recovery. Despite comparing CWT to a large number of other recovery interventions, including cold water immersion, warm water immersion, compression, active recovery and stretching, there was little evidence for a superior treatment intervention. The current evidence base shows that CWT is superior to using passive recovery or rest after exercise; the magnitudes of these effects may be most relevant to an elite sporting population. There seems to be little difference in recovery outcome between CWT and other popular recovery interventions.
Guidelines recommend exercise for cardiovascular health, although evidence from trials linking exercise to cardiovascular health through intermediate biomarkers remains inconsistent. We performed a meta-analysis of randomized controlled trials to quantify the impact of exercise on cardiorespiratory fitness and a variety of conventional and novel cardiometabolic biomarkers in adults without cardiovascular disease.
Randomised trials can provide excellent evidence of treatment benefit in medicine. Over the last 50 years, they have been cemented in the regulatory requirements for the approval of new treatments. Randomised trials make up a large and seemingly high-quality proportion of the medical evidence base. However, it has also been acknowledged that a distorted evidence base places a severe limitation on the practice of evidence-based medicine (EBM). We describe four important ways in which the evidence from randomised trials is limited or partial: the problem of applying results, the problem of bias in the conduct of randomised trials, the problem of conducting the wrong trials and the problem of conducting the right trials the wrong way. These problems are not intrinsic to the method of randomised trials or the EBM philosophy of evidence; nevertheless, they are genuine problems that undermine the evidence that randomised trials provide for decision-making and therefore undermine EBM in practice. Finally, we discuss the social dimensions of these problems and how they highlight the indispensable role of judgement when generating and using evidence for medicine. This is the paradox of randomised trial evidence: the trials open up expert judgement to scrutiny, but this scrutiny in turn requires further expertise.
To summarise the evidence from randomised controlled trials of mechanical chest compression devices used during resuscitation after out of hospital cardiac arrest.
To clarify the association between cranberry intake and the prevention of urinary tract infections (UTIs).
Pain is multi-dimensional and may be better addressed through a holistic, biopsychosocial approach. Massage therapy is commonly practiced among patients seeking pain management; however, its efficacy is unclear. This systematic review and meta-analysis is the first to rigorously assess the quality of massage therapy research and evidence for its efficacy in treating pain, function-related and health-related quality of life outcomes across all pain populations.
Randomised trials are a central component of all evidence-informed health care systems and the evidence coming from them helps to support health care users, health professionals and others to make more informed decisions about treatment. The evidence available to trialists to support decisions on design, conduct and reporting of randomised trials is, however, sparse. Trial Forge is an initiative that aims to increase the evidence base for trial decision-making and in doing so, to improve trial efficiency. One way to fill gaps in evidence is to run Studies Within A Trial, or SWATs. This guidance document provides a brief definition of SWATs, an explanation of why they are important and some practical ‘top tips’ that come from existing experience of doing SWATs. We hope the guidance will be useful to trialists, methodologists, funders, approvals agencies and others in making clear what a SWAT is, as well as what is involved in doing one.