Concept: Publication bias
The peer review process is a cornerstone of biomedical research publications. However, it does not always ensure that high-quality articles are published. We aimed to identify all tasks that peer reviewers are expected to perform when evaluating a manuscript reporting the results of a randomized controlled trial (RCT), to rank these tasks by importance, and to determine which of them are explicitly requested by editors in their recommendations to peer reviewers.
The glucose view of self-control posits glucose as the physiological substrate of the self-control “resource”, which yields three direct corollaries: 1) engaging in a specific self-control activity would reduce glucose levels; 2) the glucose level remaining after an initial exertion of self-control would be positively correlated with subsequent self-control performance; and 3) restoring glucose by ingestion would improve impaired self-control performance. The current research conducted a meta-analysis to test how well each of these three corollaries is empirically supported. We also tested the restorative effect of glucose rinsing on subsequent self-control performance after initial exertion. The results provided clear and consistent evidence against the glucose view of self-control: none of the three corollaries was supported. In contrast, the effect of glucose rinsing was significant, but with alarming signs of publication bias. The implications and future directions are discussed.
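One common way to screen a set of effect sizes for the kind of publication bias mentioned above is Egger's regression test, which regresses each study's standardized effect on its precision; a non-zero intercept indicates funnel-plot asymmetry, one possible sign that small null-result studies are missing from the literature. A minimal sketch, using fabricated illustrative data (not values from the meta-analysis described here):

```python
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Egger's regression: regress z-scores (effect/SE) on precision (1/SE).

    A non-zero intercept suggests funnel-plot asymmetry, which is one
    possible sign of publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    z = effects / std_errors          # standardized effects
    precision = 1.0 / std_errors      # inverse standard errors
    result = stats.linregress(precision, z)
    # linregress tests the slope; for Egger's test we want the intercept,
    # so compute its two-sided p-value from its standard error.
    t = result.intercept / result.intercept_stderr
    df = len(effects) - 2
    p_intercept = 2 * stats.t.sf(abs(t), df)
    return result.intercept, p_intercept

# Fabricated pattern typical of bias: small studies (large SE) report
# larger effects, all hovering near "just significant" (z ~ 2).
effects = [0.80, 0.65, 0.50, 0.40, 0.32, 0.28, 0.25, 0.22]
ses     = [0.40, 0.32, 0.25, 0.20, 0.15, 0.12, 0.10, 0.08]
intercept, p = eggers_test(effects, ses)
```

Because the fabricated z-scores sit near 2 regardless of study size, the regression intercept is pushed well above zero, which is exactly the asymmetry signature the test looks for.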
Publication bias jeopardizes evidence-based medicine, mainly through biased literature syntheses. Publication bias may also affect laboratory animal research, but evidence is scarce.
Randomised trials are a central component of all evidence-informed health care systems and the evidence coming from them helps to support health care users, health professionals and others to make more informed decisions about treatment. The evidence available to trialists to support decisions on design, conduct and reporting of randomised trials is, however, sparse. Trial Forge is an initiative that aims to increase the evidence base for trial decision-making and in doing so, to improve trial efficiency. One way to fill gaps in evidence is to run Studies Within A Trial, or SWATs. This guidance document provides a brief definition of SWATs, an explanation of why they are important and some practical ‘top tips’ that come from existing experience of doing SWATs. We hope the guidance will be useful to trialists, methodologists, funders, approvals agencies and others in making clear what a SWAT is, as well as what is involved in doing one.
How should we approach trial design when we can get some, but not all, of the way to the numbers required for a randomised phase III trial? We present an ordered framework for designing randomised trials to address the problem when the ideal sample size is considered larger than the number of participants that can be recruited in a reasonable time frame. Staying with the frequentist approach that is well accepted and understood in large trials, we propose a framework that includes small alterations to the design parameters. These aim to increase the numbers achievable and also potentially reduce the sample size target. The first step should always be to attempt to extend collaborations, consider broadening eligibility criteria and increase the accrual time or follow-up time. The second set of ordered considerations are the choice of research arm, outcome measures, power and target effect. If the revised design is still not feasible, in the third step we propose moving from two- to one-sided significance tests, changing the type I error rate, using covariate information at the design stage, re-randomising patients and borrowing external information. We discuss the benefits of some of these possible changes and warn against others. We illustrate, with a worked example based on the Euramos-1 trial, the application of this framework in designing a trial that is feasible, while still providing a good evidence base to evaluate a research treatment. This framework would allow appropriate evaluation of treatments when large-scale phase III trials are not possible, but where the need for high-quality randomised data is as pressing as it is for common diseases.
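To make the lever described in the third step concrete: under the standard normal-approximation formula, the per-arm sample size for comparing two means is n = 2(z_α + z_β)²σ²/δ², so moving from a two-sided to a one-sided test (or relaxing the type I error rate) shrinks z_α and hence the recruitment target. A minimal sketch with illustrative numbers (not taken from the Euramos-1 design):

```python
import math
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha, power, two_sided=True):
    """Per-arm sample size for a two-arm comparison of means
    (normal approximation): n = 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2.
    """
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative: standardized effect of 0.5 SD, 80% power, alpha = 0.05.
n_two = n_per_arm(delta=0.5, sigma=1.0, alpha=0.05, power=0.80, two_sided=True)
n_one = n_per_arm(delta=0.5, sigma=1.0, alpha=0.05, power=0.80, two_sided=False)
# n_two = 63 per arm; the one-sided test drops this to n_one = 50 per arm.
```

This illustrates why the framework treats the significance-test convention as a design parameter: a single change in z_α cuts the per-arm target by roughly a fifth in this example, though, as the paper notes, such changes need careful justification.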
The peer review process is a cornerstone of biomedical research. We aimed to evaluate the impact of interventions to improve the quality of peer review for biomedical publications.
Pain is multi-dimensional and may be better addressed through a holistic, biopsychosocial approach. Massage therapy is commonly practiced among patients seeking pain management; however, its efficacy is unclear. This systematic review and meta-analysis is the first to rigorously assess the quality of the evidence for massage therapy’s efficacy in treating pain, function-related, and health-related quality of life outcomes in surgical pain populations.
Randomised controlled trials (RCTs) are essential for evidence-based medicine and increasingly rely on front-line clinicians to recruit eligible patients. Clinicians' difficulties with negotiating equipoise are assumed to undermine recruitment, although these issues have not yet been empirically investigated in the context of observable events. We aimed to investigate how clinicians conveyed equipoise during RCT recruitment appointments across six RCTs, with a view to (i) identifying practices that supported or hindered equipoise communication and (ii) exploring how clinicians' reported intentions compared with their actual practices.
Meta-analyses play an important role in cumulative science by combining information across multiple studies and attempting to provide effect size estimates corrected for publication bias. Research on the reproducibility of meta-analyses reveals that errors are common, and the percentage of effect size calculations that cannot be reproduced is much higher than is desirable. Furthermore, the flexibility in inclusion criteria when performing a meta-analysis, combined with the many conflicting conclusions drawn by meta-analyses of the same set of studies performed by different researchers, has led some people to doubt whether meta-analyses can provide objective conclusions.
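As context for the reproducibility concerns above: the core effect-size combination step in a fixed-effect meta-analysis is simple inverse-variance weighting, and reproducing it from reported effects and standard errors is straightforward when those inputs are extracted correctly. A minimal sketch with fabricated numbers:

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Fixed-effect (inverse-variance) pooled estimate.

    Each study is weighted by 1/SE^2, so precise studies dominate;
    the pooled SE is 1/sqrt(sum of weights).
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    return pooled, pooled_se

# Fabricated example: two equally precise studies -> simple average.
pooled, pooled_se = fixed_effect_pool([0.2, 0.4], [0.1, 0.1])
# pooled is 0.3, the midpoint, since the weights are equal.
```

The arithmetic itself is trivial; as the passage notes, irreproducibility typically enters upstream, in which studies are included and how each effect size is calculated from the primary report.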
Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency. This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance this initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants' views on the processes in the life of a randomised trial that should be covered by Trial Forge. General support existed at the workshop for the Trial Forge approach to increase the evidence base for making randomised trial decisions and for improving trial efficiency. Agreed upon key processes included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close down. The process of linking to existing initiatives where possible was considered crucial. Trial Forge will not be a guideline or a checklist but a ‘go to’ website for research on randomised trials methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials. Some of the resources invested in randomised trials are wasted because of limited evidence upon which to base many aspects of design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.