Concept: Info-gap decision theory
Background: Formulation and evaluation of public health policy commonly employ science-based mathematical models. For instance, the epidemiological dynamics of TB are dominated, in general, by flow between actively and latently infected populations; modelling is therefore central to planning public health interventions. However, models are highly uncertain because they are based on observations that are geographically and temporally distinct from the population to which they are applied. Aims: We aim to demonstrate the advantages of info-gap theory, a non-probabilistic approach to severe uncertainty for situations in which worst cases cannot be reliably identified and probability distributions are unreliable or unavailable. Info-gap is applied here to mathematical modelling of epidemics and to the analysis of public health decision-making. Methods: Applying info-gap robustness analysis to tuberculosis/HIV (TB/HIV) epidemics, we illustrate the critical role of incorporating uncertainty when formulating recommendations for interventions. Robustness is assessed as the magnitude of uncertainty that a given intervention can tolerate. We illustrate the methodology by exploring interventions that alter the rates of diagnosis, cure, relapse and HIV infection. Results: We demonstrate several policy implications. Equivalences among alternative rates of diagnosis and relapse are identified. The impact of initial TB and HIV prevalence on robustness to uncertainty is quantified. In some configurations, increasing the aggressiveness of an intervention improves the predicted outcome but reduces robustness to uncertainty. Similarly, predicted outcomes may be better at longer target times, but may also be more vulnerable to model error. Conclusions: The info-gap framework is useful for managing model uncertainty and is attractive when uncertainty about model parameters is extreme. When a public health model underlies guidelines, info-gap decision theory provides valuable insight into the confidence of achieving agreed-upon goals.
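The robustness function at the heart of an info-gap analysis can be sketched in a few lines. In this minimal sketch, the outcome model, nominal parameter, and performance requirement are all illustrative stand-ins (not the TB/HIV model of the study): robustness is the largest horizon of uncertainty h for which the worst-case outcome over the uncertainty set still satisfies the requirement.

```python
import numpy as np

def predicted_cases(beta):
    # Hypothetical toy outcome model: predicted cases grow with a
    # transmission parameter beta (stands in for the epidemic model).
    return 1000 * beta

def robustness(beta_nominal, requirement, h_grid=np.linspace(0, 1, 1001)):
    # Info-gap robustness: the largest horizon of uncertainty h such that
    # every beta in U(h) = {beta : |beta - beta_nominal| <= h * beta_nominal}
    # still satisfies predicted_cases(beta) <= requirement.
    h_hat = 0.0
    for h in h_grid:
        worst = predicted_cases(beta_nominal * (1 + h))  # worst case: largest beta
        if worst <= requirement:
            h_hat = h
        else:
            break
    return h_hat
```

A more demanding requirement (fewer allowed cases) shrinks the robustness, which is exactly the trade-off between predicted performance and robustness that the abstract describes.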
BACKGROUND: The advent of endoscopic sphenopalatine artery ligation (ESPAL) for the control of posterior epistaxis provides an effective, low-morbidity treatment option. In the current practice algorithm, ESPAL is pursued after failure of posterior packing. Given the morbidity and limited effectiveness of posterior packing, we sought to determine the cost-effectiveness of first-line ESPAL compared to the current practice model. METHODS: A standard decision analysis model was constructed comparing first-line ESPAL and current practice algorithms. A literature search was performed to determine event probabilities, and published Medicare data largely provided cost parameters. The primary outcomes were cost of treatment and resolution of epistaxis. One-way sensitivity analysis was performed for key parameters. RESULTS: Costs for the first-line ESPAL arm and the current practice arm were $6450 and $8246, respectively. One-way sensitivity analyses were performed for key variables including duration of packing. The baseline difference of $1796 in favor of the first-line ESPAL arm increased to $6263 when the duration of nasal packing was increased from 3 to 5 days. Current practice was favored (cost savings of $437 per patient) if posterior packing duration was decreased from 3 to 2 days. CONCLUSION: This study demonstrates that ESPAL is cost-saving as first-line therapy for posterior epistaxis. Given the improved effectiveness and patient comfort of ESPAL compared to posterior packing, ESPAL should be offered as an initial treatment option for medically stable patients with posterior epistaxis.
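Structurally, this kind of standard decision analysis reduces to expected-cost arithmetic over the branches of each treatment arm. A minimal sketch, with purely illustrative probabilities and costs (the study's actual inputs, which produced the $6450 vs. $8246 result, came from the literature search and Medicare data):

```python
def expected_cost(branches):
    # Expected cost of one decision arm, given (probability, cost) branches.
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * c for p, c in branches)

# Hypothetical branch probabilities and costs, for illustration only:
first_line_espal = [(0.90, 6000),   # ESPAL succeeds
                    (0.10, 12000)]  # failure -> further management
current_practice = [(0.60, 5000),   # posterior packing succeeds
                    (0.40, 14000)]  # packing fails -> salvage ESPAL
```

One-way sensitivity analysis then amounts to sweeping a single parameter (e.g. the packing-failure probability) and recomputing both expected costs until the preferred arm flips.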
Cost-effectiveness analyses (CEA) of randomised controlled trials are a key source of information for health care decision makers. Missing data are, however, a common issue that can seriously undermine their validity. A major concern is that the chance of data being missing may be directly linked to the unobserved value itself [missing not at random (MNAR)]. For example, patients with poorer health may be less likely to complete quality-of-life questionnaires. However, the extent to which this occurs cannot be ascertained from the data at hand. Guidelines recommend conducting sensitivity analyses to assess the robustness of conclusions to plausible MNAR assumptions, but this is rarely done in practice, possibly because of a lack of practical guidance. This tutorial aims to address this by presenting an accessible framework and practical guidance for conducting sensitivity analysis for MNAR data in trial-based CEA. We review some of the methods for conducting sensitivity analysis, but focus on one particularly accessible approach, where the data are multiply-imputed and then modified to reflect plausible MNAR scenarios. We illustrate the implementation of this approach on a weight-loss trial, providing the software code. We then explore further issues around its use in practice.
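The "impute, then modify" approach the tutorial focuses on is commonly implemented as delta-adjustment. A minimal sketch, assuming a crude MAR imputation by resampling observed values (a real analysis would use proper multiple imputation) and an additive shift delta to encode the MNAR scenario; all data below are made up:

```python
import numpy as np

def mnar_sensitivity(y, delta, n_imputations=20, seed=0):
    # Delta-adjustment sensitivity analysis (sketch): impute missing outcomes
    # under MAR -- here crudely, by drawing from the observed values -- then
    # shift each imputed value by delta to reflect an MNAR scenario
    # (e.g. delta < 0: non-responders had poorer quality of life).
    # Returns the pooled mean outcome across imputations (the Rubin's-rules
    # point estimate).
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    missing = np.isnan(y)
    observed = y[~missing]
    estimates = []
    for _ in range(n_imputations):
        y_imp = y.copy()
        y_imp[missing] = rng.choice(observed, size=missing.sum()) + delta
        estimates.append(y_imp.mean())
    return float(np.mean(estimates))
```

Repeating the analysis over a grid of delta values shows how strong the MNAR departure must be before the cost-effectiveness conclusion changes, which is the core output of the sensitivity analysis.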
Role of Reduced-Intensity Conditioning Allogeneic Hematopoietic Stem-Cell Transplantation in Older Patients With De Novo Myelodysplastic Syndromes: An International Collaborative Decision Analysis
- Journal of Clinical Oncology: official journal of the American Society of Clinical Oncology
- Published over 6 years ago
PURPOSE: Myelodysplastic syndromes (MDS) are clonal hematopoietic disorders that are more common in patients aged ≥ 60 years and are incurable with conventional therapies. Reduced-intensity conditioning (RIC) allogeneic hematopoietic stem-cell transplantation is potentially curative but has additional mortality risk. We evaluated RIC transplantation versus nontransplantation therapies in older patients with MDS stratified by International Prognostic Scoring System (IPSS) risk. PATIENTS AND METHODS: A Markov decision model with quality-of-life utility estimates for different MDS and transplantation states was assessed. Outcomes were life expectancy (LE) and quality-adjusted life expectancy (QALE). A total of 514 patients with de novo MDS aged 60 to 70 years were evaluated. Chronic myelomonocytic leukemia, isolated 5q- syndrome, unclassifiable, and therapy-related MDS were excluded. Transplantation using T-cell depletion or HLA-mismatched or umbilical cord donors was also excluded. RIC transplantation (n = 132) stratified by IPSS risk was compared with best supportive care for patients with nonanemic low/intermediate-1 IPSS (n = 123), hematopoietic growth factors for patients with anemic low/intermediate-1 IPSS (n = 94), and hypomethylating agents for patients with intermediate-2/high IPSS (n = 165). RESULTS: For patients with low/intermediate-1 IPSS MDS, RIC transplantation LE was 38 months versus 77 months with nontransplantation approaches. QALE and sensitivity analysis did not favor RIC transplantation across plausible utility estimates. For intermediate-2/high IPSS MDS, RIC transplantation LE was 36 months versus 28 months for nontransplantation therapies. QALE and sensitivity analysis favored RIC transplantation across plausible utility estimates. CONCLUSION: For patients with de novo MDS aged 60 to 70 years, favored treatments vary with IPSS risk. For low/intermediate-1 IPSS, nontransplantation approaches are preferred. For intermediate-2/high IPSS, RIC transplantation offers overall and quality-adjusted survival benefit.
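The engine behind such LE/QALE comparisons is a Markov cohort model: propagate a state-occupancy distribution through a transition matrix and accumulate (utility-weighted) time alive. A minimal sketch with a made-up two-state example; the transition probabilities and utility below are illustrative, not the study's estimates:

```python
import numpy as np

def life_and_qale(P, utilities, start_state=0, cycles=600, cycle_length=1.0):
    # Markov cohort model (sketch). P: transition matrix over health states,
    # with the last state an absorbing death state (utility 0). utilities:
    # per-state quality-of-life weights. Returns (LE, QALE) in units of
    # cycle_length (here, months).
    P = np.asarray(P, dtype=float)
    u = np.asarray(utilities, dtype=float)
    dist = np.zeros(P.shape[0])
    dist[start_state] = 1.0
    le = qale = 0.0
    for _ in range(cycles):
        dist = dist @ P                          # advance the cohort one cycle
        le += dist[:-1].sum() * cycle_length     # expected time alive
        qale += (dist * u).sum() * cycle_length  # utility-weighted time alive
    return le, qale

# Two states: "alive with MDS" (utility 0.8) and "dead" (utility 0),
# with a hypothetical 10% per-month mortality:
le, qale = life_and_qale([[0.9, 0.1], [0.0, 1.0]], [0.8, 0.0])
```

Sensitivity analysis over plausible utility estimates, as reported in the abstract, corresponds to re-running the model with the utility vector varied across its plausible range.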
The impact of anthropogenic activity on ecosystems has highlighted the need to move beyond the biogeographical delineation of species richness patterns to understanding the vulnerability of species assemblages, including the functional components that are linked to the processes they support. We developed a decision theory framework to quantitatively assess the global taxonomic and functional vulnerability of fish assemblages on tropical reefs using a combination of sensitivity to species loss, exposure to threats and extent of protection. Fish assemblages with high taxonomic and functional sensitivity are often exposed to threats but are largely missed by the global network of marine protected areas. We found that areas of high species richness spatially mismatch areas of high taxonomic and functional vulnerability. Nevertheless, there is strong spatial match between taxonomic and functional vulnerabilities suggesting a potential win-win conservation-ecosystem service strategy if more protection is set in these locations.
The objective of this study is to develop a generic multi-attribute decision analysis framework for ranking technologies for ballast water treatment and determining their grades. An evaluation criteria system consisting of eight criteria in four categories was used to evaluate the technologies. The Best-Worst method, a subjective weighting method, and the Criteria Importance Through Inter-criteria Correlation (CRITIC) method, an objective weighting method, were combined to determine the weights of the evaluation criteria. Extension theory was employed to prioritize the technologies for ballast water treatment and determine their grades. An illustrative case including four technologies, i.e. Alfa Laval (T1), Hyde (T2), Unitor (T3), and NaOH (T4), was studied with the proposed method, and Hyde (T2) was identified as the best technology. Sensitivity analysis was also carried out to investigate the effects of the combination coefficients and the criterion weights on the final priority order of the four technologies. The weighted-sum method and TOPSIS were also employed to rank the four technologies, and the results from these two methods are consistent with those of the proposed method.
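The weight-combination step described above can be sketched compactly: blend the subjective and objective weight vectors with a combination coefficient, then score alternatives. The decision matrix, weights, and coefficient below are illustrative assumptions, and the scoring uses the weighted-sum method rather than the paper's extension-theory grading:

```python
import numpy as np

def rank_alternatives(X, w_subjective, w_objective, lam=0.5):
    # Combine subjective (e.g. Best-Worst) and objective (e.g. CRITIC)
    # criterion weights with combination coefficient lam, then score the
    # alternatives with the weighted-sum method. X: decision matrix
    # (alternatives x criteria); benefit-type criteria assumed.
    X = np.asarray(X, dtype=float)
    w = lam * np.asarray(w_subjective) + (1 - lam) * np.asarray(w_objective)
    w = w / w.sum()                 # renormalize the combined weights
    N = X / X.max(axis=0)           # simple linear (max) normalization
    scores = N @ w
    ranking = np.argsort(-scores)   # indices of alternatives, best first
    return ranking, scores
```

Sensitivity analysis on the combination coefficient amounts to sweeping `lam` from 0 (purely objective) to 1 (purely subjective) and checking whether the ranking changes.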
The study aimed to estimate the budget impact of GeneXpert MTB/RIF for diagnosis of tuberculosis from the perspective of the Brazilian National Program for Tuberculosis Control, drawing on a static model using the epidemiological method, from 2013 to 2017. GeneXpert MTB/RIF was compared with two diagnostic sputum smear tests. The study used epidemiological, population, and cost data, exchange rates, and databases from the Brazilian Unified National Health System. Sensitivity analysis of scenarios was performed. Incorporation of GeneXpert MTB/RIF would cost BRL 147 million (roughly USD 45 million) in five years and would have an impact of 23 to 26% in the first two years and some 11% between 2015 and 2017. The results can support Brazilian and other Latin American health administrators in planning and managing the decision on incorporating the technology.
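A static budget-impact model of this kind is, at its core, volume-times-incremental-cost arithmetic over the time horizon. A minimal sketch; the test volumes, unit costs, and uptake schedule below are made-up placeholders, not the Brazilian program's figures:

```python
def budget_impact(eligible_tests_per_year, unit_cost_current, unit_cost_new,
                  uptake_by_year):
    # Static budget-impact sketch: incremental budget = annual test volume
    # x uptake of the new technology in that year x unit-cost difference,
    # summed over the horizon.
    return sum(eligible_tests_per_year * uptake * (unit_cost_new - unit_cost_current)
               for uptake in uptake_by_year)
```

Scenario sensitivity analysis then corresponds to re-running the calculation under alternative volume, cost, and uptake assumptions.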
Deep Uncertainties in Sea-Level Rise and Storm Surge Projections: Implications for Coastal Flood Risk Management
- Risk Analysis: an official publication of the Society for Risk Analysis
- Published about 2 years ago
Sea levels are rising in many areas around the world, posing risks to coastal communities and infrastructures. Strategies for managing these flood risks present decision challenges that require a combination of geophysical, economic, and infrastructure models. Previous studies have broken important new ground on the considerable tensions between the costs of upgrading infrastructure and the damages that could result from extreme flood events. However, many risk-based adaptation strategies remain silent on certain potentially important uncertainties, as well as the tradeoffs between competing objectives. Here, we implement and improve on a classic decision-analytical model (Van Dantzig 1956) to: (i) capture tradeoffs across conflicting stakeholder objectives, (ii) demonstrate the consequences of structural uncertainties in the sea-level rise and storm surge models, and (iii) identify the parametric uncertainties that most strongly influence each objective using global sensitivity analysis. We find that the flood adaptation model produces potentially myopic solutions when formulated using traditional mean-centric decision theory. Moving from a single-objective problem formulation to one with multiobjective tradeoffs dramatically expands the decision space, and highlights the need for compromise solutions to address stakeholder preferences. We find deep structural uncertainties that have large effects on the model outcome, with the storm surge parameters accounting for the greatest impacts. Global sensitivity analysis effectively identifies important parameter interactions that local methods overlook, and that could have critical implications for flood adaptation strategies.
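The classic Van Dantzig formulation trades a linear dike-heightening cost against exponentially decaying discounted expected flood damages, and under those assumptions the cost-minimizing heightening has a closed form. A sketch of the single-objective, mean-centric baseline that the abstract improves on; parameter values in the test are placeholders, not calibrated:

```python
import numpy as np

def total_cost(H, I0, k, V, p0, alpha, delta):
    # Van Dantzig-style single-objective cost: investment in raising the
    # dike by H, plus the discounted expected value of flood damages, with
    # flood (exceedance) probability decaying exponentially in H.
    investment = I0 + k * H
    expected_damages = V * p0 * np.exp(-alpha * H) / delta
    return investment + expected_damages

def optimal_heightening(I0, k, V, p0, alpha, delta):
    # Closed-form minimizer of the convex total cost (clipped at zero).
    return max(0.0, np.log(V * p0 * alpha / (delta * k)) / alpha)
```

The multiobjective reformulation described in the abstract replaces this single aggregated cost with separate investment, damage, and reliability objectives, which is what expands the decision space.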
Discrimination tests are used in food companies to quantify small differences between products. Within the diversity of methods available, some are quicker to conduct, whereas others are more sensitive or statistically powerful. One class of methods comprises the reminder tasks, in which the reference product is given before tasting the actual test stimuli. During the task, such a ‘reminder’ can be compared directly to each test stimulus or, alternatively, only serve to prime the memory of the judge without being taken into account in decision-making. Previous research with trained judges provided evidence for the latter process, while research with untrained consumers has provided some evidence for the former. Two studies were conducted with untrained consumers using the A Not-AR and 2-AFCR reminder tasks. The objectives were to determine the decision strategies used in, and the relative sensitivity of, the tasks. In addition, the use of an “authenticity test” was explored to see whether it has a positive effect on test performance. In the first study, mayonnaise and ice tea with small stimulus differences (d'<1) were used in A Not-AR and 2-AFCR. Results were compared to those from A Not-A and 2-AFC tasks, with and without an authenticity test. It was difficult to draw clear conclusions about the decision strategy used, though the authenticity test increased sensitivity to these small differences, improving the performance of 6 out of 8 tests. In the second study, ice teas with larger stimulus differences (at two levels) were tested using the A Not-AR and 2-AFCR tasks, in comparison with the same-different task. The results showed that consumers use the less optimal strategies and that the authenticity test decreases performance, contradicting the results of the first study. It appears that for very small stimulus differences the authenticity test can improve performance, but with larger differences it decreases performance; it seems to confuse the judges.
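The sensitivity measure d' used to equate "small" and "large" stimulus differences across tasks comes from a Thurstonian model; for the 2-AFC task the standard mapping from proportion correct is d' = √2 · z(pc). A minimal sketch (the proportion in the test is a made-up example, not a result from these studies):

```python
from math import sqrt
from statistics import NormalDist

def dprime_2afc(proportion_correct):
    # Thurstonian sensitivity for the 2-AFC task: d' = sqrt(2) * z(pc),
    # where z is the inverse standard-normal CDF. pc = 0.5 (chance) maps
    # to d' = 0; pc must lie strictly between 0 and 1.
    return sqrt(2) * NormalDist().inv_cdf(proportion_correct)
```

Other tasks (A Not-A, same-different, reminder variants) use different psychometric functions to map performance to d', which is what allows their relative sensitivity to be compared on a common scale.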
Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC), and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found HAL unaffected when DC error was less than 5%; a DC error of less than 10% will affect HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle, and hand activity level from videos of workers performing industrial tasks.