Journal: npj Digital Medicine
Patients with chronic pain commonly believe their pain is related to the weather. Scientific evidence to support this belief is inconclusive, in part because of the difficulty of collecting a large dataset of patients who frequently record their pain symptoms across a variety of weather conditions. Smartphones offer an opportunity to collect such data and overcome these difficulties. Our study, Cloudy with a Chance of Pain, analysed daily data from 2658 patients collected over a 15-month period. The analysis demonstrated significant yet modest relationships between pain and relative humidity, pressure and wind speed, with correlations remaining even when accounting for mood and physical activity. This research highlights how citizen-science experiments can collect large datasets on real-world populations to address long-standing health questions. These results will act as a starting point for a future system that helps patients better manage their health through pain forecasts.
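Accounting for mood and physical activity as described above amounts to estimating a partial correlation: correlate pain with a weather variable after regressing both on the confounders. A minimal sketch on synthetic data (the variable names, effect sizes, and data below are illustrative, not the study's):

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation of x and y after regressing out covariates (OLS residuals)."""
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Synthetic daily records: pain driven partly by humidity, partly by mood.
rng = np.random.default_rng(0)
n = 1000
humidity = rng.normal(size=n)
mood = rng.normal(size=n)
activity = rng.normal(size=n)
pain = 0.3 * humidity + 0.5 * mood + rng.normal(size=n)

# Pain-humidity correlation adjusted for mood and physical activity.
r_adjusted = partial_corr(pain, humidity, [mood, activity])
```

In this construction the adjusted correlation stays close to the humidity effect built into the simulation, illustrating a weather-pain association that survives adjustment for confounders.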
The use of apps that record detailed menstrual cycle data presents a new opportunity to study the menstrual cycle. The aim of this study is to describe menstrual cycle characteristics observed from a large database of cycles collected through an app, and to investigate associations of menstrual cycle characteristics with cycle length, age and body mass index (BMI). Menstrual cycle parameters, including menstruation, basal body temperature (BBT) and luteinising hormone (LH) tests, as well as age and BMI, were collected anonymously from real-world users of the Natural Cycles app. We analysed 612,613 ovulatory cycles with a mean length of 29.3 days from 124,648 users. The mean follicular phase length was 16.9 days (95% CI: 10-30) and the mean luteal phase length was 12.4 days (95% CI: 7-17). Mean cycle length decreased by 0.18 days (95% CI: 0.17-0.18, R2 = 0.99) and mean follicular phase length decreased by 0.19 days (95% CI: 0.19-0.20, R2 = 0.99) per year of age from 25 to 45 years. Mean variation of cycle length per woman was 0.4 days, or 14%, higher in women with a BMI of over 35 relative to women with a BMI of 18.5-25. This analysis details variations in menstrual cycle characteristics that are not widely known yet have significant implications for health and well-being. Clinically, women who wish to plan a pregnancy need to have intercourse on their fertile days. To identify the fertile period, it is important to track physiological parameters such as basal body temperature and not just cycle length.
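The reported age trend is a simple linear relationship. As a sketch, fitting a line to hypothetical per-age mean cycle lengths constructed from the abstract's summary figures recovers the slope of -0.18 days per year of age (the data here are synthetic, built from the reported summary, not the study's raw cycles):

```python
import numpy as np

# Hypothetical per-age means consistent with the reported trend: mean cycle
# length falls by ~0.18 days per year of age between ages 25 and 45.
ages = np.arange(25, 46)
mean_length = 29.3 - 0.18 * (ages - 25)

# Ordinary least squares fit of mean length on age (degree-1 polynomial).
slope, intercept = np.polyfit(ages, mean_length, 1)
```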
For most women of reproductive age, assessing menstrual health and fertility typically involves regular visits to a gynecologist or another clinician. While these evaluations provide critical information on an individual’s reproductive health status, they typically rely on memory-based self-reports, and the results are rarely, if ever, assessed at the population level. In recent years, mobile apps for menstrual tracking have become very popular, allowing us to evaluate the reliability and tracking frequency of millions of self-observations, thereby providing an unparalleled view, both in detail and scale, of menstrual health and its evolution for large populations. In particular, the primary aim of this study was to describe the tracking behavior of the app users and their overall observation patterns in an effort to understand if they were consistent with previous small-scale medical studies. The secondary aim was to investigate whether their precision allowed the detection and estimation of ovulation timing, which is critical for reproductive and menstrual health. Retrospective self-observation data were acquired from two mobile apps dedicated to the application of the sympto-thermal fertility awareness method, resulting in a dataset of more than 30 million days of observations from over 2.7 million cycles from 200,000 users. The analysis of the data showed that up to 40% of the cycles in which users were seeking pregnancy had recordings every single day. With a modeling approach using Hidden Markov Models to describe the collected data and estimate ovulation timing, it was found that the average duration and range of the follicular phase were larger than previously reported, with only 24% of ovulations occurring on cycle days 14 to 15, while the duration and range of the luteal phase were in line with previous reports, although short luteal phases (10 days or less) were observed more frequently (in up to 20% of cycles).
The digital epidemiology approach presented here can lead to a better understanding of menstrual health and its connection to women’s health overall, which has historically been severely understudied.
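The Hidden Markov Model approach can be sketched as follows: hidden phase states emit noisy daily observations, and the most likely state path (via the Viterbi algorithm) places ovulation at the phase transition. The toy two-state model below over binarised BBT readings illustrates the technique only; the two-state structure and every parameter are invented here, not taken from the paper:

```python
import numpy as np

# Two hidden states over cycle days: "follicular" (0) and "luteal" (1),
# emitting binarised BBT readings (0 = low, 1 = high). Ovulation is
# estimated as the day of the Viterbi path's 0 -> 1 transition.
states = 2
log_start = np.log([0.99, 0.01])            # cycles start in follicular phase
log_trans = np.log([[0.9, 0.1],             # follicular may switch to luteal
                    [1e-6, 1.0 - 1e-6]])    # luteal is (near) absorbing
log_emit = np.log([[0.8, 0.2],              # follicular: mostly low BBT
                   [0.2, 0.8]])             # luteal: mostly high BBT

def viterbi(obs):
    """Most likely hidden-state sequence for binarised BBT readings."""
    T = len(obs)
    dp = np.full((T, states), -np.inf)
    back = np.zeros((T, states), dtype=int)
    dp[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        for s in range(states):
            scores = dp[t - 1] + log_trans[:, s]
            back[t, s] = np.argmax(scores)
            dp[t, s] = scores[back[t, s]] + log_emit[s, obs[t]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Noisy 28-day cycle: low BBT for ~16 days (with one spurious high reading
# on day 8), then a sustained rise for the final 12 days.
obs = [0] * 7 + [1] + [0] * 8 + [1] * 12
path = viterbi(obs)
ovulation_day = path.index(1) + 1  # first luteal day (1-indexed)
```

Because the luteal state is nearly absorbing, a single spurious high reading early in the cycle does not flip the path; the estimated transition lands where the sustained temperature rise begins.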
The global burden of diabetic retinopathy (DR) continues to worsen and DR remains a leading cause of vision loss worldwide. Here, we describe an algorithm to predict DR progression by means of deep learning (DL), using as input color fundus photographs (CFPs) acquired at a single visit from a patient with DR. The proposed DL models were designed to predict future DR progression, defined as 2-step worsening on the Early Treatment Diabetic Retinopathy Study (ETDRS) Diabetic Retinopathy Severity Scale, and were trained against DR severity scores assessed after 6, 12, and 24 months from the baseline visit by masked, well-trained, human reading center graders. One of these models (prediction at month 12) achieved an area under the curve (AUC) of 0.79. Interestingly, our results highlight the importance of the predictive signal located in the peripheral retinal fields, not routinely collected for DR assessments, and the importance of microvascular abnormalities. Our findings show the feasibility of predicting future DR progression by leveraging CFPs of a patient acquired at a single visit. Upon further development on larger and more diverse datasets, such an algorithm could enable early diagnosis and referral to a retina specialist for more frequent monitoring and even consideration of early intervention. Moreover, it could also improve patient recruitment for clinical trials targeting DR.
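The reported AUC of 0.79 can be read as the probability that the model scores a randomly chosen progressing eye above a randomly chosen non-progressing one (the Mann-Whitney identity). A small sketch with made-up scores and labels, not the study's predictions:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum identity: the fraction of (positive, negative)
    pairs in which the positive case receives the higher score (ties half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative scores for 4 progressors (label 1) and 4 non-progressors.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.5, 0.2, 0.1]
```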
Digital data are anticipated to transform medicine. However, most of today’s medical data lack interoperability: hidden in isolated databases, incompatible systems and proprietary software, the data are difficult to exchange, analyze, and interpret. This slows down medical progress, as technologies that rely on these data - artificial intelligence, big data or mobile applications - cannot be used to their full potential. In this article, we argue that interoperability is a prerequisite for the digital innovations envisioned for future medicine. We focus on four areas where interoperable data and IT systems are particularly important: (1) artificial intelligence and big data; (2) medical communication; (3) research; and (4) international cooperation. We discuss how interoperability can facilitate digital transformation in these areas to improve the health and well-being of patients worldwide.
We developed a digitally enabled care pathway for acute kidney injury (AKI) management incorporating a mobile detection application, specialist clinical response team and care protocol. Clinical outcome data were collected from adults with AKI on emergency admission before (May 2016 to January 2017) and after (May to September 2017) deployment at the intervention site and at a second site not receiving the intervention. Changes in primary outcome (serum creatinine recovery to ≤120% baseline at hospital discharge) and secondary outcomes (30-day survival, renal replacement therapy, renal or intensive care unit (ICU) admission, worsening AKI stage and length of stay) were measured using interrupted time-series regression. Processes of care data (time to AKI recognition, time to treatment) were extracted from case notes, and compared over two 9-month periods before and after implementation (January to September 2016 and 2017, respectively) using pre-post analysis. There was no step change in renal recovery or any of the secondary outcomes. Trends for creatinine recovery rates (estimated odds ratio (OR) = 1.04, 95% confidence interval (95% CI): 1.00-1.08, p = 0.038) and renal or ICU admission (OR = 0.95, 95% CI: 0.90-1.00, p = 0.044) improved significantly at the intervention site. However, difference-in-difference analyses between sites for creatinine recovery (estimated OR = 0.95, 95% CI: 0.90-1.00, p = 0.053) and renal or ICU admission (OR = 1.06, 95% CI: 0.98-1.16, p = 0.140) were not significant. Among process measures, time to AKI recognition and treatment of nephrotoxicity improved significantly (p < 0.001 and p = 0.047, respectively).
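The interrupted time-series analysis above can be illustrated with segmented regression: a level (step) term and a post-intervention slope term are added to the baseline trend. The monthly rates below are fabricated for the sketch, and the study itself modelled odds ratios, so this linear version shows only the structure of the design matrix:

```python
import numpy as np

# Hypothetical monthly recovery rates over 24 months; the intervention is
# deployed after month 11, allowing both a level and a slope change.
months = np.arange(24)
post = (months >= 12).astype(float)       # step term (level change)
post_trend = post * (months - 12)         # slope-change term
rate = 0.60 + 0.002 * months + 0.03 * post + 0.004 * post_trend

# Segmented OLS: intercept, pre-intervention slope, step, slope change.
X = np.column_stack([np.ones_like(months, dtype=float),
                     months, post, post_trend])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
intercept, pre_slope, step_change, slope_change = beta
```

A significant `step_change` would indicate an immediate effect of deployment, while a significant `slope_change` would indicate a change in trend, matching the distinction the abstract draws between step changes and trends.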
Biomarkers are physiologic, pathologic, or anatomic characteristics that are objectively measured and evaluated as an indicator of normal biologic processes, pathologic processes, or biological responses to therapeutic interventions. Recent advances in the development of mobile digitally connected technologies have led to the emergence of a new class of biomarkers measured across multiple layers of hardware and software. Quantified in ones and zeros, these “digital” biomarkers can support continuous measurements outside the physical confines of the clinical environment. The modular software-hardware combination of these products has created new opportunities for patient care and biomedical research, enabling remote monitoring and decentralized clinical trial designs. However, a systematic approach to assessing the quality and utility of digital biomarkers to ensure an appropriate balance between their safety and effectiveness is needed. This paper outlines key considerations for the development and evaluation of digital biomarkers, examining their role in clinical research and routine patient care.
The future of clinical development is on the verge of a major transformation due to the convergence of large new digital data sources, computing power to identify clinically meaningful patterns in the data using efficient artificial intelligence and machine-learning algorithms, and regulators embracing this change through new collaborations. This perspective summarizes insights, recent developments, and recommendations for infusing actionable computational evidence into clinical development and health care from academia, the biotechnology industry, nonprofit foundations, regulators, and technology corporations. Analysis and learning from publicly available biomedical and clinical trial data sets, real-world evidence from sensors, and health records by machine-learning architectures are discussed. Strategies for modernizing the clinical development process by integration of AI- and ML-based digital methods and secure computing technologies through recently announced regulatory pathways at the United States Food and Drug Administration are outlined. We conclude by discussing applications and impact of digital algorithmic evidence to improve medical care for patients.
A systematic analysis of Hospital Episodes Statistics (HES) data was performed to determine the effects of the 2017 WannaCry attack on the National Health Service (NHS) by identifying the missed appointments, deaths, and fiscal costs attributable to the ransomware attack. The main outcomes measured were: outpatient appointments cancelled, elective and emergency admissions to hospitals, accident and emergency (A&E) attendances, and deaths in A&E. Compared with the baseline, there was no significant difference in the total activity across all trusts during the week of the WannaCry attack. Trusts had 1% more emergency admissions and 1% fewer A&E attendances per day during the WannaCry week compared with baseline. Hospitals directly infected with the ransomware, however, had significantly fewer emergency and elective admissions: a decrease of about 6% in total admissions per infected hospital per day was observed, with 4% fewer emergency admissions and 9% fewer elective admissions. No difference in mortality was noted. The total economic value of the lower activity at the infected trusts during this time was £5.9m, including £4m in lost inpatient admissions, £0.6m from lost A&E activity, and £1.3m from cancelled outpatient appointments. Among hospitals infected with WannaCry ransomware, there was a significant decrease in the number of attendances and admissions, which corresponded to £5.9m in lost hospital activity. There was no increase in mortality reported, though this is a crude measure of patient harm. Further work is needed to understand the impact of a cyberattack or IT failure on care delivery and patient safety.
The occurrence of drug-drug interactions (DDI) from multiple drug dispensations is a serious problem, both for individuals and health-care systems, since patients with complications due to DDI are likely to reenter the system at a costlier level. We present a large-scale longitudinal study (18 months) of the DDI phenomenon at the primary- and secondary-care level using electronic health records (EHR) from the city of Blumenau in Southern Brazil (pop. ≈340,000). We found that 181 distinct drug pairs known to interact were dispensed concomitantly to 12% of the patients in the city’s public health-care system. Further, 4% of the patients were dispensed drug pairs that are likely to result in major adverse drug reactions (ADR), with costs estimated to be much larger than previously reported in smaller studies. The large-scale analysis reveals that women have a 60% increased risk of DDI as compared to men; the increase becomes 90% when considering only DDI known to lead to major ADR. Furthermore, DDI risk increases substantially with age; patients aged 70-79 years have a 34% risk of DDI when they are dispensed two or more drugs concomitantly. Interestingly, a statistical null model demonstrates that age- and female-specific risks from increased polypharmacy fail by far to explain the observed DDI risks in those populations, suggesting unknown social or biological causes. We also provide a network visualization of drugs and demographic factors that characterize the DDI phenomenon, and demonstrate that accurate DDI prediction can be included in health care and public-health management to reduce DDI-related ADR and costs.
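The null-model comparison can be sketched with a permutation test: the observed female-to-male relative risk of DDI is compared against the distribution obtained after shuffling sex labels, which preserves the overall DDI rate but breaks any sex-specific association. Everything below is simulated for illustration; none of the numbers are the study's:

```python
import numpy as np

# Toy cohort: sex, drug count, and a DDI indicator with a built-in 60%
# excess risk for women (all parameters illustrative).
rng = np.random.default_rng(1)
n = 20000
sex = rng.integers(0, 2, n)                 # 1 = female
n_drugs = rng.poisson(3, n) + 1             # at least one drug each
p = np.clip(0.01 * n_drugs * (1 + 0.6 * sex), 0, 1)
ddi = rng.random(n) < p                     # whether a DDI pair was dispensed

def relative_risk(ddi, group):
    """Risk of DDI in group 1 relative to group 0."""
    return ddi[group == 1].mean() / ddi[group == 0].mean()

rr_obs = relative_risk(ddi, sex)
# Null model: permute sex labels, destroying any true sex-DDI association.
rr_null = np.mean([relative_risk(ddi, rng.permutation(sex))
                   for _ in range(200)])
```

In the simulated data, the observed relative risk stays near the built-in 1.6 excess, while the permuted labels give a relative risk near 1, the signature of a sex-specific effect beyond chance. The study's actual null model additionally conditions on each patient's level of polypharmacy, which this simple shuffle does not.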