
Journal: npj Digital Medicine


The use of apps that record detailed menstrual cycle data presents a new opportunity to study the menstrual cycle. The aim of this study is to describe menstrual cycle characteristics observed from a large database of cycles collected through an app and to investigate associations of menstrual cycle characteristics with cycle length, age and body mass index (BMI). Menstrual cycle parameters, including menstruation, basal body temperature (BBT) and luteinising hormone (LH) tests, as well as age and BMI, were collected anonymously from real-world users of the Natural Cycles app. We analysed 612,613 ovulatory cycles with a mean length of 29.3 days from 124,648 users. The mean follicular phase length was 16.9 days (95% range: 10-30) and the mean luteal phase length was 12.4 days (95% range: 7-17). Mean cycle length decreased by 0.18 days (95% CI: 0.17-0.18, R2 = 0.99) and mean follicular phase length by 0.19 days (95% CI: 0.19-0.20, R2 = 0.99) per year of age from 25 to 45 years. Mean within-woman variation of cycle length was 0.4 days, or 14%, higher in women with a BMI over 35 relative to women with a BMI of 18.5-25. This analysis details variations in menstrual cycle characteristics that are not widely known, yet have significant implications for health and well-being. Clinically, women who wish to plan a pregnancy need to have intercourse on their fertile days. To identify the fertile period, it is important to track physiological parameters such as basal body temperature and not just cycle length.
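The age trend reported above (mean cycle length falling by 0.18 days per year between ages 25 and 45) is the slope of a simple linear fit of mean cycle length against age. A minimal sketch of such a fit, using made-up aggregated values constructed to follow that trend; the intercept of 29.8 days at age 25 is an assumption for illustration, not the study's figure:

```python
import numpy as np

# Hypothetical aggregated values: mean cycle length (days) per year of
# age from 25 to 45, constructed to decline by 0.18 days per year.
ages = np.arange(25, 46)
mean_cycle_length = 29.8 - 0.18 * (ages - 25)

# Degree-1 polynomial fit recovers the per-year slope.
slope, intercept = np.polyfit(ages, mean_cycle_length, 1)
print(round(slope, 2))  # days of cycle length lost per year of age
```

On real per-user data the fit would be applied to age-binned means (with confidence intervals from the per-bin spread), but the slope interpretation is the same.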


For most women of reproductive age, assessing menstrual health and fertility typically involves regular visits to a gynecologist or another clinician. While these evaluations provide critical information on an individual’s reproductive health status, they typically rely on memory-based self-reports, and the results are rarely, if ever, assessed at the population level. In recent years, mobile apps for menstrual tracking have become very popular, allowing us to evaluate the reliability and tracking frequency of millions of self-observations, thereby providing an unparalleled view, both in detail and scale, of menstrual health and its evolution for large populations. The primary aim of this study was to describe the tracking behavior of the app users and their overall observation patterns, in an effort to understand whether they were consistent with previous small-scale medical studies. The secondary aim was to investigate whether their precision allowed the detection and estimation of ovulation timing, which is critical for reproductive and menstrual health. Retrospective self-observation data were acquired from two mobile apps dedicated to the sympto-thermal fertility awareness method, resulting in a dataset of more than 30 million days of observations from over 2.7 million cycles for 200,000 users. Analysis of the data showed that up to 40% of the cycles in which users were seeking pregnancy had recordings every single day. Using Hidden Markov Models to describe the collected data and estimate ovulation timing, we found that the average duration and range of the follicular phase were larger than previously reported, with only 24% of ovulations occurring on cycle days 14 to 15, while the luteal phase duration and range were in line with previous reports, although short luteal phases (10 days or less) were observed more frequently (in up to 20% of cycles).
The digital epidemiology approach presented here can lead to a better understanding of menstrual health and its connection to women’s health overall, which has historically been severely understudied.
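The core idea behind HMM-based ovulation estimation can be sketched with a deliberately tiny model (this is not the authors' actual model): two hidden states, pre- and post-ovulation, emitting "low" or "high" basal body temperature readings, with the ovulation day estimated as the Viterbi path's transition point. All probabilities below are illustrative assumptions:

```python
import numpy as np

# Two hidden states: 0 = pre-ovulation, 1 = post-ovulation.
# All probabilities are illustrative assumptions, in log space.
start = np.log([0.99, 0.01])
trans = np.log([[0.9, 0.1],            # pre tends to persist, may switch
                [1e-6, 1.0 - 1e-6]])   # post effectively absorbing
emit_high = np.array([0.2, 0.9])       # P(high BBT | state), assumed
emit = np.log(np.stack([1 - emit_high, emit_high], axis=1))  # [state, obs]

def viterbi(obs):
    """Most likely hidden-state path; obs is 0 (low) / 1 (high) per day."""
    n = len(obs)
    dp = np.full((n, 2), -np.inf)
    back = np.zeros((n, 2), dtype=int)
    dp[0] = start + emit[:, obs[0]]
    for t in range(1, n):
        for s in range(2):
            scores = dp[t - 1] + trans[:, s]
            back[t, s] = int(np.argmax(scores))
            dp[t, s] = scores[back[t, s]] + emit[s, obs[t]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# 28 daily BBT readings, binarized against a per-user threshold
# (assumed done upstream); note the noisy isolated high on day 14.
obs = [0] * 13 + [1, 0] + [1] * 13
path = viterbi(obs)
ovulation_day = path.index(1) + 1  # first post-ovulation day (1-indexed)
print(ovulation_day)
```

Because the Viterbi decoding weighs the whole sequence, the isolated high reading on day 14 does not trigger an early switch; a real model would add further observation types (LH tests, cervical mucus) and per-user emission parameters.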


We developed a digitally enabled care pathway for acute kidney injury (AKI) management incorporating a mobile detection application, a specialist clinical response team and a care protocol. Clinical outcome data were collected from adults with AKI on emergency admission before (May 2016 to January 2017) and after (May to September 2017) deployment at the intervention site and at a comparator site not receiving the intervention. Changes in the primary outcome (serum creatinine recovery to ≤120% of baseline at hospital discharge) and secondary outcomes (30-day survival, renal replacement therapy, renal or intensive care unit (ICU) admission, worsening AKI stage and length of stay) were measured using interrupted time-series regression. Process-of-care data (time to AKI recognition, time to treatment) were extracted from case notes and compared over two 9-month periods before and after implementation (January to September 2016 and 2017, respectively) using pre-post analysis. There was no step change in renal recovery or any of the secondary outcomes. Trends in creatinine recovery rates (estimated odds ratio (OR) = 1.04, 95% confidence interval (95% CI): 1.00-1.08, p = 0.038) and renal or ICU admission (OR = 0.95, 95% CI: 0.90-1.00, p = 0.044) improved significantly at the intervention site. However, difference-in-difference analyses between sites for creatinine recovery (estimated OR = 0.95, 95% CI: 0.90-1.00, p = 0.053) and renal or ICU admission (OR = 1.06, 95% CI: 0.98-1.16, p = 0.140) were not significant. Among process measures, time to AKI recognition and time to treatment of nephrotoxicity improved significantly (p < 0.001 and p = 0.047, respectively).
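Interrupted time-series (segmented) regression of the kind used here models the outcome with a pre-intervention trend plus a step-change term and a slope-change term at the intervention date; "no step change" means the step coefficient was not significant. A minimal noise-free sketch with assumed coefficients and an assumed intervention at month 9 (not the study's data or model, which used odds ratios):

```python
import numpy as np

# Illustrative monthly outcome with assumed segmented-regression
# coefficients: baseline 0.60, pre-trend +0.005/month, then a +0.03
# step and +0.004/month trend change after the intervention at month 9.
months = np.arange(18.0)
post = (months >= 9).astype(float)              # level-change indicator
since = np.where(post == 1, months - 9, 0.0)    # slope-change term
rate = 0.60 + 0.005 * months + 0.03 * post + 0.004 * since

# Design matrix: [intercept, pre-trend, step change, trend change]
X = np.column_stack([np.ones_like(months), months, post, since])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(np.round(beta, 3))
```

With noise-free data the least-squares fit recovers the assumed coefficients exactly; in practice the step and trend-change estimates come with confidence intervals, and the difference-in-difference comparison contrasts these terms across the two sites.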


Digital data are anticipated to transform medicine. However, most of today’s medical data lack interoperability: hidden in isolated databases, incompatible systems and proprietary software, the data are difficult to exchange, analyze, and interpret. This slows down medical progress, as technologies that rely on these data (artificial intelligence, big data or mobile applications) cannot be used to their full potential. In this article, we argue that interoperability is a prerequisite for the digital innovations envisioned for future medicine. We focus on four areas where interoperable data and IT systems are particularly important: (1) artificial intelligence and big data; (2) medical communication; (3) research; and (4) international cooperation. We discuss how interoperability can facilitate digital transformation in these areas to improve the health and well-being of patients worldwide.


Biomarkers are physiologic, pathologic, or anatomic characteristics that are objectively measured and evaluated as indicators of normal biologic processes, pathologic processes, or biologic responses to therapeutic interventions. Recent advances in the development of mobile digitally connected technologies have led to the emergence of a new class of biomarkers measured across multiple layers of hardware and software. Quantified in ones and zeros, these “digital” biomarkers can support continuous measurements outside the physical confines of the clinical environment. The modular software-hardware combination of these products has created new opportunities for patient care and biomedical research, enabling remote monitoring and decentralized clinical trial designs. However, a systematic approach to assessing the quality and utility of digital biomarkers is needed to ensure an appropriate balance between their safety and effectiveness. This paper outlines key considerations for the development and evaluation of digital biomarkers, examining their role in clinical research and routine patient care.


The future of clinical development is on the verge of a major transformation due to the convergence of large new digital data sources, computing power to identify clinically meaningful patterns in the data using efficient artificial intelligence (AI) and machine-learning (ML) algorithms, and regulators embracing this change through new collaborations. This perspective summarizes insights, recent developments, and recommendations for infusing actionable computational evidence into clinical development and health care from academia, the biotechnology industry, nonprofit foundations, regulators, and technology corporations. Analysis of, and learning from, publicly available biomedical and clinical trial data sets, real-world evidence from sensors, and health records by machine-learning architectures are discussed. Strategies for modernizing the clinical development process through the integration of AI- and ML-based digital methods and secure computing technologies, via recently announced regulatory pathways at the United States Food and Drug Administration, are outlined. We conclude by discussing applications and the impact of digital algorithmic evidence in improving medical care for patients.


The occurrence of drug-drug interactions (DDI) from multiple drug dispensations is a serious problem, both for individuals and for health-care systems, since patients with complications due to DDI are likely to reenter the system at a costlier level. We present a large-scale longitudinal study (18 months) of the DDI phenomenon at the primary- and secondary-care level using electronic health records (EHR) from the city of Blumenau in Southern Brazil (pop. ≈340,000). We found that 181 distinct drug pairs known to interact were dispensed concomitantly to 12% of the patients in the city’s public health-care system. Further, 4% of the patients were dispensed drug pairs likely to result in major adverse drug reactions (ADR), with costs estimated to be much larger than previously reported in smaller studies. The large-scale analysis reveals that women have a 60% increased risk of DDI compared with men; the increase becomes 90% when considering only DDI known to lead to major ADR. Furthermore, DDI risk increases substantially with age; patients aged 70-79 years have a 34% risk of DDI when they are dispensed two or more drugs concomitantly. Interestingly, a statistical null model demonstrates that age- and female-specific risks from increased polypharmacy fall far short of explaining the observed DDI risks in those populations, suggesting unknown social or biological causes. We also provide a network visualization of drugs and demographic factors that characterize the DDI phenomenon, and demonstrate that accurate DDI prediction can be incorporated into health-care and public-health management to reduce DDI-related ADR and costs.
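The null-model reasoning above asks how many DDI would be expected from polypharmacy alone. One simple version, with invented numbers (the interaction fraction and drug counts below are assumptions, not the study's): if a fraction p of all drug pairs are known to interact and interacting pairs were distributed at random, a patient dispensed k drugs concomitantly would have p·C(k, 2) expected interacting pairs.

```python
from math import comb

# Assumed fraction of all drug pairs that are known to interact.
p_interact = 0.02
# Hypothetical numbers of concomitantly dispensed drugs per patient;
# identical across sexes by construction, to isolate polypharmacy.
drugs_per_patient = {"women": [4, 3, 5, 2], "men": [4, 3, 5, 2]}

# Under the null model, expected DDI burden per group is
# sum over patients of p * C(k, 2).
expected = {
    sex: sum(p_interact * comb(k, 2) for k in ks)
    for sex, ks in drugs_per_patient.items()
}
print(expected)  # equal drug counts imply equal expected burden
```

If observed DDI counts for women exceed this polypharmacy-matched expectation, as the study reports, the excess cannot be explained by women simply taking more drugs, pointing to other social or biological causes.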


Technologies leveraging big data, including predictive algorithms and machine learning, are playing an increasingly important role in the delivery of healthcare. However, evidence indicates that such algorithms have the potential to worsen disparities currently intrinsic to the contemporary healthcare system, including racial biases. Blame for these deficiencies has often been placed on the algorithm, but the underlying training data bear greater responsibility for these errors: biased outputs are inexorably produced by biased inputs. The utility, equity, and generalizability of predictive models depend on population-representative training data with robust feature sets. While the conventional paradigm of big data is deductive in nature (clinical decision support), a future model harnesses the potential of big data for inductive reasoning. This may be conceptualized as clinical decision questioning, intended to liberate the human predictive process from preconceived lenses in data solicitation and/or interpretation. Efficacy, representativeness, and generalizability are all heightened in this schema. Thus, the possible risks of biased big data arising from the inputs themselves must be acknowledged and addressed. Awareness of data deficiencies, structures for data inclusiveness, strategies for data sanitation, and mechanisms for data correction can help realize the potential of big data for a personalized-medicine era. Applied deliberately, these considerations could help mitigate the risk that novel applications of big data perpetuate health inequity as they are widely adopted.


The use of data generated passively by personal electronic devices, such as smartphones, to measure human function in health and disease has generated significant research interest. Particularly in psychiatry, objective, continuous quantitation using patients' own devices may result in clinically useful markers that can be used to refine diagnostic processes, tailor treatment choices, improve condition monitoring for actionable outcomes, such as early signs of relapse, and develop new intervention models. If a principal goal for digital phenotyping is clinical improvement, research needs to attend now to factors that will help or hinder future clinical adoption. We identify four opportunities for research directed toward this goal: exploring intermediate outcomes and underlying disease mechanisms; focusing on purposes that are likely to be used in clinical practice; anticipating quality and safety barriers to adoption; and exploring the potential for digital personalized medicine arising from the integration of digital phenotyping and digital interventions. Clinical relevance also means explicitly addressing the needs, preferences, and acceptability criteria of consumers, the ultimate users of digital phenotyping interventions. There is a risk that, without such considerations, the potential benefits of digital phenotyping are delayed or not realized because approaches that are feasible for application in healthcare, and the evidence required to support clinical commissioning, are not developed. Practical steps to accelerate this research agenda include the further development of digital phenotyping technology platforms focusing on scalability and equity, establishing shared data repositories and common data standards, and fostering multidisciplinary collaborations between clinical stakeholders (including patients), computer scientists, and researchers.


More than 400,000 deaths from severe malaria (SM) are reported every year, mainly in African children. The diversity of clinical presentations associated with SM indicates important differences in disease pathogenesis that require specific treatment, and this clinical heterogeneity of SM remains poorly understood. Here, we apply tools from machine learning and model-based inference to harness large-scale data and dissect the heterogeneity in patterns of clinical features associated with SM in 2,904 Gambian children admitted to hospital with malaria. This quantitative analysis reveals features predicting the severity of individual patient outcomes and the dynamic pathways of SM progression, notably inferred without requiring longitudinal observations. Bayesian inference of these pathways allows us to assign quantitative mortality risks to individual patients. By independently surveying expert practitioners, we show that this data-driven approach agrees with and expands the current state of knowledge on malaria progression, while simultaneously providing a data-supported framework for predicting clinical risk.
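Assigning a quantitative mortality risk via Bayesian inference can be sketched with a toy naive-Bayes model over binary clinical features; the prior, the feature names, and all likelihoods below are invented for illustration and are not the paper's fitted values or its actual model.

```python
# Toy Bayesian mortality-risk sketch (all numbers are assumptions).
prior_death = 0.08  # assumed baseline in-hospital mortality

# P(feature present | died), P(feature present | survived) -- assumed
likelihood = {
    "coma": (0.70, 0.15),
    "hyperlactataemia": (0.60, 0.20),
}

def mortality_risk(present):
    """Posterior P(death | observed binary features) via Bayes' rule,
    treating features as conditionally independent (naive Bayes)."""
    p_d, p_s = prior_death, 1 - prior_death
    for feat, (l_d, l_s) in likelihood.items():
        if present.get(feat):
            p_d, p_s = p_d * l_d, p_s * l_s
        else:
            p_d, p_s = p_d * (1 - l_d), p_s * (1 - l_s)
    return p_d / (p_d + p_s)

print(round(mortality_risk({"coma": True, "hyperlactataemia": True}), 3))
```

A patient presenting with both features ends up well above the baseline risk, while a patient with neither falls below it; the paper's approach additionally infers progression pathways between feature patterns rather than treating features as independent.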