Concept: The Conclusion
In the HPTN 052 study, transmission between HIV-discordant couples was reduced by 96% when the HIV-infected partner received suppressive antiretroviral therapy (ART). We examined two transmission events in which the newly infected partner was diagnosed after the HIV-infected (index) partner initiated therapy. We evaluated the sequence complexity of the viral populations and the antibody reactivity in the newly infected partner to estimate the dates of transmission. In both cases, transmission most likely occurred well before HIV-1 diagnosis of the newly infected partner, either just before the index partner initiated therapy or before viral replication was adequately suppressed by that therapy. This study further strengthens the conclusion that treating the infected partner of a discordant couple is effective in blocking transmission. However, it does not rule out the potential for HIV-1 transmission to occur shortly after initiation of ART, which should be recognized when antiretroviral therapy is used for HIV-1 prevention.
We describe the kinetics of Zika virus (ZIKV) detection in serum and urine samples of 6 patients. Urine samples were positive for ZIKV >10 days after onset of disease, which was a notably longer period than for serum samples. This finding supports the conclusion that urine samples are useful for diagnosis of ZIKV infections.
- World Psychiatry: official journal of the World Psychiatric Association (WPA)
- Published over 3 years ago
The common factors have a long history in the field of psychotherapy theory, research, and practice. To understand the evidence supporting them as important therapeutic elements, the contextual model of psychotherapy is outlined. The evidence, primarily from meta-analyses, is then presented for particular common factors, including alliance, empathy, expectations, cultural adaptation, and therapist differences, followed by the evidence for four factors related to specificity: treatment differences, specific ingredients, adherence, and competence. The evidence supports the conclusion that the common factors are important for producing the benefits of psychotherapy.
- Proceedings of the National Academy of Sciences of the United States of America
- Published over 2 years ago
People use more positive words than negative words. Referred to as “linguistic positivity bias” (LPB), this effect has been found across cultures and languages, prompting the conclusion that it is a panhuman tendency. However, although multiple competing explanations of LPB have been proposed, there is still no consensus on what mechanism(s) generate LPB, or even on whether it is driven primarily by universal cognitive features or by environmental factors. In this work we propose that LPB has remained unresolved because previous research has neglected an essential dimension of language: time. In four studies conducted with two independent, time-stamped text corpora (Google Books Ngrams and the New York Times), we found that LPB in American English has decreased during the last two centuries. We also observed dynamic fluctuations in LPB that were predicted by changes in the objective environment, i.e., war and economic hardship, and by changes in national subjective happiness. In addition to providing evidence that LPB is a dynamic phenomenon, these results suggest that cognitive mechanisms alone cannot account for the observed fluctuations. At the least, LPB likely arises from multiple interacting mechanisms involving subjective, objective, and societal factors. Beyond their theoretical significance, our results demonstrate the value of newly available data sources in addressing long-standing scientific questions.
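As a concrete illustration of the metric at stake, a per-year positivity bias can be computed as the ratio of positive-word to negative-word token counts in a time-stamped corpus. This is a minimal sketch, not the study's actual pipeline; the function name, word lists, and counts below are hypothetical:

```python
def lpb_ratio(year_counts, positive_words, negative_words):
    """Linguistic positivity bias for one year of corpus counts:
    total tokens of positive words divided by total tokens of
    negative words. The word lists are illustrative assumptions."""
    pos = sum(year_counts.get(w, 0) for w in positive_words)
    neg = sum(year_counts.get(w, 0) for w in negative_words)
    return pos / neg if neg else float("inf")

# Hypothetical yearly unigram counts (e.g., from an Ngrams-style corpus)
counts_1900 = {"good": 120, "happy": 40, "bad": 50, "sad": 30}
bias_1900 = lpb_ratio(counts_1900, {"good", "happy"}, {"bad", "sad"})
# bias_1900 == 2.0: positive words used twice as often as negative ones
```

Tracking this ratio year by year is what makes it possible to ask whether the bias is stable (as a panhuman-tendency account would predict) or fluctuates with the objective environment.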
This paper presents the findings of the Belmont Forum’s survey on Open Data which targeted the global environmental research and data infrastructure community. It highlights users' perceptions of the term “open data”, expectations of infrastructure functionalities, and barriers and enablers for the sharing of data. A wide range of good practice examples was pointed out by the respondents which demonstrates a substantial uptake of data sharing through e-infrastructures and a further need for enhancement and consolidation. Among all policy responses, funder policies seem to be the most important motivator. This supports the conclusion that stronger mandates will strengthen the case for data sharing.
The activation mode of the mechanosensitive ion channel, MscL, by lysophosphatidylcholine differs from tension-induced gating
- FASEB Journal: official publication of the Federation of American Societies for Experimental Biology
- Published almost 5 years ago
One of the best-studied mechanosensitive channels is the mechanosensitive channel of large conductance (MscL). MscL senses tension in the membrane evoked by an osmotic down shock and directly couples it to large conformational changes that open the channel. Spectroscopic techniques offer unique possibilities for monitoring these conformational changes, provided that tension can be generated in the lipid bilayer, the native environment of MscL, during the measurements. To this end, asymmetric insertion of l-α-lysophosphatidylcholine (LPC) into the lipid bilayer has been effective; however, how LPC activates MscL is not fully understood. Here, the effects of LPC on tension-sensitive mutants of a bacterial MscL and on MscL homologs with different tension sensitivities are reported, leading to the conclusion that the mode of action of LPC differs from that of applied tension. Our results imply that LPC shifts the free energy of gating by interfering with MscL-membrane coupling. Furthermore, we demonstrate that the fine-tuned addition of LPC can be used for controlled activation of MscL in spectroscopic studies. Mukherjee, N., Jose, M. D., Birkner, J. P., Walko, M., Ingólfsson, H. I., Dimitrova, A., Arnarez, C., Marrink, S. J., Koçer, A. The activation mode of the mechanosensitive ion channel, MscL, by lysophosphatidylcholine differs from tension-induced gating.
Human footprints provide some of the most publicly emotive and tangible evidence of our ancestors. To the scientific community they provide evidence of stature, presence, and behaviour, and, in the case of early hominins, potential evidence with respect to the evolution of gait. While rare in the geological record, the number of footprint sites has increased in recent years, along with the analytical tools available for their study. Many of these sites are at risk from rapid erosion, including the Ileret footprints in northern Kenya, which are second in age only to those at Laetoli (Tanzania). Unlithified, soft-sediment footprint sites such as these pose a significant geoconservation challenge. In the first part of this paper, conservation and preservation options are explored, leading to the conclusion that to ‘record and digitally rescue’ provides the only viable approach. Key to such strategies is the increasing availability of three-dimensional data capture, via optical laser scanning and/or digital photogrammetry. Within the discipline there is a developing schism between those who favour one approach over the other, and geoconservationists and the scientific community require some form of objective appraisal of these alternatives. Consequently, in the second part of this paper we evaluate these alternative approaches and the role they can play in a ‘record and digitally rescue’ conservation strategy. Using modern footprint data, digital models created via optical laser scanning are compared to those generated by state-of-the-art photogrammetry. Both methods give comparable, although subtly different, results. These data are evaluated alongside a review of field deployment issues to provide guidance to the community on the factors that need to be considered in the digital conservation of human/hominin footprints.
The Researching Effective Approaches to Cleaning in Hospitals (REACH) study tested a multimodal cleaning intervention in Australian hospitals. This article reports findings from a pre/post questionnaire, embedded into the REACH study, that was administered prior to the implementation of the intervention and at the conclusion of the study.
Insomnia identity refers to the conviction that one has insomnia, and this sleep complaint can be measured independently of sleep. Conventional wisdom predicts that sleep complaints are synchronous with poor sleep, but crossing the presence or absence of poor sleep with the presence or absence of insomnia identity reveals incongruity with expected patterns. This review of existing research on insomnia identity processes and influence finds that about one-fourth of the population are uncoupled sleepers, meaning there is an uncoupling of sleep and sleep appraisal, and daytime impairment accrues more strongly to those who endorse an insomnia identity. Research supports the conclusion that there is a cost to pathologizing sleep. Individuals claiming an insomnia identity, regardless of sleep status, are at greater risk for a range of sequelae including self-stigma, depression, suicidal ideation, anxiety, hypertension, and fatigue. A broad research agenda is proposed with hypotheses about the sources, clinical mechanisms, and clinical management of insomnia identity.
Head et al. (2015) provided a large collection of p-values that, from their perspective, indicates widespread statistical significance seeking (i.e., p-hacking). This paper inspects that result for robustness. Theoretically, the p-value distribution should be a smooth, decreasing function, but the distribution of reported p-values shows systematically more p-values at .01, .02, .03, .04, and .05 than p-values reported to three decimal places, due to apparent tendencies to round p-values to two decimal places. Head et al. (2015) correctly argue that an aggregate p-value distribution could show a bump just below .05 when left-skew p-hacking occurs frequently. Moreover, the elimination of p = .045 and p = .05, as done in the original paper, is debatable. Given that eliminating p = .045 is a result of the need for symmetric bins, and that systematically more p-values are reported to two decimal places than to three, I did not exclude p = .045 and p = .05. I applied Fisher’s method to the range .04 < p < .05 and reanalyzed the data by adjusting the bin selection to .03875 < p ≤ .04 versus .04875 < p ≤ .05. Results of the reanalysis indicate that no evidence for left-skew p-hacking remains when we look at the entire range .04 < p < .05 or when we inspect the second decimal. Taking reporting tendencies into account when selecting the bins to compare is especially important because this dataset does not allow for the recalculation of the p-values. Moreover, inspecting the bins that include two-decimal reported p-values potentially increases sensitivity if strategic rounding down of p-values, as a form of p-hacking, is widespread. Given the far-reaching implications of supposed widespread p-hacking throughout the sciences (Head et al., 2015), it is important that these findings be robust to data analysis choices if the conclusion is to be considered unequivocal.
Although no evidence of widespread left-skew p-hacking is found in this reanalysis, this does not mean that there is no p-hacking at all. These results nuance the conclusion of Head et al. (2015), indicating that their results are not robust and that the evidence for widespread left-skew p-hacking is ambiguous at best.
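The bin comparison at the heart of the reanalysis can be sketched as a simple caliper test: count p-values in two equal-width bins and ask whether the bin just below .05 is overrepresented. This is an illustrative sketch, not the author's actual code; the function names and the choice of a one-sided binomial test are assumptions, with the bin edges taken from the abstract (.03875 < p ≤ .04 versus .04875 < p ≤ .05):

```python
import math

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), via the exact sum."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def caliper_test(pvals, lo_bin=(0.03875, 0.04), hi_bin=(0.04875, 0.05)):
    """Compare reported p-value counts in two equal-width bins.
    Left-skew p-hacking predicts an excess in the bin just below .05;
    under the null, a p-value landing in either bin is equally likely
    to fall in each, so the upper-bin count is Binomial(n, 0.5)."""
    n_lo = sum(lo_bin[0] < p <= lo_bin[1] for p in pvals)
    n_hi = sum(hi_bin[0] < p <= hi_bin[1] for p in pvals)
    n = n_lo + n_hi
    # one-sided test: probability of seeing >= n_hi values in the upper bin
    return n_lo, n_hi, binom_sf(n_hi, n) if n else 1.0
```

Widening the bins to include the two-decimal values .04 and .05, as the reanalysis does, changes which reported p-values are counted, which is exactly why the bump below .05 can appear or vanish with the binning choice.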