- Obesity reviews : an official journal of the International Association for the Study of Obesity
The objective of this study was to critically review the empirical evidence from all relevant disciplines regarding obesity stigma in order to (i) determine the implications of obesity stigma for healthcare providers and their patients with obesity and (ii) identify strategies to improve care for patients with obesity. We conducted a search of Medline and PsycINFO for all peer-reviewed papers presenting original empirical data relevant to stigma, bias, discrimination, prejudice and medical care. We then performed a narrative review of the existing empirical evidence regarding the impact of obesity stigma and weight bias on healthcare quality and outcomes. Many healthcare providers hold strong negative attitudes and stereotypes about people with obesity. There is considerable evidence that such attitudes influence person-perceptions, judgment, interpersonal behaviour and decision-making, and they may affect the care providers deliver. Experiences of, or expectations of, poor treatment may cause stress and avoidance of care, mistrust of doctors and poor adherence among patients with obesity. Stigma can reduce the quality of care for patients with obesity despite the best intentions of healthcare providers to provide high-quality care. Several intervention strategies may reduce the impact of obesity stigma on quality of care.
Hitting a baseball is often described as the most difficult thing to do in sports. A key aptitude of a good hitter is the ability to determine which pitch is coming. This rapid decision requires the batter to make a judgment in a fraction of a second, based largely on the trajectory and spin of the ball. When does this decision occur relative to the ball’s trajectory, and is it possible to identify neural correlates that represent how the decision evolves over a split second? Using single-trial analysis of electroencephalography (EEG), we address this question within the context of subjects discriminating three types of pitches (fastball, curveball, slider) based on pitch trajectories. We find clear neural signatures of pitch classification and, using signal detection theory, we identify the times of discrimination on a trial-to-trial basis. Based on these neural signatures we estimate neural discrimination distributions as a function of the distance the ball is from the plate. We find all three pitches yield unique distributions, namely the timing of the discriminating neural signatures relative to the position of the ball in its trajectory. For instance, fastballs are discriminated at the earliest points in their trajectory, relative to the two other pitches, which is consistent with the need for some constant time to generate and execute the motor plan for the swing (or inhibition of the swing). We also find incorrect discrimination of a pitch (errors) yields neural sources in Brodmann Area 10, which has been implicated in prospective memory, recall, and task difficulty. In summary, we show that single-trial analysis of EEG yields informative distributions of the relative point in a baseball’s trajectory when the batter makes a decision on which pitch is coming.
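The signal-detection step described in the abstract above can be illustrated with a minimal, self-contained sketch. In signal detection theory, the area under the ROC curve (AUC) quantifies how well per-trial scores from a discriminating component separate two classes: 0.5 is chance, 1.0 is perfect separation. The trial scores below are hypothetical, not values from the study.

```python
# Minimal sketch (hypothetical data): the ROC area measures how well a
# single-trial discriminating component separates two pitch classes.

def roc_auc(signal_scores, noise_scores):
    """AUC via the rank-sum identity: P(signal score > noise score),
    counting ties as half a win."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(signal_scores) * len(noise_scores))

# Hypothetical per-trial discriminator outputs for two pitch types
fastball_trials = [1.8, 2.1, 1.5, 2.4, 1.9]   # assumed values
curveball_trials = [0.9, 1.2, 1.6, 0.7, 1.1]  # assumed values

print(roc_auc(fastball_trials, curveball_trials))  # well above chance
```

An AUC near 1.0 for these (made-up) trials would indicate that the component reliably discriminates fastballs from curveballs on a trial-to-trial basis, which is the sense of "discrimination" used in the abstract.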
BACKGROUND: Since the late 1980s, genetic discrimination has remained one of the major concerns associated with genetic research and clinical genetics. Europe has adopted a plethora of laws and policies, both at the regional and national levels, to prevent insurers from having access to genetic information for underwriting. Legislators from the United States and the United Kingdom have also felt compelled to adopt protective measures specifically addressing genetics and insurance. But does the available evidence really confirm the popular apprehension about genetic discrimination and the subsequent genetic exceptionalism? METHODS: This paper presents the results of a systematic, critical review of over 20 years of genetic discrimination studies in the context of life insurance. RESULTS: The available data clearly document the existence of individual cases of genetic discrimination. The significance of this initial finding is, however, greatly diminished by four observations. First, the methodology used in most of the studies is not sufficiently robust to clearly establish either the prevalence or the impact of discriminatory practices. Second, the current body of evidence was mostly developed around a small number of ‘classic’ genetic conditions. Third, the heterogeneity and small scope of most of the studies prevents formal statistical analysis of the aggregate results. Fourth, the small number of reported genetic discrimination cases in some studies could indicate that these incidents took place due to occasional errors, rather than the voluntary or planned choices of insurers. CONCLUSION: Important methodological limitations and inconsistencies among the studies considered make it extremely difficult, at the moment, to justify policy action taken on the basis of evidence alone.
Nonetheless, other empirical and theoretical factors have emerged (for example, the prevalence and impact of the fear of genetic discrimination among patients and research participants, the importance of genetic information for the commercial viability of the private life insurance industry, and the need to develop more equitable schemes of access to life insurance) that should be considered along with the available evidence of genetic discrimination for a more holistic view of the debate.
The aim of this research was to examine conditions that modify feminists' support for women as targets of gender discrimination. In an experimental study we tested the hypothesis that a threat to feminist identity would lead to greater differentiation between feminists and conservative women as victims of discrimination and, in turn, a decrease in support for non-feminist victims. The study was conducted among 96 young Polish female professionals and graduate students from Gender Studies programs in Warsaw who self-identified as feminists (M age = 22.23). Participants were presented with a case of workplace gender discrimination. Threat to feminist identity and worldview of the discrimination victim (feminist vs. conservative) were varied between research conditions. Results indicate that identity threat caused feminists to show conditional reactions to discrimination. Under identity threat, feminists perceived the situation as less discriminatory when the target held conservative views on gender relations than when the target was presented as feminist. This effect was not observed under conditions of no threat. Moreover, feminists showed an increase in compassion for the victim when she was portrayed as a feminist compared to when she was portrayed as conservative. Implications for the feminist movement are discussed.
The mechanosensing ability of lymphocytes regulates their activation in response to antigen stimulation, but the underlying mechanism remains unexplored. Here, we report that B cell mechanosensing-governed activation requires BCR signaling molecules. PMA-induced activation of PKCβ can bypass the Btk and PLC-γ2 signaling molecules that are usually required for B cells to discriminate substrate stiffness. Instead, PKCβ-dependent activation of FAK is required, leading to FAK-mediated potentiation of B cell spreading and adhesion responses. FAK inactivation or deficiency impaired B cell discrimination of substrate stiffness. Conversely, adhesion molecules greatly enhanced this capability of B cells. Lastly, B cells derived from rheumatoid arthritis (RA) patients exhibited an altered BCR response to substrate stiffness in comparison with healthy controls. These results provide a molecular explanation of how the initiation of B cell activation discriminates substrate stiffness, in a manner dependent on PKCβ-mediated FAK activation.
The real-world experiences of young athletes follow a non-linear and dynamic trajectory, and there is growing recognition that facing and overcoming a degree of challenge is desirable for aspiring elites and, as such, should be recognized and employed. However, there are some misunderstandings of this “talent needs trauma” perspective, with some research focusing excessively or incorrectly on the incidence of life and sport challenge as a feature of effective talent development. The objective of the study was to examine what factors associated with such “trauma” experiences may or may not discriminate between high, medium, and low achievers in sport, classified as super-champions, champions, or almosts. A series of retrospective interviews was conducted with matched triads (i.e., super-champions, champions, or almosts) of performers (N = 54) from different sports. Data collection was organized in three phases. In the first phase, a graphic time line of each performer’s career was developed. The second phase explored the specific issues highlighted by each participant in a chronological sequence. The third phase was a retrospective reflection on “traumatic” motivators, coach/significant other inputs and psychological challenges experienced and skills employed. Data suggested qualitative differences between categories of performers, relating to several perceptual and experiential features of their development. No evidence was found for the necessity of major trauma as a feature of development. There was a lack of discrimination across categories of performers associated with the incidence of trauma and, more particularly, life or non-sport trauma. These findings suggest that differences between levels of adult achievement relate more to what performers bring to the challenges than what they experience. A periodized and progressive set of challenges, preceded and associated with specific skill development, would seem to offer the best pathway to success for the majority.
Discrimination is a common experience for Blacks across various developmental periods. Although much is known about the effect of discrimination on suicidal ideation of adults, less is known about the same association in Black youth.
Language and face processing develop in similar ways during the first year of life. Early in the first year of life, infants demonstrate broad abilities for discriminating among faces and speech. These discrimination abilities then become tuned to frequently experienced groups of people or languages. This process of perceptual development occurs between approximately 6 and 12 months of age and is largely shaped by experience. However, the mechanisms underlying perceptual development during this time, and whether they are shared across domains, remain largely unknown. Here, we highlight research findings across domains and propose a top-down/bottom-up processing approach as a guide for future research. It is hypothesized that perceptual narrowing and tuning in development is the result of a shift from primarily bottom-up processing to a combination of bottom-up and top-down influences. In addition, we propose word learning as an important top-down factor that shapes tuning in both the speech and face domains, leading to similar observed developmental trajectories across modalities. Importantly, we suggest that perceptual narrowing/tuning is the result of multiple interacting factors and not explained by the development of a single mechanism.
Telomere length has generated substantial interest as a potential predictor of aging-related diseases and mortality. Some studies have reported significant associations, but few have tested its ability to discriminate between decedents and survivors compared with a broad range of well-established predictors that include both biomarkers and commonly collected self-reported data. Our aim here was to quantify the prognostic value of leukocyte telomere length relative to age, sex, and 19 other variables for predicting five-year mortality among older persons in three countries. We used data from nationally representative surveys in Costa Rica (N = 923, aged 61+), Taiwan (N = 976, aged 54+), and the U.S. (N = 2672, aged 60+). Our study used a prospective cohort design with all-cause mortality during five years post-exam as the outcome. We fit Cox proportional hazards models separately by country, and assessed the discriminatory ability of each predictor. Age was, by far, the single best predictor of all-cause mortality, whereas leukocyte telomere length was only somewhat better than random chance in terms of discriminating between decedents and survivors. After adjustment for age and sex, telomere length ranked between 15th and 17th (out of 20), and its incremental contribution was small; nine self-reported variables (e.g., mobility, global self-assessed health status, limitations with activities of daily living, smoking status), a cognitive assessment, and three biological markers (C-reactive protein, serum creatinine, and glycosylated hemoglobin) were more powerful predictors of mortality in all three countries. Results were similar for cause-specific models (i.e., mortality from cardiovascular disease, cancer, and all other causes combined). Leukocyte telomere length had a statistically discernible, but weak, association with mortality, but it did not predict survival as well as age or many other self-reported variables.
Although telomere length may eventually help scientists understand aging, more powerful and more easily obtained tools are available for predicting survival.
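The "discriminatory ability" assessed in the abstract above is conventionally quantified in survival analysis with Harrell's concordance index (C-index): the fraction of comparable subject pairs in which the subject with the higher predicted risk dies first. A value of 0.5 is chance-level discrimination, 1.0 is perfect risk ranking. The following is a minimal sketch with a small hypothetical cohort; the function and data are illustrative, not from the study.

```python
# Minimal sketch (hypothetical data): Harrell's C-index measures a
# predictor's ability to discriminate decedents from survivors.

def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs where the higher-risk subject dies
    first; ties in risk count as half concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i is observed to die
            # before subject j's follow-up time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: follow-up time (years), death indicator, predictor
times = [1.0, 2.5, 3.0, 4.2, 5.0]
events = [1, 1, 0, 1, 0]        # 1 = died during follow-up
age = [80, 75, 60, 50, 55]      # a candidate predictor, as age is in the study

print(concordance_index(times, events, age))
```

Comparing each predictor's C-index against that of age, as the study does, shows why a weak predictor such as telomere length can be "only somewhat better than random chance" (C near 0.5) even when its association with mortality is statistically discernible.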
Discrimination of and memory for others' generous and selfish behaviors could be adaptive abilities in social animals. Dogs have seemingly expressed such skills in both direct and indirect interactions with humans. However, recent studies suggest that their capacity may rely on cues other than people’s individual characteristics, such as the place where the person stands. Thus, the conditions under which dogs recognize individual humans when solving cooperative tasks remain unclear. With the aim of contributing to this problem, we made dogs interact with two human experimenters, one generous (pointed towards the food, gave ostensive cues, and allowed the dog to eat it) and the other selfish (pointed towards the food, but ate it before the dog could have it). Then subjects could choose between them (studies 1-3). In study 1, dogs took several training trials to learn the discrimination between the generous and the selfish experimenters when both were of the same gender. In study 2, the discrimination was learned faster when the experimenters were of different gender, as evidenced both by dogs' latencies to approach the bowl in training trials and by their choices in preference tests. Nevertheless, dogs did not get confused by gender when the experimenters were changed between the training and the choice phase in study 3. We conclude that dogs spontaneously used human gender as a cue to discriminate between more and less cooperative experimenters. They also relied on some other personal feature which allowed them to avoid being confused by gender when demonstrators were changed. We discuss these results in terms of dogs' ability to recognize individuals and the potential advantage of this skill for their lives in human environments.