In general, sad music is thought to cause us to experience sadness, which is considered an unpleasant emotion. This raises the question of why we listen to sad music at all if it evokes sadness. One possible answer is that we may actually feel positive emotions when we listen to sad music. This suggestion may appear to be counterintuitive; however, in this study, by dividing musical emotion into perceived emotion and felt emotion, we investigated this potential emotional response to music. We hypothesized that felt and perceived emotion may not actually coincide in this respect: sad music would be perceived as sad, but the experience of listening to sad music would evoke positive emotions. A total of 44 participants listened to musical excerpts and provided data on perceived and felt emotions by rating 62 descriptive words or phrases related to emotions on a scale that ranged from 0 (not at all) to 4 (very much). The results revealed that the sad music was perceived to be more tragic, whereas actually listening to the sad music led participants to feel more romantic, more blithe, and less tragic than the emotions they perceived in the same music. Thus, the participants experienced ambivalent emotions when they listened to the sad music. After considering the possible reasons that listeners were induced to experience emotional ambivalence by the sad music, we concluded that the formulation of a new model would be essential for examining the emotions induced by music and that this new model must entertain the possibility that what we experience when listening to music is vicarious emotion.
- Proceedings of the National Academy of Sciences of the United States of America
Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. Participants reliably selected the actual winners of live music competitions from silent video recordings, but neither musical novices nor professional musicians were able to identify the winners from sound recordings or from recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content.
Emotion is a primary motivator for creative behaviors, yet the interaction between the neural systems involved in creativity and those involved in emotion has not been studied. In the current study, we addressed this gap by using fMRI to examine piano improvisation in response to emotional cues. We showed twelve professional jazz pianists photographs of an actress representing a positive, negative or ambiguous emotion. Using a non-ferromagnetic thirty-five key keyboard, the pianists improvised music that they felt represented the emotion expressed in the photographs. Here we show that activity in prefrontal and other brain networks involved in creativity is highly modulated by emotional context. Furthermore, emotional intent directly modulated functional connectivity of limbic and paralimbic areas such as the amygdala and insula. These findings suggest that emotion and creativity are tightly linked, and that the neural mechanisms underlying creativity may depend on emotional state.
It is now standard practice, at Universities around the world, for academics to place pictures of themselves on a personal profile page maintained as part of their University's website. Here we investigated what these pictures reveal about the way academics see themselves. Since there is an asymmetry in the degree to which emotional information is conveyed by the face, with the left side being more expressive than the right, we hypothesised that academics in the sciences would seek to pose as non-emotional rationalists and put their right cheek forward, while academics in the arts would express their emotionality and pose with the left cheek forward. We sourced 5829 pictures of academics from their University websites and found that, consistent with the hypotheses, there was a significant difference in the direction of face posing between science academics and English academics, with English academics showing a more leftward orientation. Academics in the Fine Arts and Performing Arts, however, did not show the expected left-cheek-forward bias. We also analysed profile pictures of psychology academics and found a greater bias toward presenting the left cheek compared to science academics, which makes psychologists appear more like arts academics than scientists. These findings indicate that the personal website pictures of academics mirror the cultural perceptions of emotional expressiveness across disciplines.
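The comparison above reduces to a difference in leftward-pose proportions between disciplines. As an illustrative sketch only, with made-up counts rather than the paper's actual data, a two-proportion z-test of the kind that could support such a claim looks like this:

```python
import math

# Hypothetical counts for illustration (NOT the paper's numbers):
# leftward-cheek poses out of total profile photos per discipline.
n_sci, left_sci = 2000, 900   # science academics
n_eng, left_eng = 500, 300    # English academics

p_sci = left_sci / n_sci
p_eng = left_eng / n_eng

# Two-proportion z-test for a difference in leftward-pose rates
p_pool = (left_sci + left_eng) / (n_sci + n_eng)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_sci + 1 / n_eng))
z = (p_eng - p_sci) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p under the normal approximation

print(f"leftward rates: science {p_sci:.0%}, English {p_eng:.0%}; z = {z:.1f}, p = {p_value:.2g}")
```

With these invented counts the difference is large enough to be significant; the published analysis used its own counts and test statistics.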
Neuroscience is increasingly being called upon to address issues within the humanities. We discuss challenges that arise, relating to art and beauty, and provide ideas for a way forward.
- Proceedings of the National Academy of Sciences of the United States of America
The perception of the pitch of harmonic complex sounds is a crucial function of human audition, especially in music and speech processing. Whether the underlying mechanisms of pitch perception are unique to humans, however, is unknown. Based on estimates of frequency resolution at the level of the auditory periphery, psychoacoustic studies in humans have revealed several primary features of central pitch mechanisms. It has been shown that (i) pitch strength of a harmonic tone is dominated by resolved harmonics; (ii) pitch of resolved harmonics is sensitive to the quality of spectral harmonicity; and (iii) pitch of unresolved harmonics is sensitive to the salience of temporal envelope cues. Here we show, for a standard musical tuning fundamental frequency of 440 Hz, that the common marmoset (Callithrix jacchus), a New World monkey with a hearing range similar to that of humans, exhibits all of the primary features of central pitch mechanisms demonstrated in humans. Thus, marmosets and humans may share similar pitch perception mechanisms, suggesting that these mechanisms may have emerged early in primate evolution.
Musical performance is a skilled activity performed under intense pressure and is thus often a profound source of anxiety. In other contexts, anxiety and its concomitant symptoms of sympathetic nervous system arousal have been successfully ameliorated with heart rate variability biofeedback (HRV BF), a technique involving slow breathing that augments autonomic and emotional regulatory capacity. Objective: This randomised controlled study explored the impact of a single 30-minute session of HRV BF on anxiety in response to a highly stressful music performance.
Inspired by a theory of embodied music cognition, we investigate whether music can entrain the speed of beat-synchronized walking. If human walking is in synchrony with the beat and all musical stimuli have the same duration and the same tempo, then differences in walking speed can only be the result of music-induced differences in stride length, thus reflecting the vigor or physical strength of the movement. Participants walked in an open field in synchrony with the beat of 52 different musical stimuli, all having a tempo of 130 beats per minute and a meter of 4 beats. Walking speed was measured as the distance walked during a 30-second interval. The results reveal that some music is 'activating' in the sense that it increases the speed, and some music is 'relaxing' in the sense that it decreases the speed, compared to the spontaneous walking speed in response to metronome stimuli. Participants are consistent in their observation of qualitative differences between the relaxing and activating musical stimuli. Using regression analysis, it was possible to set up a predictive model using only four sonic features that explains 60% of the variance. The sonic features capture variation in loudness and pitch patterns at periods of three, four and six beats, suggesting that expressive patterns in music are responsible for the effect. The mechanism may be attributed to an attentional shift, a subliminal audio-motor entrainment mechanism, or an arousal effect, but further study is needed to distinguish among these possibilities. Overall, the study supports the hypothesis that recurrent patterns of fluctuation affecting the binary meter strength of the music may entrain the vigor of the movement. The study opens up new perspectives for understanding the relationship between entrainment and expressiveness, with the possibility to develop applications that can be used in domains such as sports and physical rehabilitation.
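The regression result above (four sonic features jointly explaining 60% of the variance in walking speed) can be sketched with ordinary least squares. The data here are synthetic placeholders, since the paper's feature values and speed measurements are not reproduced; only the shape of the analysis is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 52 stimuli x 4 sonic features
# (the paper's actual features are loudness/pitch patterns; these are placeholders).
n_stimuli, n_features = 52, 4
X = rng.normal(size=(n_stimuli, n_features))
true_w = np.array([0.8, -0.5, 0.3, 0.2])          # invented weights
speed = X @ true_w + rng.normal(scale=0.6, size=n_stimuli)  # walking-speed proxy

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(n_stimuli), X])
coef, *_ = np.linalg.lstsq(A, speed, rcond=None)

# R^2: share of variance in walking speed explained by the four features
pred = A @ coef
r2 = 1 - np.sum((speed - pred) ** 2) / np.sum((speed - speed.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

With real data, an R^2 of 0.60 would mean the four features account for 60% of the between-stimulus variance in walking speed, as reported.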
To further test and explore the hypothesis that synchronous oscillatory brain activity supports interpersonally coordinated behavior during dyadic music performance, we simultaneously recorded the electroencephalogram (EEG) from the brains of each of 12 guitar duets repeatedly playing a modified Rondo in two voices by C.G. Scheidler. Indicators of phase locking and of within-brain and between-brain phase coherence were obtained from complex time-frequency signals based on the Gabor transform. Analyses were restricted to the delta (1-4 Hz) and theta (4-8 Hz) frequency bands. We found that phase locking as well as within-brain and between-brain phase-coherence connection strengths were enhanced at frontal and central electrodes during periods that put particularly high demands on musical coordination. Phase locking was modulated in relation to the experimentally assigned musical roles of leader and follower, corroborating the functional significance of synchronous oscillations in dyadic music performance. Graph theory analyses revealed within-brain and hyperbrain networks with small-worldness properties that were enhanced during musical coordination periods, and community structures encompassing electrodes from both brains (hyperbrain modules). We conclude that brain mechanisms indexed by phase locking, phase coherence, and structural properties of within-brain and hyperbrain networks support interpersonal action coordination (IAC).
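The phase-locking analysis above rests on extracting an instantaneous phase from each band-limited channel and measuring how stable the phase difference is over time. A minimal sketch with synthetic theta-band "channels" (the sample rate, frequencies, and noise levels are assumptions; the study used Gabor-transform time-frequency signals rather than the FFT-based analytic signal used here):

```python
import numpy as np

FS = 250      # sample rate in Hz (assumed)
F_OSC = 6.0   # an oscillation frequency in the theta band (4-8 Hz)
rng = np.random.default_rng(42)
t = np.arange(FS * 10) / FS   # 10 s of synthetic signal

def analytic_signal(x):
    """Analytic signal via a one-sided FFT spectrum (Hilbert-transform equivalent)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def plv(x, y):
    """Phase-locking value: magnitude of the mean phase-difference vector (0..1)."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Two channels locked at a constant phase lag -> PLV near 1
x = np.sin(2 * np.pi * F_OSC * t) + 0.1 * rng.normal(size=len(t))
y = np.sin(2 * np.pi * F_OSC * t + 0.8) + 0.1 * rng.normal(size=len(t))
plv_coupled = plv(x, y)

# A channel whose phase drifts randomly -> PLV near 0
drift = np.cumsum(rng.normal(scale=0.3, size=len(t)))
z = np.sin(2 * np.pi * F_OSC * t + drift) + 0.1 * rng.normal(size=len(t))
plv_random = plv(x, z)

print(f"locked pair PLV = {plv_coupled:.2f}, drifting pair PLV = {plv_random:.2f}")
```

In the dyadic-performance setting, the same quantity is computed between electrodes within one brain (within-brain coherence) or between electrodes on the two players' brains (between-brain coherence).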
In all ages and countries, music and dance have constituted a central part of human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions using a systematically designed experimental paradigm, as a first step toward understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio-visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.