Concept: Vocal music
Choir singing is known to promote wellbeing. One reason for this may be that singing demands slower-than-normal respiration, which may in turn affect heart activity. The coupling of heart rate variability (HRV) to respiration is called respiratory sinus arrhythmia (RSA). This coupling has a soothing effect, both subjectively and biologically, and it is beneficial for cardiovascular function. RSA is more marked during slow-paced breathing and at low respiration rates (0.1 Hz and below). In this study, we investigate how singing, which is a form of guided breathing, affects HRV and RSA. The study comprises a group of healthy 18-year-olds of mixed gender. The subjects are asked to: (1) hum a single tone and breathe whenever they need to; (2) sing a hymn with free, unguided breathing; and (3) sing a slow mantra and breathe solely between phrases. Heart rate (HR) is measured continuously during the study. The study design makes it possible to compare the three levels of song structure above. In a separate case study, we examine five individuals performing singing tasks (1)-(3). We collect data with more advanced equipment, simultaneously recording HR, respiration, skin conductance and finger temperature. We show how song structure, respiration and HR are connected. Unison singing of regular song structures makes the hearts of the singers accelerate and decelerate simultaneously. Implications for wellbeing and health are discussed, as well as the question of how this inner entrainment may affect perception and behavior.
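The coupling described above can be quantified in several ways; a minimal sketch (not the study's actual analysis pipeline; the function name and bandwidth are illustrative assumptions) is to measure how much of the heart-rate signal's spectral power falls in a narrow band around the respiration frequency of ~0.1 Hz:

```python
import numpy as np

def rsa_power(heart_rate, fs, f_resp=0.1, bw=0.03):
    """Share of heart-rate spectral power within +/- bw Hz of the
    respiration frequency, as a rough index of RSA strength.
    heart_rate: evenly sampled instantaneous HR (bpm); fs: sample rate (Hz)."""
    hr = heart_rate - np.mean(heart_rate)          # remove DC offset
    freqs = np.fft.rfftfreq(len(hr), d=1 / fs)
    power = np.abs(np.fft.rfft(hr)) ** 2
    band = (freqs >= f_resp - bw) & (freqs <= f_resp + bw)
    total = power[1:].sum()                        # exclude the DC bin
    return power[band].sum() / total if total > 0 else 0.0

# Synthetic demo: a 70 bpm baseline modulated by 0.1 Hz (slow) breathing
fs = 4.0                                           # HR series resampled at 4 Hz
t = np.arange(0, 300, 1 / fs)                      # five minutes
rng = np.random.default_rng(0)
hr = 70 + 5 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.5, t.size)
print(round(rsa_power(hr, fs), 2))                 # close to 1: strong coupling
```

A real analysis would work from detected R-R intervals resampled to an even grid; the point here is only that breathing-locked HR modulation concentrates power at the breathing frequency.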
Quantitative biomechanical models can identify control parameters that are used during movements, and movement parameters that are encoded by premotor neurons. We fit a mathematical dynamical systems model including subsyringeal pressure, syringeal biomechanics and upper-vocal-tract filtering to the songs of zebra finches. This reduces the dimensionality of singing dynamics, described as trajectories (motor ‘gestures’) in a space of syringeal pressure and tension. Here we assess model performance by characterizing the auditory response ‘replay’ of song premotor HVC neurons to the presentation of song variants in sleeping birds, and by examining HVC activity in singing birds. HVC projection neurons were excited and interneurons were suppressed within a few milliseconds of the extreme time points of the gesture trajectories. Thus, HVC precisely encodes vocal motor output through activity at the times of extreme points of movement trajectories. We propose that the sequential activity of HVC neurons is used as a ‘forward’ model, representing the sequence of gestures in song to make predictions about expected behaviour and to evaluate feedback.
- Journal of Voice: Official Journal of the Voice Foundation
- Published almost 5 years ago
OBJECTIVE: Vocal accuracy of a sung performance can be evaluated by two methods: acoustic analyses and subjective judgments. Acoustic analyses have been presented as a more reliable solution, but both methods are still used for the evaluation of singing voice accuracy. This article presents a first direct comparison of these methods. METHODS: One hundred sixty-six untrained singers were asked to sing the popular song “Happy Birthday.” These recordings constituted the database analyzed. Acoustic analyses were performed to quantify the pitch interval deviation, number of contour errors, and number of tonality modulations for each recording. Additionally, 18 experts in singing voice or music rated the global pitch accuracy of these performances. RESULTS: A high correlation occurred between acoustic measurements and subjective ratings. The total model of acoustic analyses explained 81% of the variance of the judges' scores. Their rating was influenced by both tonality modulations and pitch interval deviation. CONCLUSIONS: This study highlights the congruence between objective and subjective measurements of vocal accuracy in this first direct comparison. Our results confirm the relevance of the pitch interval deviation criterion in vocal accuracy assessment. Furthermore, the number of tonality modulations is also a salient criterion in perceptive rating and should be taken into account in studies using acoustic analyses.
Although dolphins (Tursiops truncatus) have been trained to match numbers and durations of human vocal bursts and reported to spontaneously match computer-generated whistles, spontaneous human voice mimicry has not previously been demonstrated. The first to study white whale (Delphinapterus leucas) sounds in the wild, Schevill and Lawrence wrote that “occasionally the calls would suggest a crowd of children shouting in the distance”. Fish and Mowbray described sound types and reviewed past descriptions of sounds from this vociferous species. At Vancouver Aquarium, Canada, keepers suggested that a white whale, about 15 years of age, uttered his name “Lagosi”. Other utterances were not perceptible, being described as “garbled human voice, or Russian, or similar to Chinese” by R.L. Eaton in a self-published account in 1979. However, hitherto no acoustic recordings have shown how such sounds emulate speech and deviate from the usual calls of the species. We report here sound recordings and analysis which demonstrate spontaneous mimicry of the human voice, presumably a result of vocal learning, by a white whale.
Music and dance are two remarkable human characteristics that are closely related. Communication through integrated vocal and motional signals is also common in the courtship displays of birds. The contribution of songbird studies to our understanding of vocal learning has already shed some light on the cognitive underpinnings of musical ability. Moreover, recent pioneering research has begun to show how animals can synchronize their behaviors with external stimuli, like metronome beats. However, few studies have applied such perspectives to unraveling how animals can integrate multimodal communicative signals that have natural functions. Additionally, studies have rarely asked how well these behaviors are learned. With this in mind, here we cast a spotlight on an unusual animal behavior: non-vocal sound production associated with singing in the Java sparrow (Lonchura oryzivora), a songbird. We show that male Java sparrows coordinate their bill-click sounds with the syntax of their song-note sequences, similar to percussionists. Analysis showed that they produced clicks frequently toward the beginning of songs and before/after specific song notes. We also show that bill-clicking patterns are similar between social fathers and their sons, suggesting that these behaviors might be learned from models or linked to learning-based vocalizations. Individuals untutored by conspecifics also exhibited stereotypical bill-clicking patterns in relation to song-note sequence, indicating that while the production of bill clicking itself is intrinsic, its syncopation appears to develop with songs. This paints an intriguing picture in which non-vocal sounds are integrated with vocal courtship signals in a songbird, a model that we expect will contribute to the further understanding of multimodal communication.
Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech vs. hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children’s song spoken vs. sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children’s song vs. a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.
Due to a lack of empirical data, the current understanding of the laryngeal mechanics in the passaggio regions (i.e., the fundamental frequency ranges where vocal registration events usually occur) of the female singing voice is still limited.
- Proceedings of the National Academy of Sciences of the United States of America
- Published over 1 year ago
Social processes profoundly influence speech and language acquisition. Despite the importance of social influences, little is known about how social interactions modulate vocal learning. Like humans, songbirds learn their vocalizations during development, and they provide an excellent opportunity to reveal mechanisms of social influences on vocal learning. Using yoked experimental designs, we demonstrate that social interactions with adult tutors for as little as 1 d significantly enhanced vocal learning. Social influences on attention to song seemed central to the social enhancement of learning because socially tutored birds were more attentive to the tutor’s songs than passively tutored birds, and because variation in attentiveness and in the social modulation of attention significantly predicted variation in vocal learning. Attention to song was influenced by both the nature and amount of tutor song: Pupils paid more attention to songs that tutors directed at them and to tutors that produced fewer songs. Tutors altered their song structure when directing songs at pupils in a manner that resembled how humans alter their vocalizations when speaking to infants, that was distinct from how tutors changed their songs when singing to females, and that could influence attention and learning. Furthermore, social interactions that rapidly enhanced learning increased the activity of noradrenergic and dopaminergic midbrain neurons. These data highlight striking parallels between humans and songbirds in the social modulation of vocal learning and suggest that social influences on attention and midbrain circuitry could represent shared mechanisms underlying the social modulation of vocal learning.
Vocal imitation involves incorporating instructive auditory information into relevant motor circuits through processes that are poorly understood. In zebra finches, we found that exposure to a tutor’s song drives spiking activity within premotor neurons in the juvenile, whereas inhibition suppresses such responses upon learning in adulthood. We measured inhibitory currents evoked by the tutor song throughout development while simultaneously quantifying each bird’s learning trajectory. Surprisingly, we found that the maturation of synaptic inhibition onto premotor neurons is correlated with learning but not age. We used synthetic tutoring to demonstrate that inhibition is selective for specific song elements that have already been learned and not those still in refinement. Our results suggest that structured inhibition plays a crucial role during song acquisition, enabling a piece-by-piece mastery of complex tasks.
Most mammals can accomplish acoustic recognition of other individuals by means of “voice cues,” whereby characteristics of the vocal tract render vocalizations of an individual uniquely identifiable. However, sound production in dolphins takes place in gas-filled nasal sacs that are affected by pressure changes, potentially resulting in a lack of reliable voice cues. It is well known that bottlenose dolphins learn to produce individually distinctive signature whistles for individual recognition, but it is not known whether they may also use voice cues. To investigate this question, we played back non-signature whistles to wild dolphins during brief capture-release events in Sarasota Bay, Florida. We hypothesized that non-signature whistles, which have varied contours that can be shared among individuals, would be recognizable to dolphins only if they contained voice cues. Following established methodology used in two previous sets of playback experiments, we found that dolphins did not respond differentially to non-signature whistles of close relatives versus known unrelated individuals. In contrast, our previous studies showed that in an identical context, dolphins reacted strongly to hearing the signature whistle or even a synthetic version of the signature whistle of a close relative. Thus, we conclude that dolphins likely do not use voice cues to identify individuals. The low reliability of voice cues and the need for individual recognition were likely strong selective forces in the evolution of vocal learning in dolphins.