Concept: Musical tuning
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 4 years ago
The perception of the pitch of harmonic complex sounds is a crucial function of human audition, especially in music and speech processing. Whether the underlying mechanisms of pitch perception are unique to humans, however, is unknown. Based on estimates of frequency resolution at the level of the auditory periphery, psychoacoustic studies in humans have revealed several primary features of central pitch mechanisms. It has been shown that (i) pitch strength of a harmonic tone is dominated by resolved harmonics; (ii) pitch of resolved harmonics is sensitive to the quality of spectral harmonicity; and (iii) pitch of unresolved harmonics is sensitive to the salience of temporal envelope cues. Here we show, at the standard musical tuning fundamental frequency of 440 Hz, that the common marmoset (Callithrix jacchus), a New World monkey with a hearing range similar to that of humans, exhibits all of the primary features of central pitch mechanisms demonstrated in humans. Thus, marmosets and humans may share similar pitch perception mechanisms, suggesting that these mechanisms may have emerged early in primate evolution.
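The distinction between "resolved" and "unresolved" harmonics can be made concrete: a harmonic is conventionally treated as resolved when its spacing from its neighbors (equal to the fundamental, F0) exceeds the bandwidth of the peripheral auditory filter centered on it. A minimal sketch using the standard Glasberg and Moore ERB approximation (the 440 Hz fundamental comes from the abstract; the ERB formula and the one-ERB resolvability criterion are textbook psychoacoustics, not details of this study):

```python
# Sketch: which harmonics of a 440 Hz complex tone are "resolved"?
# Simplified criterion: harmonic n is resolved if the harmonic spacing
# (= F0) exceeds the equivalent rectangular bandwidth (ERB) of the
# auditory filter centered on that harmonic.

def erb(f_hz: float) -> float:
    """Glasberg & Moore (1990) ERB approximation, in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def resolved_harmonics(f0: float, n_max: int = 20) -> list[int]:
    return [n for n in range(1, n_max + 1) if f0 > erb(n * f0)]

print(resolved_harmonics(440.0))  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

Under this crude criterion, roughly the first eight harmonics of a 440 Hz tone are resolved; per point (i) above, pitch strength is dominated by these, while higher, unresolved harmonics contribute mainly through temporal envelope cues.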
Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language.
During the breeding season, male koalas produce ‘bellow’ vocalisations that are characterised by a continuous series of inhalation and exhalation sections, and an extremely low fundamental frequency (the main acoustic correlate of perceived pitch). Remarkably, the fundamental frequency (F0) of bellow inhalation sections averages 27.1 Hz (range: 9.8-61.5 Hz), which is 20 times lower than would be expected for an animal weighing 8 kg and more typical of an animal the size of an elephant (Supplemental figure S1A). Here, we demonstrate that koalas use a novel vocal organ to produce their unusually low-pitched mating calls.
Overtone-based pitch selection in hermit thrush song: Unexpected convergence with scale construction in human music
- Proceedings of the National Academy of Sciences of the United States of America
- Published about 5 years ago
Many human musical scales, including the diatonic major scale prevalent in Western music, are built partially or entirely from intervals (ratios between adjacent frequencies) corresponding to small-integer proportions drawn from the harmonic series. Scientists have long debated the extent to which principles of scale generation in human music are biologically or culturally determined. Data from animal “song” may provide new insights into this discussion. Here, by examining pitch relationships using both a simple linear regression model and a Bayesian generative model, we show that most songs of the hermit thrush (Catharus guttatus) favor simple frequency ratios derived from the harmonic (or overtone) series. Furthermore, we show that this frequency selection results not from physical constraints governing peripheral production mechanisms but from active selection at a central level. These data provide the most rigorous empirical evidence to date of a bird song that makes use of the same mathematical principles that underlie Western and many non-Western musical scales, demonstrating surprising convergence between human and animal “song cultures.” Although there is no evidence that the songs of most bird species follow the overtone series, our findings add to a small but growing body of research showing that a preference for small-integer frequency ratios is not unique to humans. These findings thus have important implications for current debates about the origins of human musical systems and may call for a reevaluation of existing theories of musical consonance based on specific human vocal characteristics.
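The "small-integer proportions drawn from the harmonic series" can be illustrated numerically. In this sketch (the ratios are standard just-intonation intervals used for illustration, not measurements from the thrush recordings), interval sizes are expressed in cents, the logarithmic unit conventional in music research:

```python
import math

def cents(ratio: float) -> float:
    """Interval size in cents (1200 cents = one octave)."""
    return 1200.0 * math.log2(ratio)

# Intervals between adjacent low harmonics of a single overtone series.
harmonic_ratios = {
    "octave (2:1)": 2 / 1,
    "perfect fifth (3:2)": 3 / 2,
    "perfect fourth (4:3)": 4 / 3,
    "major third (5:4)": 5 / 4,
}

for name, r in harmonic_ratios.items():
    print(f"{name}: {cents(r):7.2f} cents")

# The equal-tempered fifth (700 cents) differs from the harmonic-series
# fifth (3:2) by only ~2 cents.
print(round(cents(3 / 2) - 700, 2))  # ~1.96
```

The roughly 2-cent gap between the 3:2 fifth and its 700-cent equal-tempered counterpart shows why scales built "partially or entirely" from harmonic-series ratios and Western scale intervals are nearly, but not exactly, interchangeable.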
Young children learn multiple cognitive skills concurrently (e.g., language and music). Evidence is limited as to whether and how learning in one domain affects that in another during early development. Here we assessed whether exposure to a tone language benefits musical pitch processing among 3-5-year-old children. More specifically, we compared the pitch perception of Chinese children who spoke a tone language (i.e., Mandarin) with English-speaking American children. We found that Mandarin-speaking children were more advanced at pitch processing than English-speaking children but both groups performed similarly on a control music task (timbre discrimination). The findings support the Pitch Generalization Hypothesis that tone languages drive attention to pitch in nonlinguistic contexts, and suggest that language learning benefits aspects of music perception in early development. A video abstract of this article can be viewed at: https://youtu.be/UY0kpGpPNA0.
Humans speak to dogs using a special speech register called Pet-Directed Speech (PDS), which is very similar to the Infant-Directed Speech (IDS) used by parents when talking to young infants. These two types of speech share prosodic features that are distinct from typical Adult-Directed Speech (ADS): a high-pitched voice and increased pitch variation. So far, only one study has investigated the effect of PDS on dogs' attention. We video recorded 44 adult pet dogs and 19 puppies while they listened to the same phrase spoken either in ADS, PDS, or IDS. The phrases were previously recorded and were broadcast via a loudspeaker placed in front of the dog. The total gaze duration of the dogs toward the loudspeaker was used as a proxy for attention. Results show that adult dogs are significantly more attentive to PDS than to ADS, and that their attention increases significantly with the rise of the fundamental frequency of human speech. It is likely that the exaggerated prosody of PDS is used by owners as an ostensive cue that facilitates the effectiveness of their communication with dogs, and it may represent an evolutionarily determined adaptation that benefits the regulation and maintenance of their relationships.
Music and speech are often placed alongside one another as comparative cases. Their relative overlaps and dissociations have been well explored (e.g., Patel, 2008). But one key attribute distinguishing these two domains has often been overlooked: the greater preponderance of repetition in music in comparison to speech. Recent fMRI studies have shown that familiarity - achieved through repetition - is a critical component of emotional engagement with music (Pereira et al., 2011). If repetition is fundamental to emotional responses to music, and repetition is a key distinguishing feature between the domains of music and speech, then close examination of the phenomenon of repetition might help clarify the ways that music elicits emotion differently than speech.
The phenomenon of “remote synchronization” (RS), first observed in a star network of oscillators, involves synchronization of unconnected peripheral nodes through a hub that maintains independent dynamics. In the RS regime the central hub was thought to serve as a passive gate for information transfer between nodes. Here, we investigate the physical origin of this phenomenon. Surprisingly, we find that a hub node can drive remote synchronization of peripheral oscillators even in the presence of a repulsive mean field, thus actively governing network dynamics while remaining asynchronous. We study this novel phenomenon in complex networks endowed with multiple hub nodes, a ubiquitous feature of many real-world systems, including brain connectivity networks. We show that a change in the natural frequency of a single hub can alone reshape synchronization patterns across the entire network, switching it from direct to remote synchronization, or to hub-driven desynchronization. Hub-driven RS may provide a mechanism to account for the role of structural hubs in the organization of brain functional connectivity networks.
We tested non-musicians and musicians in an auditory psychophysical experiment to assess the effects of timbre manipulation on pitch-interval discrimination. Both groups were asked to indicate the larger of two presented intervals, composed of four sequentially presented pitches; the second or fourth stimulus within a trial was either a sinusoidal (or “pure”), flute, piano, or synthetic voice tone, while the remaining three stimuli were all pure tones. The interval-discrimination tasks were administered parametrically to assess performance across varying pitch distances between intervals (“interval-differences”). Irrespective of timbre, musicians displayed a steady improvement across interval-differences, while non-musicians only demonstrated enhanced interval discrimination at an interval-difference of 100 cents (one semitone in Western music). Surprisingly, the best discrimination performance across both groups was observed with pure-tone intervals, followed by intervals containing a piano tone. More specifically, we observed that: 1) timbre changes within a trial affect interval discrimination; and 2) the broad spectral characteristics of an instrumental timbre may influence perceived pitch or interval magnitude and make interval discrimination more difficult.
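The "interval-difference" manipulation can be made concrete with a toy trial generator. All specific values below are hypothetical (the abstract gives only the interval-difference conditions, e.g., 100 cents, and says nothing about the base frequencies used):

```python
# Sketch of one interval-discrimination trial: two two-tone intervals
# (four pitches total) whose sizes differ by `delta_cents`. The 440 Hz
# reference and 400-cent base interval are hypothetical; the abstract
# specifies only the interval-difference values (e.g., 100 cents).

def cents_to_ratio(c: float) -> float:
    return 2.0 ** (c / 1200.0)

def trial(base_hz: float, interval_cents: float, delta_cents: float):
    """Return the four tone frequencies: a reference interval, then a
    comparison interval that is `delta_cents` larger."""
    f1 = base_hz
    f2 = base_hz * cents_to_ratio(interval_cents)
    f3 = base_hz
    f4 = base_hz * cents_to_ratio(interval_cents + delta_cents)
    return (f1, f2, f3, f4)

f1, f2, f3, f4 = trial(440.0, 400.0, 100.0)
print(round(f2), round(f4))  # comparison interval is one semitone larger
```

At a 100-cent difference the comparison interval exceeds the reference by exactly one equal-tempered semitone, the point at which the abstract reports non-musicians' discrimination finally improved.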
Quartz-enhanced photoacoustic spectroscopy (QEPAS) is a sensitive gas detection technique which requires frequent calibration and has a long response time. Here we report beat frequency (BF) QEPAS that can be used for ultra-sensitive calibration-free trace-gas detection and fast spectral scan applications. The resonance frequency and Q-factor of the quartz tuning fork (QTF) as well as the trace-gas concentration can be obtained simultaneously by detecting the beat frequency signal generated when the transient response signal of the QTF is demodulated at its non-resonance frequency. Hence, BF-QEPAS avoids a calibration process and permits continuous monitoring of a targeted trace gas. Three semiconductor lasers were selected as the excitation source to verify the performance of the BF-QEPAS technique. The BF-QEPAS method is capable of measuring lower trace-gas concentration levels with shorter averaging times as compared to conventional PAS and QEPAS techniques and determines the electrical QTF parameters precisely.
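The beat-frequency principle itself is easy to demonstrate with synthetic data. The sketch below simulates a QTF ring-down and recovers the beat frequency |f0 - fd| by demodulating at a non-resonant frequency; all parameters are illustrative guesses (a typical 32.768 kHz watch-crystal tuning fork and a 100 Hz demodulation offset), not values from the paper:

```python
import cmath
import math

# Illustrative QTF parameters (not from the paper).
fs = 1_000_000            # sample rate, Hz
f0 = 32_768.0             # QTF resonance (typical 32.768 kHz fork)
q = 10_000.0              # quality factor
tau = q / (math.pi * f0)  # ring-down time constant, s
fd = f0 + 100.0           # demodulation frequency, 100 Hz off resonance

n = 50_000                # 50 ms of signal
t = [i / fs for i in range(n)]
ring_down = [math.exp(-ti / tau) * math.cos(2 * math.pi * f0 * ti) for ti in t]

# Complex demodulation at fd, then block averaging as a crude low-pass
# filter. The 198-sample window spans ~13 full cycles of the unwanted
# f0 + fd component, nearly nulling it while passing the beat.
mixed = [s * cmath.exp(-2j * math.pi * fd * ti) for s, ti in zip(ring_down, t)]
win = 198
baseband = [sum(mixed[i:i + win]) / win for i in range(0, n - win, win)]

# The baseband phasor rotates at f0 - fd; accumulate per-step phase
# increments (each step is well under pi, so no unwrapping is needed).
dphi = sum(cmath.phase(b2 / b1) for b1, b2 in zip(baseband, baseband[1:]))
dt = win / fs
beat_hz = abs(dphi) / (2 * math.pi * dt * (len(baseband) - 1))
print(round(beat_hz))     # ~100, i.e., |f0 - fd|
```

Because the beat frequency, decay rate, and amplitude of this single transient record carry the resonance frequency, the Q-factor, and the signal strength respectively, such a scheme can in principle read out the QTF parameters and the trace-gas concentration simultaneously, which is what makes BF-QEPAS calibration-free.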