Inspired by a theory of embodied music cognition, we investigate whether music can entrain the speed of beat-synchronized walking. If human walking is in synchrony with the beat and all musical stimuli have the same duration and tempo, then differences in walking speed can only result from music-induced differences in stride length, thus reflecting the vigor or physical strength of the movement. Participants walked in an open field in synchrony with the beat of 52 different musical stimuli, all with a tempo of 130 beats per minute and a four-beat meter. Walking speed was measured as the distance walked during a 30-second interval. The results reveal that some music is ‘activating’ in the sense that it increases walking speed, and some music is ‘relaxing’ in the sense that it decreases it, compared with the spontaneous walking speed in response to metronome stimuli. Participants were consistent in their judgments of the qualitative differences between the relaxing and activating musical stimuli. Using regression analysis, it was possible to build a predictive model from only four sonic features that explains 60% of the variance. The sonic features capture variation in loudness and pitch patterns at periods of three, four, and six beats, suggesting that expressive patterns in the music are responsible for the effect. The mechanism may be attributed to an attentional shift, a subliminal audio-motor entrainment mechanism, or an arousal effect, but further study is needed to disentangle these possibilities. Overall, the study supports the hypothesis that recurrent patterns of fluctuation affecting the binary meter strength of the music may entrain the vigor of the movement. The study opens up new perspectives for understanding the relationship between entrainment and expressiveness, with the possibility of developing applications in domains such as sports and physical rehabilitation.
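The predictive model described above can be illustrated with a minimal ordinary-least-squares sketch. The data here are entirely synthetic (52 stimuli with four made-up sonic features and simulated walking speeds); the feature values, coefficients, and noise level are assumptions for illustration, not the study's actual data.

```python
import numpy as np

# Hypothetical data: rows are musical stimuli, columns are four sonic
# features (stand-ins for loudness/pitch-pattern descriptors); the target
# is the mean walking speed (m/s) observed for each stimulus.
rng = np.random.default_rng(0)
X = rng.normal(size=(52, 4))                     # 52 stimuli, 4 features
true_w = np.array([0.10, -0.05, 0.08, -0.03])    # assumed effect sizes
y = 1.4 + X @ true_w + rng.normal(scale=0.05, size=52)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Proportion of variance explained (R^2); the paper reports ~60% for
# its four-feature model.
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()
```

With an intercept in the design matrix, the residuals have zero mean, so the variance ratio above is the standard R².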
Across all ages and cultures, music and dance have been a central part of human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic abilities with humans. We therefore investigated rhythmic synchronization in budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm, as a first step toward understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio-visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed inherently inclined to tap at fast tempos, which operate on a time scale similar to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning may have contributed to their performance, which resembled that of humans.
It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a select group of bird species and, somewhat surprisingly, not in more closely related species such as nonhuman primates. This observation supports the vocal learning hypothesis, which holds that rhythmic entrainment is a by-product of the vocal learning mechanisms shared by several bird and mammal species, including humans, but only weakly developed, or missing entirely, in nonhuman primates. To test this hypothesis we measured auditory event-related potentials (ERPs) in two rhesus monkeys (Macaca mulatta), probing a well-documented component in humans, the mismatch negativity (MMN), to study rhythmic expectation. We demonstrate for the first time in rhesus monkeys that, in response to infrequent pitch deviants presented in a continuous sound stream using an oddball paradigm, a comparable ERP component can be detected, with negative deflections at early latencies (Experiment 1). Subsequently we tested whether rhesus monkeys can detect gaps (omissions at random positions in the sound stream; Experiment 2) and, using more complex stimuli, also the beat (omissions at the first position of a musical unit, i.e. the ‘downbeat’; Experiment 3). In contrast to what has been shown in human adults and newborns (using identical stimuli and an identical experimental paradigm), the results suggest that rhesus monkeys are not able to detect the beat in music. These findings support the hypothesis that beat induction (the cognitive mechanism that supports the perception of a regular pulse from a varying rhythm) is species-specific and absent in nonhuman primates. In addition, the findings support the auditory timing dissociation hypothesis, with rhesus monkeys being sensitive to rhythmic grouping (detecting the start of a rhythmic group) but not to the induced beat (detecting a regularity from a varying rhythm).
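The MMN analysis used in such oddball paradigms can be sketched as a deviant-minus-standard difference of averaged trials. The epochs below are simulated (200 standards, 40 deviants, an injected negative deflection peaking near sample 60); the trial counts, noise level, and latency window are illustrative assumptions, not the study's recordings.

```python
import numpy as np

# Toy single-channel epochs: 100 samples each (assume 1 kHz sampling,
# so sample index ~ ms post-stimulus).
rng = np.random.default_rng(1)
t = np.arange(100)
standard = rng.normal(scale=0.5, size=(200, 100))
# Deviants carry an extra negative deflection peaking ~ sample 60,
# a stand-in for the MMN (amplitudes are arbitrary units).
mmn_shape = -2.0 * np.exp(-((t - 60) ** 2) / (2 * 10 ** 2))
deviant = rng.normal(scale=0.5, size=(40, 100)) + mmn_shape

# The MMN is the deviant-minus-standard difference of the trial averages.
difference_wave = deviant.mean(axis=0) - standard.mean(axis=0)

# Peak negativity within an early latency window (samples 40-80 here).
window = slice(40, 80)
peak_amp = float(difference_wave[window].min())
peak_idx = 40 + int(np.argmin(difference_wave[window]))
```

Averaging over trials suppresses the noise by roughly the square root of the trial count, which is what lets the negative deflection emerge in the difference wave.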
Rhythms, or patterns in time, play a vital role in both speech and music. Proficiency in a number of rhythm skills has been linked to language ability, suggesting that certain rhythmic processes in music and language rely on overlapping resources. However, a lack of understanding about how rhythm skills relate to each other has impeded progress in understanding how language relies on rhythm processing. In particular, it is unknown whether all rhythm skills are linked together, forming a single broad rhythmic competence, or whether there are multiple dissociable rhythm skills. We hypothesized that beat tapping and rhythm memory/sequencing form two separate clusters of rhythm skills. This hypothesis was tested with a battery of two beat tapping and two rhythm memory tests. Here we show that tapping to a metronome and the ability to adjust to a changing tempo while tapping to a metronome are related skills. The ability to remember rhythms and to drum along to repeating rhythmic sequences are also related. However, we found no relationship between beat tapping skills and rhythm memory skills. Thus, beat tapping and rhythm memory are dissociable rhythmic aptitudes. This discovery may inform future research disambiguating how distinct rhythm competencies track with specific language functions.
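Beat-tapping skill in tasks like these is typically scored from tap-to-beat asynchronies. Below is a minimal sketch of two common summary measures, mean signed asynchrony and circular vector strength; the function name and the synthetic tap data (a constant 30 ms anticipation of a 130 BPM metronome) are illustrative assumptions, not the battery's actual scoring procedure.

```python
import numpy as np

def tapping_stats(tap_times, beat_times):
    """Mean signed asynchrony (s) and circular vector strength of taps
    relative to the nearest metronome beats (1.0 = perfectly consistent)."""
    tap_times = np.asarray(tap_times, dtype=float)
    beat_times = np.asarray(beat_times, dtype=float)
    # Pair each tap with its nearest beat.
    nearest = beat_times[np.argmin(np.abs(tap_times[:, None] - beat_times), axis=1)]
    asynchrony = tap_times - nearest
    # Map asynchronies to phase angles on the beat cycle.
    period = np.median(np.diff(beat_times))
    phases = 2 * np.pi * asynchrony / period
    vector_strength = float(np.abs(np.mean(np.exp(1j * phases))))
    return float(asynchrony.mean()), vector_strength

# Metronome at 130 BPM (period ~0.462 s); taps slightly anticipate the
# beat, as human tappers commonly do.
beats = np.arange(20) * 60.0 / 130.0
taps = beats - 0.03  # constant 30 ms negative mean asynchrony
mean_async, strength = tapping_stats(taps, beats)
```

Mean asynchrony captures systematic anticipation or lag, while vector strength captures trial-to-trial consistency; the two can dissociate, which is why both are usually reported.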
We assessed the calming effect of doppel, a wearable device that delivers heartbeat-like tactile stimulation to the wrist. We tested whether using doppel would reduce physiological arousal and self-reported state anxiety during the anticipation of public speaking, a validated experimental task known to induce anxiety. Two groups of participants were tested in a single-blind design. Both groups wore the device on their wrist during the anticipation period and were given the cover story that the device was measuring blood pressure. For one group only, the device was turned on and delivered a slow, heartbeat-like vibration. Participants in the active condition showed smaller increases in skin conductance responses relative to baseline and reported lower anxiety levels than the control group. The presence of a slow rhythm, here instantiated as an auxiliary slow heartbeat delivered through doppel, thus had a significant calming effect on physiological arousal and subjective experience during a socially stressful situation. This finding is discussed in relation to past research on responses and entrainment to rhythms and their effects on arousal and mood.
Disruption of the circadian clock, which directs rhythmic expression of numerous output genes, accelerates aging. To investigate how the circadian system protects aging organisms, here we compare circadian transcriptomes in heads of young and old Drosophila melanogaster. The core clock and most output genes remained robustly rhythmic in old flies, while others lost rhythmicity with age, resulting in constitutive over- or under-expression. Unexpectedly, we identify a subset of genes that adopted increased or de novo rhythmicity during aging, enriched for stress-response functions. These genes, termed late-life cyclers, were also rhythmically induced in young flies by constant exposure to exogenous oxidative stress, and this upregulation is CLOCK-dependent. We also identify age-onset rhythmicity in several putative primary piRNA transcripts overlapping antisense transposons. Our results suggest that, as organisms age, the circadian system shifts greater regulatory priority to the mitigation of accumulating cellular stress.
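Rhythmicity in expression time courses like these is often quantified with a cosinor fit: regressing expression onto a 24-hour cosine and sine and reporting the mesor (rhythm-adjusted mean), amplitude, and acrophase (peak time). The sketch below uses a noise-free synthetic time course; the function and sampling scheme are an illustrative assumption, not the paper's actual pipeline (which may use methods such as JTK_CYCLE).

```python
import numpy as np

def cosinor_fit(times_h, values, period_h=24.0):
    """Least-squares cosinor fit: values ~ mesor + a*cos(wt) + b*sin(wt).

    Returns (mesor, amplitude, acrophase_h), where acrophase_h is the
    time of peak expression within the cycle."""
    t = np.asarray(times_h, dtype=float)
    w = 2 * np.pi / period_h
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (mesor, a, b), *_ = np.linalg.lstsq(A, np.asarray(values, float), rcond=None)
    amplitude = float(np.hypot(a, b))
    acrophase_h = float((np.arctan2(b, a) / w) % period_h)
    return float(mesor), amplitude, acrophase_h

# Synthetic expression sampled every 4 h over 2 days, peaking at hour 6.
t = np.arange(0, 48, 4)
expr = 10 + 3 * np.cos(2 * np.pi * (t - 6) / 24)
mesor, amp, phase = cosinor_fit(t, expr)
```

Because the cosine-plus-sine parameterization is linear in its coefficients, the fit reduces to ordinary least squares even though the fitted rhythm has a nonlinear amplitude/phase form.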
Diagnosing atrial fibrillation (AF) before ischemic stroke occurs is a priority for stroke prevention in AF. Smartphone camera-based photoplethysmographic (PPG) pulse waveform measurement discriminates between different heart rhythms, but its ability to diagnose AF in real-world situations has not been adequately investigated. We sought to assess the diagnostic performance of a standalone smartphone PPG application, Cardiio Rhythm, for AF screening in a primary care setting.
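PPG-based AF screeners generally work by measuring the irregularity of inter-beat (RR) intervals extracted from the pulse waveform. The sketch below shows two standard irregularity measures on synthetic interval series; the function name, thres\u2011free comparison, and the simulated "sinus" and "AF-like" data are illustrative assumptions, not the Cardiio Rhythm algorithm.

```python
import numpy as np

def rr_irregularity(rr_s):
    """Two simple irregularity measures over inter-beat (RR) intervals:
    RMSSD (root mean square of successive differences, in seconds) and
    the coefficient of variation. Irregularly irregular rhythms such as
    AF yield markedly higher values than sinus rhythm."""
    rr = np.asarray(rr_s, dtype=float)
    rmssd = float(np.sqrt(np.mean(np.diff(rr) ** 2)))
    cv = float(rr.std() / rr.mean())
    return rmssd, cv

rng = np.random.default_rng(2)
sinus = rng.normal(0.80, 0.02, size=60)   # regular ~75 bpm rhythm
af = rng.normal(0.80, 0.15, size=60)      # highly variable intervals
```

A deployed screener would combine such features with a validated decision threshold or classifier; the point here is only that interval variability is the discriminating signal.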
Superior time perception for lower musical pitch explains why bass-ranged instruments lay down musical rhythms
- Proceedings of the National Academy of Sciences of the United States of America
The auditory environment typically contains several sound sources that overlap in time, and the auditory system parses the complex sound wave into streams or voices that represent the various sound sources. Music is also often polyphonic. Interestingly, the main melody (spectral/pitch information) is most often carried by the highest-pitched voice, and the rhythm (temporal foundation) is most often laid down by the lowest-pitched voice. Previous work using electroencephalography (EEG) demonstrated that the auditory cortex encodes pitch more robustly in the higher of two simultaneous tones or melodies, and modeling work indicated that this high-voice superiority for pitch originates in the sensory periphery. Here, we investigated the neural basis of carrying rhythmic timing information in lower-pitched voices. We presented simultaneous high-pitched and low-pitched tones in an isochronous stream and occasionally presented either the higher or the lower tone 50 ms earlier than expected, while leaving the other tone at the expected time. EEG recordings revealed that mismatch negativity responses were larger for timing deviants in the lower tones, indicating better timing encoding for lower-pitched compared with higher-pitched tones at the level of auditory cortex. A behavioral motor task revealed that tapping synchronization was more influenced by the lower-pitched stream. Results from a biologically plausible model of the auditory periphery suggest that nonlinear cochlear dynamics contribute to the observed effect. The low-voice superiority effect for encoding timing explains the widespread musical practice of carrying rhythm in bass-ranged instruments and complements previously established high-voice superiority effects for pitch and melody.
Two limitations of additive manufacturing methods that arise from layer-based fabrication are slow speed and geometric constraints (which include poor surface quality). The work reported here overcomes both limitations, introducing a new volumetric additive fabrication paradigm that produces photopolymer structures with complex nonperiodic three-dimensional geometries on a time scale of seconds. We implement this approach using holographic patterning of light fields, demonstrate the fabrication of a variety of structures, and study the properties of the light patterns and photosensitive resins required for this fabrication approach. The results indicate that low-absorbing resins containing ~0.1% photoinitiator, illuminated at modest powers (~10 to 100 mW), may be successfully used to build full structures in ~1 to 10 s.
In The Descent of Man, Darwin speculated that our capacity for musical rhythm reflects basic aspects of brain function broadly shared among animals. Although this remains an appealing idea, it is being challenged by modern cross-species research. This research hints that our capacity to synchronize to a beat, i.e., to move in time with a perceived pulse in a manner that is predictive and flexible across a broad range of tempi, may be shared by only a few other species. Is this really the case? If so, it would have important implications for our understanding of the evolution of human musicality.