- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 5 years ago
The perception of the pitch of harmonic complex sounds is a crucial function of human audition, especially in music and speech processing. Whether the underlying mechanisms of pitch perception are unique to humans, however, is unknown. Based on estimates of frequency resolution at the level of the auditory periphery, psychoacoustic studies in humans have revealed several primary features of central pitch mechanisms. It has been shown that (i) pitch strength of a harmonic tone is dominated by resolved harmonics; (ii) pitch of resolved harmonics is sensitive to the quality of spectral harmonicity; and (iii) pitch of unresolved harmonics is sensitive to the salience of temporal envelope cues. Here we show that, at a fundamental frequency of 440 Hz (standard musical tuning), the common marmoset (Callithrix jacchus), a New World monkey with a hearing range similar to that of humans, exhibits all of the primary features of central pitch mechanisms demonstrated in humans. Thus, marmosets and humans may share similar pitch perception mechanisms, suggesting that these mechanisms may have emerged early in primate evolution.
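The resolved/unresolved distinction above can be illustrated with a short synthesis sketch. This is a minimal example assuming NumPy and a 44.1 kHz sample rate; the choice of harmonic ranges (1-4 as "resolved", 9-12 as "unresolved") is illustrative, not taken from the paper.

```python
import numpy as np

def harmonic_complex(f0, harmonics, dur=0.5, sr=44100):
    """Sum of equal-amplitude sine partials at integer multiples of f0."""
    t = np.arange(int(dur * sr)) / sr
    tone = sum(np.sin(2 * np.pi * n * f0 * t) for n in harmonics)
    return tone / len(harmonics)  # keep peak amplitude roughly within [-1, 1]

# "Resolved" harmonics: low-order partials that fall into separate cochlear
# filters; "unresolved": high-order partials that interact within a single
# filter, leaving mainly temporal-envelope cues at the repetition rate f0.
resolved = harmonic_complex(440.0, range(1, 5))
unresolved = harmonic_complex(440.0, range(9, 13))
```

Both stimuli share the same 440 Hz fundamental periodicity, yet only the first contains peripherally resolvable partials, which is the contrast the psychoacoustic features (i)-(iii) rest on.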
In recent years, a few methods have been developed to translate human EEG into music. In 2009 (PLoS ONE 4: e5915), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG is translated into musical pitch according to the power law that both obey, the period of an EEG waveform is translated directly into the duration of a note, and the logarithm of the average power change of the EEG is translated into musical intensity according to Fechner's law. In this work, we propose using a simultaneously recorded fMRI signal to control the intensity of the EEG music, so that EEG-fMRI music is generated by combining two different, simultaneously acquired brain signals. Most importantly, this approach also realizes a power law for musical intensity, since the fMRI signal itself follows one. The EEG-fMRI music thus takes a step forward in reflecting the physiological processes of the scale-free brain.
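The two mappings described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the MIDI ranges, the log-based normalization, and the synthetic heavy-tailed amplitudes are all assumptions made here for demonstration.

```python
import numpy as np

def amplitude_to_pitch(amp, pitch_min=36, pitch_max=96):
    """Map EEG amplitude to a MIDI pitch. EEG amplitudes follow a power-law
    (heavy-tailed) distribution, so a log transform before linear scaling is
    one simple way to spread them over the pitch range (sketch only)."""
    a = np.log(amp)
    a = (a - a.min()) / (a.max() - a.min())            # normalize to [0, 1]
    return np.round(pitch_min + a * (pitch_max - pitch_min)).astype(int)

def power_to_intensity(power, vel_min=30, vel_max=110):
    """Fechner's law: perceived intensity grows with the logarithm of the
    physical stimulus, so map log average power to MIDI velocity."""
    p = np.log(power)
    p = (p - p.min()) / (p.max() - p.min())
    return np.round(vel_min + p * (vel_max - vel_min)).astype(int)

rng = np.random.default_rng(0)
amps = rng.pareto(2.0, size=64) + 1.0                  # power-law-like stand-in for EEG amplitudes
pitches = amplitude_to_pitch(amps)
velocities = power_to_intensity(amps ** 2)
```

In the EEG-fMRI variant described in the abstract, the `power` input to the intensity mapping would come from the simultaneously recorded fMRI signal rather than from the EEG itself.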
There is increasing concern that anthropogenic noise could have a significant impact on the marine environment, but data are still insufficient for most invertebrates. What do they perceive? We investigated this question in the oyster Magallana gigas (Crassostrea gigas) using pure tone exposures, an accelerometer fixed on the oyster shell, and a hydrophone in the water column. Groups of 16 oysters were exposed to quantifiable waterborne sinusoidal sounds in the range of 10 Hz to 20 kHz at various acoustic energies. The experiment was conducted in running seawater using an experimental flume equipped with suspended loudspeakers. The sensitivity of the oysters was measured by recording their valve movements with high-frequency noninvasive valvometry. The tests were 3 min tone exposures including a 70 s fade-in period. Three endpoints were analysed: the proportion of responding individuals in the group, the resulting change in valve-opening amplitude, and the response latency. At sufficiently high acoustic energy, oysters transiently closed their valves in response to frequencies in the range of 10 to <1000 Hz, with maximum sensitivity from 10 to 200 Hz. The minimum acoustic energy required to elicit a response was 0.02 m·s⁻² at 122 dB rms re 1 μPa for frequencies ranging from 10 to 80 Hz. As a partial valve closure cannot be differentiated from a nociceptive response, it is very likely that oysters detect sounds at lower acoustic energies. The mechanisms involved in sound detection and the ecological consequences are discussed.
The “just noticeable difference” (JND) represents the minimum amount by which a stimulus must change to produce a noticeable variation in one’s perceptual experience; according to Weber’s law, the JND scales linearly with stimulus magnitude. Recent work has shown that within-participant standard deviations of grip aperture (i.e., JNDs) increase linearly with increasing object size during the early, but not the late, stages of goal-directed grasping. A visually based explanation for this finding is that the early and late stages of grasping are respectively mediated by relative and absolute visual information and therefore render a time-dependent adherence to Weber’s law. Alternatively, a motor-based explanation contends that the larger aperture shaping impulses required for larger objects give rise to a stochastic increase in the variability of motor output (i.e., impulse-variability hypothesis). To test the second explanation, we had participants grasp differently sized objects under grasping time criteria of 400 and 800 ms. Thus, the 400 ms condition required larger aperture shaping impulses than the 800 ms condition. In line with previous work, JNDs during early aperture shaping (i.e., at the time of peak aperture acceleration and peak aperture velocity) for both the 400 and 800 ms conditions scaled linearly with object size, whereas JNDs later in the response (i.e., at the time of peak grip aperture) did not. Moreover, the 400 and 800 ms conditions produced comparable slopes relating JNDs to object size. In other words, larger aperture shaping impulses did not give rise to a stochastic increase in aperture variability at each object size. As such, the theoretical tenets of the impulse-variability hypothesis do not provide a viable framework for the time-dependent scaling of JNDs to object size. Instead, we propose that a dynamic interplay between relative and absolute visual information gives rise to grasp trajectories that exhibit early adherence to, and late violation of, Weber’s law.
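The key analysis above, regressing within-participant aperture variability (JND) on object size and comparing slopes across grasp stages, can be sketched with simulated data. All values (object sizes, the Weber fraction `k`, trial counts) are hypothetical and chosen only to show the shape of the analysis.

```python
import numpy as np

# Weber's law predicts JND = k * S. Simulate grip-aperture variability that
# scales with object size (early grasp stage) versus constant noise (late
# stage), then recover the slope relating JND to size by least squares.
rng = np.random.default_rng(1)
sizes = np.array([20.0, 30.0, 40.0, 50.0])   # hypothetical object widths, mm
k = 0.05                                     # hypothetical Weber fraction

early_jnd = np.array([np.std(rng.normal(s, k * s, 200)) for s in sizes])
late_jnd = np.array([np.std(rng.normal(s, 1.0, 200)) for s in sizes])

slope_early = np.polyfit(sizes, early_jnd, 1)[0]   # positive: JND scales with size
slope_late = np.polyfit(sizes, late_jnd, 1)[0]     # near zero: no scaling
```

In the study's logic, comparable `slope_early` values across the 400 and 800 ms conditions are what rule out the impulse-variability account, since that account predicts a steeper slope when larger shaping impulses are required.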
Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and the enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance, respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same ‘skill’ could result in the sensory overload that is often reported, which can in turn interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or an inability to maintain focus, increases our understanding of this complex condition and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli.
The training response to an intensified period of high-intensity exercise is not clear. Therefore, we compared the cardiovascular adaptations produced by completing 24 high-intensity aerobic interval training sessions over either three or eight weeks.
Superior time perception for lower musical pitch explains why bass-ranged instruments lay down musical rhythms
- Proceedings of the National Academy of Sciences of the United States of America
- Published over 6 years ago
The auditory environment typically contains several sound sources that overlap in time, and the auditory system parses the complex sound wave into streams or voices that represent the various sound sources. Music is also often polyphonic. Interestingly, the main melody (spectral/pitch information) is most often carried by the highest-pitched voice, and the rhythm (temporal foundation) is most often laid down by the lowest-pitched voice. Previous work using electroencephalography (EEG) demonstrated that the auditory cortex encodes pitch more robustly in the higher of two simultaneous tones or melodies, and modeling work indicated that this high-voice superiority for pitch originates in the sensory periphery. Here, we investigated the neural basis of carrying rhythmic timing information in lower-pitched voices. We presented simultaneous high-pitched and low-pitched tones in an isochronous stream and occasionally presented either the higher or the lower tone 50 ms earlier than expected, while leaving the other tone at the expected time. EEG recordings revealed that mismatch negativity responses were larger for timing deviants of the lower tones, indicating better timing encoding for lower-pitched compared with higher-pitched tones at the level of auditory cortex. A behavioral motor task revealed that tapping synchronization was more influenced by the lower-pitched stream. Results from a biologically plausible model of the auditory periphery suggest that nonlinear cochlear dynamics contribute to the observed effect. The low-voice superiority effect for encoding timing explains the widespread musical practice of carrying rhythm in bass-ranged instruments and complements previously established high-voice superiority effects for pitch and melody.
Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (na-zok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers' performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.
Objective This study aimed to compare the timbre recognition and preferences of young adolescents with cochlear implants (CIs) to those of adolescents with normal hearing (NH). Methods Nine Korean adolescents with CIs and 25 adolescents with NH participated in this study. After listening to each of four Western instruments and five traditional Korean instruments, participants were asked to identify the presented instruments and rate how much they liked the timbres. Results The results showed that the CI group recognized instruments significantly less often than the NH group. They also tended to show relatively higher recognition of instruments with a rapid and strong attack time. With regard to timbre preferences, no significant differences were found between the groups. Discussion Young adolescents with CIs show potential for detecting salient features in sound information, especially instrumental timbre. This study suggests how sounds of varying origins and tone qualities might be incorporated into music perception training and education for this population.
- IEEE transactions on visualization and computer graphics
- Published about 5 years ago
Models of human perception - including perceptual “laws” - can be valuable tools for deriving visualization design recommendations. However, it is important to assess the explanatory power of such models when using them to inform design. We present a secondary analysis of data previously used to rank the effectiveness of bivariate visualizations for assessing correlation (measured with Pearson’s r) according to the well-known Weber-Fechner Law. Beginning with the model of Harrison et al., we present a sequence of refinements including incorporation of individual differences, log transformation, censored regression, and adoption of Bayesian statistics. Our model incorporates all observations dropped from the original analysis, including data near ceilings caused by the data collection process and entire visualizations dropped due to large numbers of observations worse than chance. This model deviates from Weber’s Law, but provides improved predictive accuracy and generalization. Using Bayesian credibility intervals, we derive a partial ranking that groups visualizations with similar performance, and we give precise estimates of the difference in performance between these groups. We find that compared to other visualizations, scatterplots are unique in combining low variance between individuals and high precision on both positively- and negatively-correlated data. We conclude with a discussion of the value of data sharing and replication, and share implications for modeling similar experimental data.
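The Weber-law-style baseline that the ranking above builds on can be sketched in a few lines. This follows the commonly used linear-JND form for correlation perception, jnd(r) = k·(1/b − r), with per-visualization parameters k and b; the parameter values below are illustrative placeholders, not fitted values from any study.

```python
import numpy as np

def jnd(r, k=0.2, b=0.9):
    """Just-noticeable difference in Pearson's r under a Weber-law-style
    model: discriminability improves (JND shrinks) linearly as r approaches
    1. k and b would be fitted separately for each visualization type."""
    return k * (1.0 / b - r)

rs = np.linspace(0.1, 0.9, 9)
jnds = jnd(rs)
# Higher correlations are easier to discriminate (smaller JND), which is
# the regularity the secondary analysis re-examines and partly rejects.
```

Ranking visualizations then amounts to comparing their fitted k (and b) values; the abstract's point is that this simple law fits the full dataset, including censored and near-ceiling observations, less well than the refined Bayesian model.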