Concept: Musical instrument
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 7 years ago
Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. People reliably selected the actual winners of live music competitions based on silent video recordings, whereas neither musical novices nor professional musicians were able to identify the winners from sound recordings or from recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound’s physical characteristics, and machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to providing a representation rich enough to account for perceptual judgments of timbre by human listeners, as well as for recognition of musical instruments.
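The pipeline sketched in this abstract can be caricatured in a few lines: synthetic tones stand in for instrument recordings, crude spectro-temporal modulation statistics stand in for cortical receptive fields, and a nearest-centroid rule stands in for the nonlinear classifier. All signal parameters below are illustrative assumptions, not the study's actual front end.

```python
import numpy as np

def spectrotemporal_features(signal, sr=8000, win=256, hop=128):
    """Crude stand-in for cortical spectro-temporal receptive fields:
    a magnitude spectrogram summarized by joint spectral and temporal
    modulation statistics plus a spectral centroid."""
    frames = []
    for start in range(0, len(signal) - win, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + win] * np.hanning(win)))
        frames.append(spec)
    S = np.array(frames)                                # time x frequency
    spectral_mod = np.abs(np.diff(S, axis=1)).mean()    # roughness across frequency
    temporal_mod = np.abs(np.diff(S, axis=0)).mean()    # fluctuation across time
    centroid = (S.mean(0) * np.arange(S.shape[1])).sum() / S.mean(0).sum()
    return np.array([spectral_mod, temporal_mod, centroid])

def tone(freq, sr=8000, dur=0.5, attack=0.01, harmonics=(1.0, 0.5, 0.25)):
    """Synthetic 'instrument': harmonic tone with a controllable attack."""
    t = np.arange(int(sr * dur)) / sr
    env = np.minimum(t / attack, 1.0)        # attack time shapes the onset
    return env * sum(a * np.sin(2 * np.pi * freq * (k + 1) * t)
                     for k, a in enumerate(harmonics))

# Two synthetic "instruments": same pitch, different attack and spectrum.
sharp = spectrotemporal_features(tone(440, attack=0.005))
soft  = spectrotemporal_features(tone(440, attack=0.2, harmonics=(1.0, 0.1)))

# Nearest-centroid "classifier": an unseen sharp-attack tone at another
# pitch should land closer to the sharp-attack prototype, i.e. the
# features generalize across pitch, as the abstract emphasizes.
probe = spectrotemporal_features(tone(523, attack=0.005))
label = ("sharp" if np.linalg.norm(probe - sharp) < np.linalg.norm(probe - soft)
         else "soft")
```

The point of the sketch is only that joint spectral and temporal statistics, rather than either alone, separate the two timbres while remaining stable across pitch.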
For the perception of timbre of a musical instrument, the attack time is known to hold crucial information. The first 50 to 150 ms of sound onset reflect the excitation mechanism, which generates the sound. Since auditory processing, and music perception in particular, are known to be hampered in cochlear implant (CI) users, we conducted an electroencephalography (EEG) study with an oddball paradigm to evaluate the processing of small differences in musical sound onset. The first 60 ms of a cornet sound were manipulated in order to examine whether these differences are detected by CI users and normal-hearing controls (NH controls), as revealed by auditory evoked potentials (AEPs). Our analysis focused on the N1 as an exogenous component known to reflect physical stimulus properties, as well as on the P2 and the Mismatch Negativity (MMN). Our results revealed different N1 latencies as well as P2 amplitudes and latencies for the onset manipulations in both groups. An MMN could be elicited only in the NH control group. Together with additional findings that suggest an impact of musical training on CI users' AEPs, our findings support the view that impaired timbre perception in CI users is at least partly due to altered sound onset feature detection.
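The MMN analysis described above reduces to a difference wave between averaged deviant and standard epochs, with its peak negativity sought in a typical latency window. A minimal sketch with synthetic AEPs; all latencies, amplitudes, trial counts, and the noise level are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 500                        # Hz, assumed EEG sampling rate
t = np.arange(0, 0.4, 1 / sr)   # 0-400 ms epoch

def epoch(deviant, noise=2.0):
    """Synthetic AEP: N1 near 100 ms, P2 near 200 ms; deviants carry an
    extra negativity around 180 ms (the MMN), plus Gaussian noise."""
    erp = (-3.0 * np.exp(-((t - 0.10) ** 2) / 0.0005)    # N1
           + 2.0 * np.exp(-((t - 0.20) ** 2) / 0.0010))  # P2
    if deviant:
        erp -= 1.5 * np.exp(-((t - 0.18) ** 2) / 0.0020)  # MMN
    return erp + noise * rng.standard_normal(t.size)

# Oddball ratio: many standards, few deviants, as in a typical paradigm.
standards = np.mean([epoch(False) for _ in range(400)], axis=0)
deviants  = np.mean([epoch(True)  for _ in range(80)],  axis=0)

mmn = deviants - standards           # difference wave
win = (t >= 0.10) & (t <= 0.25)      # typical MMN latency window
mmn_amp = mmn[win].min()             # peak negativity in the window
```

Averaging cancels the shared N1/P2 deflections, so only the deviance-related negativity survives in the difference wave; an absent MMN, as in the CI group, would leave `mmn_amp` near zero.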
The integration of frailty measures in clinical practice is crucial for the development of interventions against disabling conditions in older persons. The frailty phenotype (proposed and validated by Fried and colleagues in the Cardiovascular Health Study) and the Frailty Index (proposed and validated by Rockwood and colleagues in the Canadian Study of Health and Aging) represent the best-known operational definitions of frailty in older persons. Unfortunately, they are often wrongly considered as alternatives and/or substitutable. These two instruments are indeed very different and should rather be considered as complementary. In the present paper, we discuss the designs and rationales of the two instruments, and propose how each can properly be implemented in the clinical setting.
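The difference between the two instruments is concrete: the phenotype counts five fixed criteria, whereas the Frailty Index is the proportion of accumulated deficits out of those assessed. A minimal sketch of the index; the item names and the small item count are illustrative only, since a real index draws on 30 or more deficits:

```python
def frailty_index(deficits):
    """Rockwood-style Frailty Index: proportion of deficits present
    (each scored 0..1) out of deficits actually assessed.
    None marks an item that was not assessed and is excluded."""
    assessed = [v for v in deficits.values() if v is not None]
    return sum(assessed) / len(assessed)

# Hypothetical patient; item names are illustrative, not a validated set.
patient = {
    "mobility_impairment": 1,
    "low_grip_strength": 0.5,    # graded deficits are allowed
    "polypharmacy": 1,
    "memory_complaint": 0,
    "weight_loss": 0,
    "fatigue": 1,
    "visual_impairment": None,   # not assessed
}
fi = frailty_index(patient)      # 3.5 present out of 6 assessed
```

The phenotype, by contrast, would simply count how many of its five criteria (weakness, slowness, weight loss, exhaustion, low activity) are met, labeling three or more as frail, which is why the two measures can classify the same patient differently.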
Reading music and playing a musical instrument is a complex activity that comprises motor and multisensory (auditory, visual, and somatosensory) integration in a unique way. Music also has a well-known impact on emotional state and can be a motivating activity. For those reasons, musical training has become a useful framework to study brain plasticity. Our aim was to study the specific effects of musical training vs. the effects of other leisure activities in elderly people. With that purpose we evaluated the impact of piano training on cognitive function, mood and quality of life (QOL) in older adults. A group of participants that received piano lessons and did daily training for 4 months (n = 13) was compared to an age-matched control group (n = 16) that participated in other types of leisure activities (physical exercise, computer lessons, and painting lessons, among others). An exhaustive assessment that included neuropsychological tests as well as mood and QOL questionnaires was carried out in both groups before starting the piano program and immediately after finishing it (4 months later). We found a significant improvement in the piano training group on the Stroop test, which measures executive function, inhibitory control and divided attention. Furthermore, a trend indicating an enhancement of visual scanning and motor ability was also found (Trail Making Test part A). Finally, in our study piano lessons decreased depression, induced positive mood states, and improved the psychological and physical QOL of the elderly. Our results suggest that playing piano and learning to read music can be a useful intervention in older adults to promote cognitive reserve (CR) and improve subjective well-being.
- Child's nervous system : ChNS : official journal of the International Society for Pediatric Neurosurgery
- Published almost 8 years ago
The authors illustrate the cases of two children with headaches, one diagnosed with Chiari type 1 malformation and the other with hydrocephalus, who played wind instruments. Both patients reported that their headaches worsened with the efforts made while playing their musical instruments. We briefly comment on the probable role played by this activity in the patients' intracranial pressure and hypothesize that the headaches might be influenced by increases in intracranial pressure related to Valsalva maneuvers. We had serious doubts about whether we should advise our young patients to give up playing their musical instruments.
Reticulocytes are the most sensitive index available to authorities who seek to sanction athletes for blood doping based on deviations beyond individual reference ranges. Because such data comprise longitudinal results that are generated by different laboratories, the comparability of reticulocyte counts from different instruments is of crucial importance.
We describe a course in musical acoustics that required undergraduate students to design and build unique musical instruments, compose music for ensembles of them, and then perform the compositions in a public concert. Unlike most courses in musical acoustics, in which students build homemade instruments only as a final project, here the construction of the instruments and the composition of original music were the primary goals of the course. The instruments were required to be artistic, visually interesting, and able to play a pitch collection from the Western scale. We describe the challenges and successes, show examples of the instruments, and review the lessons learned.
The low frequency vibrations of two-headed musical drums are known to couple. However, little is known regarding the factors that determine the degree of coupling at higher frequencies. In this study, the effects that depth, diameter, and head tension have on the tendency of the drumheads to couple are investigated. Commercial finite element software was used to model a wide range of drums and to identify trends of coupling according to these factors. The numerical results were used to guide which parameters should be tested in the lab. Experimentally, two oppositely facing Electronic Speckle-Pattern Interferometry systems were used to optically view the simultaneous vibrations of both heads of a drum. Several snare and tom tom drums with different diameters, depths, and tensions were observed. To closely analyze the effect of drum depth, a tom tom was modified so that a range of depths from 1.5″ to 40″ could be tested while keeping the diameter and tension constant. The optical and numerical data are used to illustrate trends in the coupling of musical drumheads.
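The qualitative trend that motivates the depth sweep above can be illustrated with the simplest possible model: two identical heads coupled by the stiffness of the enclosed air, which falls as the shell gets deeper and the volume grows. This two-degree-of-freedom caricature is an assumption on my part, not the finite element model used in the study:

```python
import numpy as np

def coupled_mode_freqs(k, kc, m=1.0):
    """Two identical drumheads (lumped mass m, stiffness k) coupled by
    the enclosed air (stiffness kc): eigenfrequencies (Hz) of the
    resulting 2-DOF system K x = omega^2 M x."""
    K = np.array([[k + kc, -kc],
                  [-kc,    k + kc]])
    M = np.eye(2) * m
    evals = np.linalg.eigvalsh(np.linalg.inv(M) @ K)   # omega^2 values
    return np.sqrt(evals) / (2 * np.pi)

# Air stiffness scales roughly as 1/volume, hence 1/depth for fixed
# diameter, so a deeper drum should show less splitting between the
# in-phase and out-of-phase coupled modes. Numbers are illustrative.
shallow = coupled_mode_freqs(k=4000.0, kc=800.0)   # shallow shell
deep    = coupled_mode_freqs(k=4000.0, kc=100.0)   # deep shell

split_shallow = shallow[1] - shallow[0]
split_deep    = deep[1] - deep[0]
```

The in-phase mode sits at sqrt(k/m) regardless of the air coupling, while the out-of-phase mode is pushed up to sqrt((k + 2kc)/m), so the frequency splitting is a direct readout of the head-to-head coupling strength.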
The flute uses energy very inefficiently: less than 1% of the blowing energy ends up in the produced sound, because of the high impedance of the blowing hole, which suggests heavy damping in the system. Such damping is known from turbulent flows at high Reynolds numbers. In a FEM simulation of the Navier-Stokes equations for the flute, the inflow of blowing velocity into the flue shows unrealistic values. An alternative is Kolmogorov's treatment of turbulent flow as a cascade of vortices satisfying a scaling law, which appears as a steady slope in a log-log relation between vortex size and damping. This is incorporated into the Navier-Stokes model as a Reynolds-Averaged Navier-Stokes (RANS) model. In a FEM simulation of the flute using this model, the inflow of energy into the flute is modeled realistically, pointing to damping as the crucial factor in flute sound production. As a second example, a transient FEM model of flow-structure coupling in a saxophone mouthpiece also shows strong damping of the inflow into the mouthpiece due to strong turbulence there. Only because of this strong damping does the system, when coupled to a tube, settle into stable oscillations and therefore stable tone production.
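Two of the quantities invoked above are easy to make concrete: the Reynolds number of the jet leaving the flue, and Kolmogorov's inertial-range scaling E(k) = C eps^(2/3) k^(-5/3), whose signature is a constant -5/3 slope in log-log coordinates. The jet speed and flue height below are assumed, order-of-magnitude values, not measurements from the study:

```python
import math

# Reynolds number of the flute jet: Re = U * d / nu.
U = 30.0        # jet speed in m/s (assumed, typical blowing velocity)
d = 1.0e-3      # flue channel height in m (assumed)
nu = 1.5e-5     # kinematic viscosity of air in m^2/s
Re = U * d / nu  # ~2000: a transitional-to-turbulent jet

def energy_spectrum(k, eps=1.0, C=1.5):
    """Kolmogorov inertial-range spectrum E(k) = C eps^(2/3) k^(-5/3);
    eps is the energy dissipation rate, C the Kolmogorov constant."""
    return C * eps ** (2 / 3) * k ** (-5 / 3)

# The scaling law is a straight line of slope -5/3 in log-log axes,
# which is the "steady slope" the RANS closure builds in.
k1, k2 = 10.0, 100.0
slope = ((math.log(energy_spectrum(k2)) - math.log(energy_spectrum(k1)))
         / (math.log(k2) - math.log(k1)))
```

The point of a RANS closure is precisely to replace the unresolvable vortex cascade with its averaged effect, i.e. the extra damping that this spectrum implies, which is why the simulated inflow becomes realistic once the model is used.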