SciCombinator

Discover the most talked about and latest scientific content & concepts.

Concept: Melody

Pitch discrimination tasks typically engage the superior temporal gyrus and the right inferior frontal gyrus. It is currently unclear whether these regions are equally involved in the processing of incongruous notes in melodies, which requires the representation of musical structure (tonality) in addition to pitch discrimination. To this end, 14 participants completed two tasks while undergoing functional magnetic resonance imaging, one in which they had to identify a pitch change in a series of non-melodic repeating tones and a second in which they had to identify an incongruous note in a tonal melody. In both tasks, the deviants activated the right superior temporal gyrus. A contrast between deviants in the melodic task and deviants in the non-melodic task (melodic > non-melodic) revealed additional activity in the right inferior parietal lobule. Activation in the inferior parietal lobule likely represents processes related to the maintenance of tonal pitch structure in working memory during pitch discrimination.

Concepts: Magnetic resonance imaging, Temporal lobe, Cerebrum, Superior temporal gyrus, Inferior frontal gyrus, Music, Theme, Melody

The organization of sound into meaningful units is fundamental to the processing of auditory information such as speech and music. In expressive music performance, structural units or phrases may become particularly distinguishable through subtle timing variations highlighting musical phrase boundaries. As such, expressive timing may support the successful parsing of otherwise continuous musical material. Using the event-related potential (ERP) technique, we investigated whether expressive timing modulates the neural processing of musical phrases. Musicians and laymen listened to short atonal scale-like melodies that were presented either isochronously (deadpan) or with expressive timing cues emphasizing the melodies' two-phrase structure. Melodies were presented in an active and a passive condition. Expressive timing facilitated the processing of phrase boundaries, as indicated by decreased N2b amplitude and enhanced P3a amplitude for target phrase boundaries and larger P2 amplitude for non-target boundaries. When timing cues were lacking, task demands increased, especially for laymen, as reflected by reduced P3a amplitude. Consistent with this, the N2b occurred earlier for musicians in both conditions, indicating generally faster target detection than in laymen. Importantly, the elicitation of a P3a-like response to phrase boundaries marked by a pitch leap during passive exposure suggests that expressive timing information is automatically encoded and may lead to an involuntary allocation of attention towards significant events within a melody. We conclude that subtle timing variations in music performance prepare the listener for musical key events by directing and guiding attention towards their occurrences. That is, expressive timing facilitates the structuring and parsing of continuous musical material even when the auditory input is unattended.

Concepts: Neuroscience, Phrase, Period, Sound, Music, Event-related potential, Melody, Motif

Western music is based on intervals; thus, interval discrimination is important for distinguishing the character of melodies and for tracking melodies in polyphonic music. The present study examines how intervals are encoded in simultaneously presented sounds.

Concepts: Music, Melody, Consonance and dissonance, Harmony, Octave, Renaissance music, Medieval music, Unison

Proper segmentation of auditory streams is essential for understanding music. Many cues, including meter, melodic contour, and harmony, influence adults' perception of musical phrase boundaries. To date, no studies have examined young children’s musical grouping in a production task. We used a musical self-pacing method to investigate (1) whether dwell times index young children’s musical phrase grouping and, if so, (2) whether children dwell longer on phrase boundaries defined by harmonic cues specifically. In Experiment 1, we asked 3-year-old children to self-pace through chord progressions from Bach chorales (sequences in which metrical, harmonic, and melodic contour grouping cues aligned) by pressing a computer key to present each chord in the sequence. Participants dwelled longer on chords in the 8th position, which corresponded to phrase endings. In Experiment 2, we tested 3-, 4-, and 7-year-old children’s sensitivity to harmonic cues to phrase grouping when metrical regularity cues and melodic contour cues were misaligned with the harmonic phrase boundaries. In this case, 7- and 4-year-olds, but not 3-year-olds, dwelled longer on harmonic phrase boundaries, suggesting that the influence of harmonic cues on phrase boundary perception develops substantially between 3 and 4 years of age in Western children. Overall, we show that the musical dwell time method is child-friendly and can be used to investigate various aspects of young children’s musical understanding, including phrase grouping and harmonic knowledge.

Concepts: Music, Chord, Melody, Harmony, Motif, Chord progression, Cadence, Chorale

People easily recognize a familiar melody in a previously unheard key, but they also retain some key-specific information. Does the recognition of a transposed melody depend on either pitch distance or harmonic distance from the initially learned instances? Previous research has shown a stronger effect of pitch closeness than of harmonic similarity, but did not directly test for an additional effect of the latter variable. In the present experiment, we familiarized participants with a simple eight-note melody in two different keys (C and D) and then tested their ability to discriminate the target melody from foils in other keys. The transpositions included were to the keys of C# (close in pitch height, but harmonically distant), G (more distant in pitch, but harmonically close), and F# (more distant in pitch and harmonically distant). Across participants, the transpositions to F# and G were either higher or lower than the initially trained melodies, so that their average pitch distances from C and D were equated. A signal detection theory analysis confirmed that discriminability (d') was better for targets and foils that were close in pitch distance to the studied exemplars. Harmonic similarity had no effect on discriminability, but it did affect response bias (c), in that harmonic similarity to the studied exemplars increased both hits and false alarms. Thus, both pitch distance and harmonic distance affect the recognition of transposed melodies, but with dissociable effects on discrimination and response bias.
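
For context on these measures: under the standard equal-variance signal detection model, sensitivity and bias are computed from the z-transformed hit rate H and false-alarm rate F as d' = z(H) - z(F) and c = -(z(H) + z(F)) / 2. The short Python sketch below uses hypothetical rates (not values reported in the study) to illustrate the pattern described above: raising hits and false alarms together shifts the criterion c while leaving d' unchanged.

    # Minimal sketch of an equal-variance signal detection analysis.
    # Hit and false-alarm rates are hypothetical, not taken from the study.
    from statistics import NormalDist

    def sdt_measures(hit_rate, fa_rate):
        """Return (d', c) under the equal-variance Gaussian SDT model."""
        z = NormalDist().inv_cdf                     # inverse standard normal CDF
        d_prime = z(hit_rate) - z(fa_rate)           # sensitivity d'
        criterion = -(z(hit_rate) + z(fa_rate)) / 2  # response bias c
        return d_prime, criterion

    # Both cases give the same d' (about 1.37), but the higher hit and
    # false-alarm rates in the first case shift c toward more liberal responding.
    print(sdt_measures(0.80, 0.30))   # d' ~= 1.37, c ~= -0.16
    print(sdt_measures(0.70, 0.20))   # d' ~= 1.37, c ~= +0.16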

Concepts: Effect, Affect, Detection theory, Length, Melody, Harmony, Psychophysics, Constant false alarm rate

Transcatheter pulmonary valve implantation is established as a valuable option for reconstructing failing right ventricular outflow tract function. Percutaneous tricuspid valve-in-valve or valve-in-ring reconstruction is also being applied with increasing acceptance. A 46-year-old woman with a diagnosis of carcinoid-dependent right heart failure underwent surgical bioprosthetic tricuspid and pulmonary valve replacement. Almost 1 year later, she presented again with a markedly dilated right heart and reduced right heart function caused by degeneration of both biologic valves. We report a successful two-stage percutaneous transcatheter double-valve replacement with the use of a Melody valve in the pulmonary and tricuspid positions.

Concepts: Cardiology, Heart, Pulmonary artery, Right ventricle, Tricuspid valve, Melody, Pulmonary valve

Several studies have shown that action-effect associations seem to enhance implicit learning of motor sequences. In a recent study (Haider et al., Conscious Cognit 26:145-161, 2014), we found indications that action-effect learning might play a special role in acquiring explicit knowledge within an implicit learning situation. The current study aims to directly manipulate the action-effect contingencies in a Serial Reaction Time Task and to examine their impact on explicit sequence knowledge. For this purpose, we created a situation in which the participants' responses led to a melodic tone sequence. For one group, these effect tones were contingently bound to the sequential responses and immediately followed the key press; for the second group, the tones were delayed by 400 ms. For a third group, the tones also followed the response immediately and resulted in the same melody but were not contingently bound to the responses. A fourth control group received no effect tones at all. Only the group that experienced contingent effect tones that directly followed the response showed an increase in explicit sequence knowledge. The results are discussed in terms of the multi-modal structure of action-effect associations and the ideomotor principle of action control.

Concepts: Series, Sequence, Learning, Knowledge, Permutation, Set, Melody, Monoid

Individuals with congenital amusia usually exhibit impairments in melodic contour processing when asked to compare pairs of melodies that may or may not be identical to one another. However, it is unclear whether the impairment observed in contour processing is caused by an impairment of pitch discrimination, or is a consequence of poor pitch memory. To help resolve this ambiguity, we designed a novel Self-paced Audio-visual Contour Task (SACT) that evaluates sensitivity to contour while placing minimal burden on memory. In this task, participants control the pace of an auditory contour that is simultaneously accompanied by a visual contour, and they are asked to judge whether the two contours are congruent or incongruent. In Experiment 1, melodic contours varying in pitch were presented with a series of dots that varied in spatial height. Amusics exhibited reduced sensitivity to audio-visual congruency in comparison to control participants. To exclude the possibility that the impairment arises from a general deficit in cross-modal mapping, Experiment 2 examined sensitivity to cross-modal mapping for two other auditory dimensions: timbral brightness and loudness. Amusics and controls were significantly more sensitive to large than small contour changes, and to changes in loudness than changes in timbre. However, there were no group differences in cross-modal mapping, suggesting that individuals with congenital amusia can comprehend spatial representations of acoustic information. Taken together, the findings indicate that pitch contour processing in congenital amusia remains impaired even when pitch memory is relatively unburdened.

Concepts: Pitch, Melody, Timbre, Klangfarbenmelodie, Melodic motion

In a continuous recognition paradigm, most stimuli elicit superior recognition performance when the item to be recognised is the most recent stimulus (a recency-in-memory effect). Furthermore, increasing the number of intervening items cumulatively disrupts memory in most domains. Memory for melodies composed in familiar tuning systems also shows superior recognition for the most recent melody, but no disruptive effects from the number of intervening melodies. A possible explanation has been offered in a novel regenerative multiple representations (RMR) conjecture. The RMR assumes that prior knowledge informs perception and that perception influences memory representations. It postulates that melodies are perceived, and thus also represented, simultaneously as integrated entities and as their components (such as pitches, pitch intervals, short phrases, and rhythm). Multiple representations of the melody components and of the melody as a whole can restore one another, thus providing resilience against disruptive effects from intervening items. The conjecture predicts that melodies in an unfamiliar tuning system are not perceived as integrated melodies and should therefore: a) disrupt recency-in-memory advantages; and b) facilitate disruptive effects from the number of intervening items. We test these two predictions in three experiments. Experiments 1 and 2 show that no recency-in-memory effects emerge for melodies in an unfamiliar tuning system. In Experiment 3, disruptive effects occurred as the number of intervening items and the unfamiliarity of the stimuli increased. Overall, the results are consistent with the predictions of the RMR conjecture. Further investigation of the conjecture’s predictions may lead to greater understanding of the fundamental relationships between memory, perception, and behavior.

Concepts: Psychology, Mathematics, Understanding, Cognition, Hypothesis, Pitch, Melody, Musical tuning

In music, a melodic motif is often played repeatedly in different pitch ranges and at different times. Event-related potential (ERP) studies have shown that the mismatch negativity (MMN) reflects memory trace processing that encodes two separate melodic lines (“voices”) with different motifs. Here we investigated whether a single motif presented in two voices is encoded as a single entity or as two separate entities, and whether motifs overlapping in time impede or enhance encoding strength. The electroencephalogram (EEG) was recorded from 11 musically trained participants while they passively listened to sequences of 5-note motifs in which the 5th note either descended (standard) or ascended (deviant) relative to the previous note (20% deviant rate). Motifs were either presented in one pitch range or alternated between two pitch ranges, creating an “upper” and a “lower” voice. Further, motifs were either temporally isolated (with silence in between) or temporally concurrent, with two tones overlapping. When motifs were temporally isolated, MMN amplitude in the one-pitch-range condition was similar to that in the two-pitch-range upper voice. In contrast, no MMN, but a P3a, was observed in the two-pitch-range lower voice. When motifs were temporally concurrent and presented in two pitch ranges, the MMN exhibited a more posterior distribution in the upper voice but, again, was absent in the lower voice. These results suggest that motifs presented in two separate voices are not encoded entirely independently, but hierarchically, causing asymmetry between upper- and lower-voice encoding even when no simultaneous pitches are presented.

Concepts: Electroencephalography, Evoked potential, Sound, Event-related potential, Encoder, Encoding, Melody, Motif