We investigated the neural correlates induced by prenatal exposure to melodies using event-related potentials (ERPs). During the last trimester of pregnancy, the mothers in the learning group played the melody ‘Twinkle, Twinkle, Little Star’ five times per week. After birth and again at the age of 4 months, we played the infants a modified melody in which some of the notes were changed, while ERPs to unchanged and changed notes were recorded. The ERPs were also recorded from a control group who received no prenatal stimulation. Both at birth and at the age of 4 months, infants in the learning group had stronger ERPs to the unchanged notes than the control group. Furthermore, the ERP amplitudes to the changed and unchanged notes at birth were correlated with the amount of prenatal exposure. Our results show that extensive prenatal exposure to a melody induces neural representations that last for several months.
We introduce a Maximum Entropy model able to capture the statistics of melodies in music. The model can be used to generate new melodies that emulate the style of a given musical corpus. Instead of using the n-body interactions of (n-1)-order Markov models, traditionally used in automatic music generation, we use a k-nearest neighbour model with pairwise interactions only. In that way, we keep the number of parameters low and avoid the over-fitting problems typical of Markov models. We show that long-range musical phrases do not need to be explicitly enforced using high-order Markov interactions, but can instead emerge from multiple, competing, pairwise interactions. We validate our Maximum Entropy model by assessing how well the generated sequences capture the style of the original corpus without plagiarizing it. To this end we use a data-compression approach to quantify the degree of borrowing and innovation in the artificial sequences. Our modelling scheme outperforms both fixed-order and variable-order Markov models. This shows that, despite being based only on pairwise interactions, our scheme opens the possibility of generating musically sensible alterations of the original phrases, providing a way to generate innovation.
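The pairwise scheme described above can be illustrated with a toy sketch (not the authors' implementation). Couplings between notes up to k positions apart are estimated from corpus co-occurrence statistics, here via a crude pointwise-mutual-information shortcut rather than proper maximum-entropy fitting, and new melodies are then drawn by Gibbs sampling. The function names, the smoothing constant, and the fitting shortcut are all illustrative assumptions.

```python
# Toy sketch of a pairwise maximum-entropy melody model: local fields h(a)
# plus distance-dependent couplings J_d(a, b) for d = 1..k, fitted naively
# from corpus statistics (a PMI-style approximation, NOT the exact MaxEnt fit
# of the paper), then sampled with Gibbs sweeps.
import random
from collections import Counter
from math import log, exp

def fit(corpus, k=3, alpha=1.0):
    """Estimate fields h and couplings J from a list of note sequences."""
    notes = sorted({n for seq in corpus for n in seq})
    uni = Counter(n for seq in corpus for n in seq)
    tot = sum(uni.values())
    # Smoothed unigram probabilities -> local fields h(a) = log p(a).
    p = {a: (uni[a] + alpha) / (tot + alpha * len(notes)) for a in notes}
    h = {a: log(p[a]) for a in notes}
    J = {}
    for d in range(1, k + 1):
        pc = Counter()
        for seq in corpus:
            for i in range(len(seq) - d):
                pc[(seq[i], seq[i + d])] += 1
        tp = sum(pc.values())
        # PMI-like coupling: log p_d(a, b) - log p(a) - log p(b).
        J[d] = {(a, b): log((pc[(a, b)] + alpha) / (tp + alpha * len(notes) ** 2))
                        - log(p[a]) - log(p[b])
                for a in notes for b in notes}
    return notes, h, J

def gibbs_sample(notes, h, J, length=16, sweeps=200, seed=0):
    """Resample each position conditioned on its k nearest neighbours."""
    rng = random.Random(seed)
    k = max(J)
    seq = [rng.choice(notes) for _ in range(length)]
    for _ in range(sweeps):
        for i in range(length):
            weights = []
            for a in notes:
                e = h[a]
                for d in range(1, k + 1):
                    if i - d >= 0:
                        e += J[d][(seq[i - d], a)]
                    if i + d < length:
                        e += J[d][(a, seq[i + d])]
                weights.append(exp(e))
            # Sample a note proportionally to exp(energy contribution).
            r = rng.random() * sum(weights)
            for a, w in zip(notes, weights):
                r -= w
                if r <= 0:
                    seq[i] = a
                    break
    return seq
```

Because every position interacts with neighbours at several distances at once, a sampled note is constrained by multiple competing pairwise terms, which is the mechanism the abstract credits for longer-range structure emerging without high-order Markov interactions.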
Contact inhibition of locomotion (CIL) is a multifaceted process that causes many cell types to repel each other upon collision. During development, this seemingly uncoordinated reaction is a critical driver of cellular dispersion within embryonic tissues. Here, we show that Drosophila hemocytes require a precisely orchestrated CIL response for their developmental dispersal. Hemocyte collision and subsequent repulsion involves a stereotyped sequence of kinematic stages that are modulated by global changes in cytoskeletal dynamics. Tracking actin retrograde flow within hemocytes in vivo reveals synchronous reorganization of colliding actin networks through engagement of an inter-cellular adhesion. This inter-cellular actin-clutch leads to a subsequent build-up in lamellar tension, triggering the development of a transient stress fiber, which orchestrates cellular repulsion. Our findings reveal that the physical coupling of the flowing actin networks during CIL acts as a mechanotransducer, allowing cells to haptically sense each other and coordinate their behaviors.
Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a melodic contour task and an easier transposition task); they had to indicate whether sequences of six tones (presented in pairs) were the same or different. Behavioural data indicated that in comparison with control participants, amusics' short-term memory was impaired for the melodic contour task, but not for the transposition task. The major finding was that pitch processing and short-term memory deficits can be traced back to amusics' early brain responses during encoding of the melodic information. Temporal and frontal generators of the N100m evoked by each note of the melody were abnormally recruited in the amusic brain. Dynamic causal modelling of the N100m further revealed decreased intrinsic connectivity in both auditory cortices, increased lateral connectivity between auditory cortices, as well as decreased right fronto-temporal backward connectivity in amusics relative to control subjects. Abnormal functioning of this fronto-temporal network was also shown during the retention interval and the retrieval of melodic information. In particular, induced gamma oscillations in right frontal areas were decreased in amusics during the retention interval. Using voxel-based morphometry, we confirmed morphological brain anomalies in terms of white and grey matter concentration in the right inferior frontal gyrus and the right superior temporal gyrus in the amusic brain. The convergence between functional and structural brain differences strengthens the hypothesis of abnormalities in the fronto-temporal pathway of the amusic brain.
Our data provide the first evidence of altered functioning of the auditory cortices during pitch perception and memory in congenital amusia. They further support the hypothesis that in neurodevelopmental disorders impacting high-level functions (here, musical abilities), abnormalities in cerebral processing can be observed in early brain responses.
Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to use positron emission tomography (PET) to examine the neural substrates involved in accessing the “song lexicon”, a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the sung syllable ‘la’ on the original pitches (melody). The auditory stimuli were designed to have equivalent familiarity to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound type decision task (control) that was designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations in order to facilitate song recognition.
For 1 to 2 weeks, 5-month-old infants listened at home to one of two novel songs with identical lyrics and rhythms, but different melodies; the song was sung by a parent, emanated from a toy, or was sung live by a friendly but unfamiliar adult first in person and subsequently via interactive video. We then tested the infants' selective attention to two novel individuals after one sang the familiar song and the other sang the unfamiliar song. Infants who had experienced a parent singing looked longer at the new person who had sung the familiar melody than at the new person who had sung the unfamiliar melody, and the amount of song exposure at home predicted the size of that preference. Neither effect was observed, however, among infants who had heard the song emanating from a toy or being sung by a socially unrelated person, despite these infants' remarkable memory for the familiar melody, tested an average of more than 8 months later. These findings suggest that melodies produced live and experienced at home by known social partners carry social meaning for infants.
We studied human melody perception and production in a songbird in the light of current concepts from the cognitive neuroscience of music. Bullfinches are the species best known for learning melodies from human teachers. The study is based on historical data from 15 bullfinches, raised by 3 different human tutors and studied later by Jürgen Nicolai (JN) in the period 1967-1975. These hand-raised bullfinches learned human folk melodies (sequences of 20-50 notes) accurately. The tutoring was interactive and variable, starting before fledging, and JN continued it throughout the birds' lives. All 15 bullfinches learned to sing melody modules alternately with JN (alternate singing). We focus on note sequencing and timing, studying song variability when the birds sang the learned melody alone and the accuracy of listening-singing interactions during alternate singing with JN, by analyzing song recordings of 5 different males. The results were as follows: (1) Sequencing: the variability of the note sequence when singing alone suggests that the bullfinches retrieve the note sequence from memory as different sets of note groups (=modules), as chunks (sensu Miller in Psychol Rev 63:81-87, 1956). (2) Auditory-motor interactions, the coupling of listening and singing the human melody: alternate singing provides insight into how the bird's brain processes the melody, from listening to the part of the human melody just whistled by JN to accurately singing the consecutive part itself. We document how variably and how accurately bullfinches and JN alternated in singing the note sequences. Alternate singing demonstrates that melody-singing bullfinches not only attentively followed the human's just-whistled note contribution via auditory feedback, but could also anticipate and synchronously begin singing the consecutive part of the learned melody.
These data suggest that both listening and singing may depend on a single learned human melody representation (=coupling between perception and production).
Current research on music processing and syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language. Further, pitch perception is modulated by musical training. The present study investigated how music and language interact in the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model) paired with prosodic utterances which were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). Participants' (22 musicians, 20 nonmusicians) ERPs and behavioural responses in a statement/question discrimination task were recorded. Participants were faster for simultaneous expectancy violations in the melodic and linguistic stimuli. Further, musicians performed better than nonmusicians, which may be related to their increased pitch tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians presented a reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians' P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial aspect of expertise could be attributed to its strengthening effect on general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing in the pitch dimension, and further demonstrate a potential modulation by musical expertise.
Parkinson’s disease is characterized not only by bradykinesia, rigidity, and tremor, but also by impairments of expressive and receptive linguistic prosody. The facilitating effect of music with a salient beat on patients' gait suggests that it might have a similar effect on vocal behavior; however, it is currently unknown whether singing is affected by the disease. In the present study, fifteen Parkinson's patients were compared with fifteen healthy controls during the singing of familiar melodies and improvised melodic continuations. While patients' speech could reliably be distinguished from that of healthy controls matched for age and gender, purely on the basis of aural perception, no significant differences in singing were observed, either in pitch, pitch range, pitch variability, and tempo, or in scale tone distribution, interval size, and interval variability. The apparent dissociation of speech and singing in Parkinson’s disease suggests that music could be used to facilitate expressive linguistic prosody.
Pitch discrimination tasks typically engage the superior temporal gyrus and the right inferior frontal gyrus. It is currently unclear whether these regions are equally involved in the processing of incongruous notes in melodies, which requires the representation of musical structure (tonality) in addition to pitch discrimination. To address this question, 14 participants completed two tasks while undergoing functional magnetic resonance imaging: one in which they had to identify a pitch change in a series of non-melodic repeating tones, and a second in which they had to identify an incongruous note in a tonal melody. In both tasks, the deviants activated the right superior temporal gyrus. A contrast between deviants in the melodic task and deviants in the non-melodic task (melodic > non-melodic) revealed additional activity in the right inferior parietal lobule. Activation in the inferior parietal lobule likely reflects processes related to the maintenance of tonal pitch structure in working memory during pitch discrimination.