SciCombinator

Discover the latest and most talked-about scientific content and concepts.

Concept: Phonology

286

We present evidence that the geographic context in which a language is spoken may directly impact its phonological form. We examined the geographic coordinates and elevations of 567 language locations represented in a worldwide phonetic database. Languages with phonemic ejective consonants were found to occur closer to inhabitable regions of high elevation, when contrasted to languages without this class of sounds. In addition, the mean and median elevations of the locations of languages with ejectives were found to be comparatively high. The patterns uncovered surface on all major world landmasses, and are not the result of the influence of particular language families. They reflect a significant and positive worldwide correlation between elevation and the likelihood that a language employs ejective phonemes. In addition to documenting this correlation in detail, we offer two plausible motivations for its existence. We suggest that ejective sounds might be facilitated at higher elevations due to the associated decrease in ambient air pressure, which reduces the physiological effort required for the compression of air in the pharyngeal cavity, a unique articulatory component of ejective sounds. In addition, we hypothesize that ejective sounds may help to mitigate rates of water vapor loss through exhaled air. These explications demonstrate how a reduction of ambient air density could promote the usage of ejective phonemes in a given language. Our results reveal the direct influence of a geographic factor on the basic sound inventories of human languages.
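
As a rough illustration of the kind of statistic the study reports, the sketch below correlates elevation with a binary "has ejectives" indicator on simulated data. The dataset, the toy elevation-dependent probability, and the choice of a point-biserial correlation are assumptions made here for illustration; they are not the authors' code, data, or exact procedure.

```python
# Illustrative sketch only: simulated data and a point-biserial correlation,
# not the authors' dataset or statistical procedure.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)
n_languages = 567  # matches the sample size mentioned above; data are fake

# Hypothetical elevations (metres) and a toy elevation-dependent probability
# that a language's inventory includes ejectives.
elevation_m = rng.uniform(0, 4500, size=n_languages)
p_ejective = 1 / (1 + np.exp(-(elevation_m - 2500) / 800))
has_ejectives = (rng.random(n_languages) < p_ejective).astype(int)

# Correlation between a binary and a continuous variable.
r, p_value = pointbiserialr(has_ejectives, elevation_m)
print(f"point-biserial r = {r:.3f}, p = {p_value:.3g}")
```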

Concepts: Mayan languages, Phonology, Phoneme, Linguistics, Language, International Phonetic Alphabet, Sign language, Tlingit language

169

Evidence from previous psycholinguistic research suggests that phonological units such as phonemes have a privileged role during phonological planning in Dutch and English (aka the segment-retrieval hypothesis). However, the syllable-retrieval hypothesis previously proposed for Mandarin assumes that only the entire syllable unit (without the tone) can be prepared in advance in speech planning. Using Cantonese Chinese as a test case, the present study was conducted to investigate whether the syllable-retrieval hypothesis can be applied to other Chinese spoken languages. In four implicit priming (form-preparation) experiments, participants were asked to learn various sets of prompt-response di-syllabic word pairs and to utter the corresponding response word upon seeing each prompt. The response words in a block were either phonologically related (homogeneous) or unrelated (heterogeneous). Participants' naming responses were significantly faster in the homogeneous than in the heterogeneous conditions when the response words shared the same word-initial syllable (without the tone) (Exps. 1 and 4) or body (Exps. 3 and 4), but not when they shared merely the same word-initial phoneme (Exp. 2). Furthermore, the priming effect observed in the syllable-related condition was significantly larger than that in the body-related condition (Exp. 4). Although the observed syllable priming effects and the null effect of word-initial phoneme are consistent with the syllable-retrieval hypothesis, the body-related (sub-syllabic) priming effects obtained in this Cantonese study are not. These results suggest that the syllable-retrieval hypothesis is not generalizable to all Chinese spoken languages and that both syllable and sub-syllabic constituents are legitimate planning units in Cantonese speech production.
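
For readers unfamiliar with the form-preparation paradigm, the sketch below shows how an implicit-priming effect is commonly quantified: the mean naming latency in heterogeneous blocks minus the mean latency in homogeneous blocks. The latencies are invented for illustration and are not the study's data.

```python
# Hypothetical naming latencies (ms); the priming effect is the speed-up in
# homogeneous blocks relative to heterogeneous blocks.
import numpy as np

rt_heterogeneous = np.array([702, 715, 698, 720, 710])
rt_homogeneous_syllable = np.array([660, 671, 655, 668, 662])
rt_homogeneous_body = np.array([684, 690, 679, 688, 683])

syllable_priming = rt_heterogeneous.mean() - rt_homogeneous_syllable.mean()
body_priming = rt_heterogeneous.mean() - rt_homogeneous_body.mean()

print(f"syllable-related priming effect: {syllable_priming:.1f} ms")
print(f"body-related priming effect:     {body_priming:.1f} ms")
```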

Concepts: Phonology, Phonotactics, Phoneme, Language, Morpheme, Tone, Chinese language, Allophone

167

Phonology and syntax represent two layers of sound combination central to language’s expressive power. Comparative animal studies represent one approach to understanding the origins of these combinatorial layers. Traditionally, phonology, where meaningless sounds form words, has been considered a simpler combination than syntax, and has thus been expected to be more common in animals. A linguistically informed review of animal call sequences demonstrates that phonology in animal vocal systems is rare, whereas syntax is more widespread. In the light of this and the absence of phonology in some languages, we hypothesize that syntax, present in all languages, evolved before phonology.

Concepts: Bird, Phonology, Linguistics, Language, Semiotics, Natural language, Historical linguistics, August Schleicher

142

Two calling melodies of Polish were investigated: the routine call, used to call someone for an everyday reason, and the urgent call, which conveys disapproval of the addressee’s actions. A Discourse Completion Task was used to elicit the two melodies from Polish speakers using twelve names from one to four syllables long; there were three names per syllable count, and speakers produced three tokens of each name with each melody. The results, based on eleven speakers, show that the routine calling melody consists of a low F0 stretch followed by a rise-fall-rise; the urgent calling melody, on the other hand, is a simple rise-fall. Systematic differences were found in the scaling and alignment of tonal targets: the routine call showed late alignment of the accentual pitch peak, and in most instances lower scaling of targets. The accented vowel was also affected, being overall louder in the urgent call. Based on the data and comparisons with other Polish melodies, we analyze the routine call as LH* !H-H% and the urgent call as H* L-L%. We discuss the results and our analysis in light of recent findings on calling melodies in other languages, and explore their repercussions for intonational phonology and the modeling of intonation.
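
To make "scaling" and "alignment" concrete, the sketch below measures the F0 peak height and its timing relative to an accented-vowel onset on a synthetic contour. The contour shape, segmentation, and sampling rate are assumptions for illustration, not the Polish recordings or the authors' analysis pipeline.

```python
# Synthetic rise-fall F0 contour; peak "scaling" = peak height in Hz,
# "alignment" = peak time relative to an assumed accented-vowel onset.
import numpy as np

fs = 100                                  # F0 samples per second (assumed)
t = np.arange(0.0, 0.8, 1 / fs)           # seconds
f0 = 180 + 60 * np.exp(-((t - 0.45) ** 2) / (2 * 0.08 ** 2))  # Hz

accented_vowel_onset = 0.30               # seconds, hypothetical segmentation
peak_idx = int(np.argmax(f0))
scaling_hz = f0[peak_idx]
alignment_ms = (t[peak_idx] - accented_vowel_onset) * 1000

print(f"peak scaling: {scaling_hz:.1f} Hz")
print(f"peak alignment: {alignment_ms:.0f} ms after vowel onset")
```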

Concepts: Phonology, Phonotactics, Syllable, Language, International Phonetic Alphabet, Stress, Vowel, Mora

116

Mastering multiple languages is an increasingly important ability in the modern world; furthermore, multilingualism may affect human learning abilities. Here, we test how the brain’s capacity to rapidly form new representations for spoken words is affected by prior individual experience in non-native language acquisition. Formation of new word memory traces is reflected in a neurophysiological response increase during a short exposure to novel lexicon. Therefore, we recorded changes in electrophysiological responses to phonologically native and non-native novel word-forms during a perceptual learning session, in which novel stimuli were repeatedly presented to healthy adults in either ignore or attend conditions. We found that a larger number of previously acquired languages and an earlier average age of acquisition (AoA) predicted a greater response increase to novel non-native word-forms. This suggests that early and extensive language experience is associated with greater neural flexibility for acquiring novel words with unfamiliar phonology. Conversely, later AoA was associated with a stronger response increase for phonologically native novel word-forms, indicating better tuning of neural linguistic circuits to native phonology. The results suggest that individual language experience has a strong effect on the neural mechanisms of word learning, and that it interacts with the phonological familiarity of the novel lexicon.
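
As a schematic of the kind of relationship reported above, the sketch below regresses a simulated "response increase" measure on the number of acquired languages and average AoA. The data, effect sizes, and use of ordinary least squares are illustrative assumptions, not the study's actual model or measurements.

```python
# Simulated participants: regress a "response increase" measure on the number
# of acquired languages and the average age of acquisition (AoA).
import numpy as np

rng = np.random.default_rng(1)
n = 30
n_acquired_languages = rng.integers(1, 6, n)
aoa_years = rng.uniform(3, 20, n)
response_increase = (0.4 * n_acquired_languages
                     - 0.05 * aoa_years
                     + rng.normal(0, 0.5, n))   # toy effect sizes

X = np.column_stack([np.ones(n), n_acquired_languages, aoa_years])
beta, *_ = np.linalg.lstsq(X, response_increase, rcond=None)
print(f"intercept={beta[0]:.2f}, languages={beta[1]:+.2f}, AoA={beta[2]:+.2f}")
```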

Concepts: Psychology, Neuroscience, Phonology, Linguistics, Language, Word, Noam Chomsky, Roman Jakobson

84

The ability to generate new meaning by rearranging combinations of meaningless sounds is a fundamental component of language. Although animal vocalizations often comprise combinations of meaningless acoustic elements, evidence that rearranging such combinations generates functionally distinct meaning is lacking. Here, we provide evidence for this basic ability in calls of the chestnut-crowned babbler (Pomatostomus ruficeps), a highly cooperative bird of the Australian arid zone. Using acoustic analyses, natural observations, and a series of controlled playback experiments, we demonstrate that this species uses the same acoustic elements (A and B) in different arrangements (AB or BAB) to create two functionally distinct vocalizations. Specifically, the addition or omission of a contextually meaningless acoustic element at a single position generates a phoneme-like contrast that is sufficient to distinguish the meaning between the two calls. Our results indicate that the capacity to rearrange meaningless sounds in order to create new signals occurs outside of humans. We suggest that phonemic contrasts represent a rudimentary form of phoneme structure and a potential early step towards the generative phonemic system of human language.

Concepts: Human, Bird, Phonology, Acoustics, Italian language, Sound, Sign language, Element

62

Human listeners excel at forming high-level social representations about each other, even from the briefest of utterances. In particular, pitch is widely recognized as the auditory dimension that conveys most of the information about a speaker’s traits, emotional states, and attitudes. While past research has primarily looked at the influence of mean pitch, almost nothing is known about how intonation patterns, i.e., finely tuned pitch trajectories around the mean, may determine social judgments in speech. Here, we introduce an experimental paradigm that combines state-of-the-art voice transformation algorithms with psychophysical reverse correlation and show that two of the most important dimensions of social judgments, a speaker’s perceived dominance and trustworthiness, are driven by robust and distinguishing pitch trajectories in short utterances like the word “Hello.” These trajectories remained remarkably stable whether male or female listeners judged male or female speakers. These findings reveal a unique communicative adaptation that enables listeners to infer social traits regardless of speakers' physical characteristics, such as sex and mean pitch. By characterizing how any given individual’s mental representations may differ from this generic code, the method introduced here opens avenues to explore dysprosody and social-cognitive deficits in disorders like autism spectrum and schizophrenia. In addition, once derived experimentally, these prototypes can be applied to novel utterances, thus providing a principled way to modulate personality impressions in arbitrary speech signals.
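
The logic of psychophysical reverse correlation can be sketched as follows: present many utterances with randomly perturbed pitch contours, then subtract the mean contour of rejected stimuli from the mean contour of chosen stimuli to estimate the trajectory driving the judgment. The code below is a minimal simulation of that idea; the simulated listener, the contour parameterization, and the numbers are assumptions, not the authors' voice-transformation pipeline or data.

```python
# Toy reverse-correlation simulation: random pitch contours, a simulated
# listener, and a first-order "kernel" (chosen minus rejected mean contour).
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_points = 500, 6               # 6 pitch samples across "Hello"
contours = rng.normal(0, 50, (n_trials, n_points))   # pitch shifts in cents

# Simulated listener: calls an utterance "dominant" when its pitch falls
# from the first to the last sample.
judged_dominant = contours[:, -1] < contours[:, 0]

kernel = (contours[judged_dominant].mean(axis=0)
          - contours[~judged_dominant].mean(axis=0))
print("estimated dominance kernel (cents):", np.round(kernel, 1))
```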

Concepts: Male, Female, Gender, Sex, Phonology, Tone, Emotion, Speech processing

40

The ontogeny of linguistic functions in the human brain remains elusive. Although some auditory capacities are described before term, whether and how such immature cortical circuits might process speech are unknown. Here we used functional optical imaging to evaluate the cerebral responses to syllables at the earliest age at which cortical responses to external stimuli can be recorded in humans (28- to 32-wk gestational age). At this age, cortical organization into layers is not yet complete. Many neurons are still located in the subplate and in the process of migrating to their final location. Nevertheless, we observed several points of similarity with the adult linguistic network. First, whereas syllables elicited larger right than left responses, the posterior temporal region escaped this general pattern, showing faster and more sustained responses over the left than over the right hemisphere. Second, discrimination responses to a change of phoneme (ba vs. ga) and a change of human voice (male vs. female) were already present and involved inferior frontal areas, even in the youngest infants (29-wk gestational age). Third, whereas both types of changes elicited responses in the right frontal region, the left frontal region only reacted to a change of phoneme. These results demonstrate a sophisticated organization of perisylvian areas at the very onset of cortical circuitry, 3 mo before term. They emphasize the influence of innate factors on regions involved in linguistic processing and social communication in humans.

Concepts: Nervous system, Neuron, Brain, Male, Left-wing politics, Human brain, Cerebral cortex, Phonology

28

OBJECTIVE: A growing body of evidence suggests that individuals with dyslexia perceive speech using allophonic rather than phonemic units and are thus sensitive to phonetic variations that are actually irrelevant in the ambient language. This study investigated speech perception difficulties in adults with dyslexia using behavioural and neural measurements with stimuli along a place-of-articulation continuum with well-defined allophonic boundaries. Adults without dyslexia served as control participants. METHODS: Categorical perception of a /bə - də/ place-of-articulation continuum was evaluated using both identification and discrimination tasks. In addition to these behavioural measures, mismatch negativity (MMN) was recorded for stimuli that came from either similar or different phoneme categories. RESULTS: The adults with dyslexia exhibited less consistent labelling than controls, but no heightened sensitivity to allophonic contrasts was observed at the behavioural level. Neural measurements revealed that stimuli from different phoneme categories elicited MMNs in both the adults with dyslexia and controls, whereas stimuli from the same category elicited an MMN in the adults with dyslexia only. CONCLUSION: The finding that adults with dyslexia have heightened sensitivity to allophonic contrasts in the form of neural activation supports the allophonic explanation of dyslexia. SIGNIFICANCE: Sensitivity to allophonic contrasts may be a valuable marker for dyslexia.
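
For context, the mismatch negativity is typically quantified as the difference between the averaged response to deviant stimuli and the averaged response to standard stimuli, measured in a post-stimulus window. The sketch below illustrates that computation on synthetic signals; the sampling rate, analysis window, and injected deflection are assumptions, not the study's recordings or parameters.

```python
# Synthetic ERP trials: the MMN is the deviant-minus-standard difference wave,
# summarized here as its mean amplitude in an assumed 150-250 ms window.
import numpy as np

fs = 500                                  # Hz, assumed sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)          # seconds relative to stimulus onset

rng = np.random.default_rng(3)
standard_trials = rng.normal(0, 1, (200, t.size))
deviant_trials = rng.normal(0, 1, (50, t.size))
# Inject a negative deflection around 200 ms into the deviant trials.
deviant_trials += -2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))

difference_wave = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)
window = (t >= 0.15) & (t <= 0.25)
print(f"mean difference-wave amplitude, 150-250 ms: "
      f"{difference_wave[window].mean():.2f} (arbitrary units)")
```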

Concepts: Psychology, Phonology, Phoneme, Perception, Sense, Phonetics, Speech perception, Categorical perception

28

English-speaking children with Autism Spectrum Disorders (ASD) are less capable of using prosodic cues such as intonation for irony comprehension. Prosodic cues, in particular intonation, are relatively restricted in Cantonese, while sentence-final particles (SFPs) may be used for this pragmatic function. This study investigated the use of prosodic cues and SFPs in irony comprehension in Cantonese-speaking children with and without ASD. Thirteen children with ASD (8;3-12;9) were language-matched with 13 typically developing (TD) peers. Sixteen stories containing an ironic remark were constructed by manipulating prosodic cues and SFPs. Participants had to judge the speaker’s belief and intention. Both groups performed similarly well in judging the speaker’s belief. For the speaker’s intention, the TD group relied more on SFPs, whereas the ASD group performed significantly more poorly and did not rely on either cue. SFPs may play a salient role in Cantonese irony comprehension. The differences between the two groups are discussed with reference to the literature on theory of mind.

Concepts: Phonology, Autism, Prosody, Asperger syndrome, Intonation, Sociological and cultural aspects of autism, Play, Theory of mind