We present evidence that the geographic context in which a language is spoken may directly impact its phonological form. We examined the geographic coordinates and elevations of 567 language locations represented in a worldwide phonetic database. Languages with phonemic ejective consonants were found to occur closer to inhabitable regions of high elevation than languages without this class of sounds. In addition, the mean and median elevations of the locations of languages with ejectives were found to be comparatively high. The patterns uncovered surface on all major world landmasses and are not the result of the influence of particular language families. They reflect a significant, positive worldwide correlation between elevation and the likelihood that a language employs ejective phonemes. In addition to documenting this correlation in detail, we offer two plausible motivations for its existence. We suggest that ejective sounds might be facilitated at higher elevations due to the associated decrease in ambient air pressure, which reduces the physiological effort required for the compression of air in the pharyngeal cavity, a unique articulatory component of ejective sounds. We also hypothesize that ejective sounds may help to mitigate rates of water vapor loss through exhaled air. These explanations demonstrate how a reduction in ambient air density could promote the use of ejective phonemes in a given language. Our results reveal the direct influence of a geographic factor on the basic sound inventories of human languages.
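The core statistical claim, a positive association between elevation and the probability that a language has ejectives, can be sketched as a logistic regression. The data below are entirely synthetic (only the sample size of 567 locations is taken from the abstract); the effect size and elevation range are illustrative assumptions, not the study's values.

```python
import numpy as np

# Synthetic illustration: does elevation predict the presence of ejectives?
# Fit a simple logistic regression by plain gradient ascent (no sklearn needed).
rng = np.random.default_rng(0)
n = 567  # number of language locations, as in the study

elevation_km = rng.uniform(0.0, 4.0, n)           # hypothetical elevations
true_logit = -2.0 + 1.2 * elevation_km            # assumed positive effect
has_ejectives = rng.random(n) < 1 / (1 + np.exp(-true_logit))

X = np.column_stack([np.ones(n), elevation_km])   # intercept + predictor
y = has_ejectives.astype(float)
w = np.zeros(2)
for _ in range(5000):                             # gradient ascent on log-likelihood
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.01 * X.T @ (y - p) / n

intercept, slope = w
print(f"fitted slope on elevation: {slope:.2f}")  # positive slope: ejectives likelier at altitude
```

A positive fitted slope on the elevation term is what a worldwide correlation of the kind the abstract reports would look like in this framing.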
Evidence from previous psycholinguistic research suggests that phonological units such as phonemes have a privileged role during phonological planning in Dutch and English (the segment-retrieval hypothesis). However, the syllable-retrieval hypothesis previously proposed for Mandarin assumes that only the entire syllable unit (without the tone) can be prepared in advance in speech planning. Using Cantonese Chinese as a test case, the present study investigated whether the syllable-retrieval hypothesis can be applied to other spoken Chinese languages. In four implicit priming (form-preparation) experiments, participants were asked to learn various sets of prompt-response disyllabic word pairs and to utter the corresponding response word upon seeing each prompt. The response words in a block were either phonologically related (homogeneous) or unrelated (heterogeneous). Participants' naming responses were significantly faster in the homogeneous than in the heterogeneous conditions when the response words shared the same word-initial syllable (without the tone) (Exps. 1 and 4) or body (Exps. 3 and 4), but not when they shared merely the same word-initial phoneme (Exp. 2). Furthermore, the priming effect observed in the syllable-related condition was significantly larger than that in the body-related condition (Exp. 4). Although the observed syllable priming effects and the null effect of word-initial phoneme are consistent with the syllable-retrieval hypothesis, the body-related (sub-syllabic) priming effects obtained in this Cantonese study are not. These results suggest that the syllable-retrieval hypothesis is not generalizable to all spoken Chinese languages and that both syllable and sub-syllabic constituents are legitimate planning units in Cantonese speech production.
This communication describes a glove capable of wirelessly translating the American Sign Language (ASL) alphabet into text displayable on a computer or smartphone. The key components of the device are strain sensors comprising a piezoresistive composite of carbon particles embedded in a fluoroelastomer. These sensors are integrated with a wearable electronic module consisting of digitizers, a microcontroller, and a Bluetooth radio. Finite-element analysis predicts a peak strain on the sensors of 5% when the knuckles are fully bent. Fatigue studies suggest that the sensors successfully detect the articulation of the knuckles even when bent to their maximal degree 1,000 times. In concert with an accelerometer and pressure sensors, the glove is able to translate all 26 letters of the ASL alphabet. Lastly, data taken from the glove are used to control a virtual hand; this application suggests new ways in which stretchable and wearable electronics can enable humans to interface with virtual environments. Critically, this system was constructed of components costing less than $100 and did not require chemical synthesis or access to a cleanroom. It can thus be used as a test bed for materials scientists to evaluate the performance of new materials and flexible and stretchable hybrid electronics.
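The final translation step, mapping a vector of flex-sensor readings to a letter, can be sketched as a nearest-centroid lookup against a per-letter calibration table. This is a hypothetical simplification for illustration, not the paper's actual decoding method, and the calibration values below are random placeholders.

```python
import numpy as np

# Hypothetical sketch: classify a 5-element vector of knuckle-flex readings
# (one per finger) by finding the nearest stored per-letter "key" vector.
rng = np.random.default_rng(3)
letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
# Assumed calibration table: one averaged sensor vector per ASL letter.
keys = {ltr: rng.random(5) for ltr in letters}

def classify(reading):
    """Return the letter whose stored key is closest (Euclidean) to the reading."""
    return min(keys, key=lambda ltr: np.linalg.norm(keys[ltr] - reading))

# A reading near the stored key for "B" should decode as "B".
noisy_b = keys["B"] + rng.normal(0, 0.01, 5)
print(classify(noisy_b))
```

In the real device, accelerometer and pressure-sensor channels would extend the reading vector so that letters with similar finger flexion (such as letters distinguished by hand orientation or motion) remain separable.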
- Clinical Neurophysiology: official journal of the International Federation of Clinical Neurophysiology
OBJECTIVE: A growing body of evidence suggests that individuals with dyslexia perceive speech using allophonic rather than phonemic units and are thus sensitive to phonetic variations that are actually irrelevant in the ambient language. This study investigated speech perception difficulties in adults with dyslexia using behavioural and neural measurements with stimuli along a place-of-articulation continuum with well-defined allophonic boundaries. Adults without dyslexia served as control participants. METHODS: Categorical perception of a /bə-də/ place-of-articulation continuum was evaluated using both identification and discrimination tasks. In addition to these behavioural measures, mismatch negativity (MMN) was recorded for stimuli that came from either similar or different phoneme categories. RESULTS: The adults with dyslexia exhibited less consistent labelling than controls, but no heightened sensitivity to allophonic contrasts was observed at the behavioural level. Neural measurements revealed that stimuli from different phoneme categories elicited MMNs in both the adults with dyslexia and controls, whereas stimuli from the same category elicited an MMN in the adults with dyslexia only. CONCLUSION: The finding that adults with dyslexia have heightened sensitivity to allophonic contrasts in the form of neural activation supports the allophonic explanation of dyslexia. SIGNIFICANCE: Sensitivity to allophonic contrasts may be a valuable marker for dyslexia.
Research on shared reading has shown positive results for children's literacy development in general and for deaf children specifically; however, reading techniques may differ between these two populations. Families with deaf children, especially those with deaf parents, often capitalize on their children's visual strengths rather than relying primarily on auditory cues. These techniques are believed to provide a foundation for their deaf children's literacy skills. This study examined 10 deaf mother/deaf child dyads with children between 3 and 5 years of age. Dyads were videotaped in their homes on at least two occasions reading books provided by the researcher. Descriptive analysis showed specifically how deaf mothers mediate between the two languages, American Sign Language (ASL) and English, while reading. These techniques can be replicated and taught to all parents of deaf children so that they can engage in more effective shared reading activities. Research has shown that shared reading, or the interaction of a parent and child with a book, is an effective way to promote language and literacy, vocabulary, grammatical knowledge, and metalinguistic awareness (Snow, 1983), making it critical for educators to promote shared reading activities at home between parent and child. Not all parents read to their children in the same way. For example, parents of deaf children may present the information in a book differently because signed languages are visual rather than spoken. In this vein, we can learn more about the specific connections deaf parents make to English print. Exploring the strategies deaf mothers use to link ASL to English print will provide educators with additional tools when working with all parents of deaf children.
This article includes a review of the literature on the benefits of shared reading activities for all children, the relationship between ASL and English skill development, and the techniques deaf parents use when reading with their deaf children. Following this review, we present a study of the specific techniques deaf parents use to bridge ASL to English as they read with their deaf children.
Dyslexia is characterized by difficulties in learning to read, and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training would generalize to the deep orthography of English remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from the visual to the auditory modality. In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimulus localization, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of word recognition and phonological decoding increased after playing AVG, but not non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, and an acceleration of visual-to-auditory attentional shifting, can directly translate into better reading in English-speaking children with dyslexia.
The authors analyzed the spellings of 179 U.S. children (ages 3 years, 2 months to 5 years, 6 months) who were prephonological spellers, in that they wrote using letters that did not reflect the phonemes in the target items. Supporting the idea that children use their statistical learning skills to learn about the outer form of writing before they begin to spell phonologically, older prephonological spellers showed more knowledge about English letter patterns than did younger prephonological spellers. The written productions of older prephonological spellers were rated by adults as more similar to English words than were the productions of younger prephonological spellers. The older children's spellings were also more wordlike on several objective measures, including length, variability of letters within words, and digram frequency.
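One of the objective wordlikeness measures mentioned, digram frequency, can be sketched as the mean frequency of a spelling's adjacent letter pairs against a reference count table. The tiny table below is an illustrative placeholder, not a real English corpus, and this is an assumed formulation rather than the authors' exact scoring procedure.

```python
from collections import Counter

# Hypothetical sketch of one "wordlikeness" measure: mean digram (letter-pair)
# frequency of a spelling, scored against counts from a reference corpus.
# The count table below is illustrative only.
digram_counts = Counter({"th": 50, "he": 45, "in": 40, "er": 38, "an": 35,
                         "re": 33, "nd": 30, "at": 28, "on": 27, "nt": 25})

def mean_digram_frequency(spelling):
    """Average reference-corpus count of the spelling's adjacent letter pairs."""
    spelling = spelling.lower()
    pairs = [spelling[i:i + 2] for i in range(len(spelling) - 1)]
    return sum(digram_counts[p] for p in pairs) / len(pairs)

print(mean_digram_frequency("the"))  # (50 + 45) / 2 = 47.5
```

A Counter returns 0 for unseen pairs, so implausible strings such as "xq" score low, which is the property that makes the measure track how wordlike a child's production is.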
Studies have shown that American Sign Language (ASL) fluency has a positive impact on deaf individuals' English reading, but the cognitive and cross-linguistic mechanisms permitting the mapping of a visual-manual language onto a sound-based language have yet to be elucidated. Fingerspelling, which represents English orthography with 26 distinct hand configurations, is an integral part of ASL and has been suggested to provide deaf bilinguals with important cross-linguistic links between sign language and orthography. Using a hierarchical multiple regression analysis, this study examined the relationship of age of ASL exposure, ASL fluency, and fingerspelling skill to reading fluency in deaf college-age bilinguals. After controlling for ASL fluency, fingerspelling skill significantly predicted reading fluency, revealing for the first time that fingerspelling, above and beyond ASL skills, contributes to reading fluency in deaf bilinguals. We suggest that fingerspelling (in the visual-manual modality) and reading (in the visual-orthographic modality) are mutually facilitating because they share the common underlying cognitive capacities of word decoding accuracy and automaticity of word recognition. The findings provide support for the hypothesis that the development of English reading proficiency may be facilitated through strengthening of the relationship among fingerspelling, sign language, and orthographic decoding en route to reading mastery, and may also reveal optimal approaches for reading instruction for deaf and hard of hearing children.
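The hierarchical regression logic here, entering fingerspelling skill after controlling for ASL fluency and checking whether it adds explained variance, can be sketched with ordinary least squares. All data below are synthetic; the sample size and effect sizes are assumptions for illustration, not the study's values.

```python
import numpy as np

# Synthetic sketch of the hierarchical regression: does fingerspelling skill
# predict reading fluency above and beyond ASL fluency?
rng = np.random.default_rng(1)
n = 120  # assumed sample size for illustration

asl = rng.normal(0, 1, n)                          # ASL fluency (z-scored)
fingerspelling = 0.5 * asl + rng.normal(0, 1, n)   # correlated with ASL fluency
reading = 0.4 * asl + 0.3 * fingerspelling + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared(asl[:, None], reading)                       # step 1: ASL only
r2_step2 = r_squared(np.column_stack([asl, fingerspelling]), reading)  # step 2: + fingerspelling
print(f"R^2 increment from fingerspelling: {r2_step2 - r2_step1:.3f}")
```

A reliably positive R^2 increment at step 2 is what "significantly predicted reading fluency after controlling for ASL fluency" amounts to in this framing (in practice one would also run an F-test on the increment).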
Reading has been shown to rely on a dorsal brain circuit involving the temporoparietal cortex (TPC) for grapheme-to-phoneme conversion of novel words (Pugh et al., 2001), and a ventral stream involving left occipitotemporal cortex (OTC) (in particular the so-called "visual word form area", VWFA) for visual identification of familiar words. In addition, portions of the inferior frontal cortex (IFC) have been posited to be an output of the dorsal reading pathway involved in phonology. While this dorsal versus ventral dichotomy for phonological and orthographic processing of words is widely accepted, it is not known whether these brain areas are strictly sensitive to orthographic or phonological information. Using an fMRI rapid adaptation technique, we probed the selectivity of the TPC, OTC, and IFC to orthographic and phonological features during single word reading. In two independent experiments using different task conditions in typically reading adults, we found that the TPC is exclusively sensitive to phonology and the VWFA in the OTC is exclusively sensitive to orthography. The dorsal IFC (BA 44), however, showed orthographic but not phonological selectivity. These results support the theory that reading involves a specific phonological-based temporoparietal region and a specific orthographic-based ventral occipitotemporal region. The dorsal IFC, however, was not sensitive to phonological processing, suggesting a more complex role for this region.
- Proceedings of the National Academy of Sciences of the United States of America
Worldwide patterns of genetic variation are driven by human demographic history. Here, we test whether this demographic history has left similar signatures on phonemes (sound units that distinguish meaning between words in languages) to those it has left on genes. We analyze, jointly and in parallel, phoneme inventories from 2,082 worldwide languages and microsatellite polymorphisms from 246 worldwide populations. On a global scale, both genetic distance and phonemic distance between populations are significantly correlated with geographic distance. Geographically close language pairs share significantly more phonemes than distant language pairs, whether or not the languages are closely related. The regional geographic axes of greatest phonemic differentiation correspond to axes of genetic differentiation, suggesting that there is a relationship between human dispersal and linguistic variation. However, the geographic distribution of phoneme inventory sizes does not follow the predictions of a serial founder effect during human expansion out of Africa. Furthermore, although geographically isolated populations lose genetic diversity via genetic drift, phonemes are not subject to drift in the same way: within a given geographic radius, languages that are relatively isolated exhibit more variance in number of phonemes than languages with many neighbors. This finding suggests that relatively isolated languages are more susceptible to phonemic change than languages with many neighbors. Within a language family, phoneme evolution along genetic, geographic, or cognate-based linguistic trees predicts similar ancestral phoneme states to those predicted from ancient sources. More genetic sampling could further elucidate the relative roles of vertical and horizontal transmission in phoneme evolution.
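Correlating two between-population distance matrices, as in the phonemic-versus-geographic comparison above, is commonly done with a Mantel-style permutation test, since matrix entries are not independent. The sketch below uses synthetic distances and an assumed number of populations; it illustrates the test's logic, not the study's actual data or pipeline.

```python
import numpy as np

# Synthetic sketch of a Mantel-style test: are two between-population
# distance matrices (e.g. geographic and phonemic) correlated beyond chance?
rng = np.random.default_rng(2)
n = 30  # assumed number of populations for illustration

coords = rng.random((n, 2))
geo = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
phon = 0.7 * geo + 0.3 * rng.random((n, n))  # synthetic "phonemic" distances
phon = (phon + phon.T) / 2                   # symmetrize

iu = np.triu_indices(n, k=1)                 # upper-triangle entries only

def mantel_r(a, b):
    """Pearson correlation between the upper triangles of two distance matrices."""
    return np.corrcoef(a[iu], b[iu])[0, 1]

observed = mantel_r(geo, phon)
perms = []
for _ in range(999):                         # permute population labels jointly
    order = rng.permutation(n)
    perms.append(mantel_r(geo, phon[np.ix_(order, order)]))
p_value = (1 + sum(r >= observed for r in perms)) / 1000
print(f"Mantel r = {observed:.2f}, p = {p_value:.3f}")
```

Permuting rows and columns together preserves each matrix's internal structure while breaking the pairing between matrices, which is what makes the null distribution valid for distance data.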