SciCombinator

Discover the most talked about and latest scientific content & concepts.

Concept: Semiotics

59

Language and speech are the primary source of data for psychiatrists to diagnose and treat mental disorders. In psychosis, the very structure of language can be disturbed, including semantic coherence (e.g., derailment and tangentiality) and syntactic complexity (e.g., concreteness). Subtle disturbances in language are evident in schizophrenia even prior to first psychosis onset, during prodromal stages. Using computer-based natural language processing analyses, we previously showed that, among English-speaking clinical (e.g., ultra) high-risk youths, baseline reduction in semantic coherence (the flow of meaning in speech) and in syntactic complexity could predict subsequent psychosis onset with high accuracy. Herein, we aimed to cross-validate these automated linguistic analytic methods in a second, larger risk cohort, also English-speaking, and to discriminate speech in psychosis from normal speech. We identified an automated machine-learning speech classifier - comprising decreased semantic coherence, greater variance in that coherence, and reduced usage of possessive pronouns - that had 83% accuracy in predicting psychosis onset (intra-protocol), 79% cross-validated accuracy in predicting psychosis onset in the original risk cohort (cross-protocol), and 72% accuracy in discriminating the speech of recent-onset psychosis patients from that of healthy individuals. The classifier was highly correlated with previously identified manual linguistic predictors. Our findings support the utility and validity of automated natural language processing methods for characterizing disturbances in semantics and syntax across stages of psychotic disorder. The next steps are to apply these methods in larger risk cohorts, including in languages other than English, to further test reproducibility and to identify sources of variability. This technology has the potential to improve prediction of psychosis outcome among at-risk youths and to identify linguistic targets for remediation and preventive intervention. More broadly, automated linguistic analysis can be a powerful tool for diagnosis and treatment across neuropsychiatry.

Concepts: Linguistics, Language, Semantics, Schizophrenia, Psychiatry, Semiotics, Syntax, Natural language
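
As a rough illustration of the features the classifier above combines, here is a minimal Python sketch that computes sentence-to-sentence semantic coherence, the variance of that coherence, and a possessive-pronoun rate for a transcript. It assumes an arbitrary word-embedding lookup embed() (the abstract does not name the underlying semantic model) and is a sketch of the general idea, not the authors' pipeline.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    def coherence_features(sentences, embed):
        # Mean sentence vector per sentence from a word-embedding lookup `embed`
        # (placeholder: the abstract does not specify which semantic model was used).
        vecs = [np.mean([embed(w) for w in s.split()], axis=0) for s in sentences]
        # First-order semantic coherence: similarity between adjacent sentences.
        sims = [cosine(a, b) for a, b in zip(vecs, vecs[1:])]
        # Crude possessive-pronoun rate; a real pipeline would use POS tagging.
        possessives = {"my", "mine", "your", "yours", "his", "her", "hers",
                       "its", "our", "ours", "their", "theirs"}
        tokens = [w.lower().strip(".,!?") for s in sentences for w in s.split()]
        poss_rate = sum(t in possessives for t in tokens) / max(len(tokens), 1)
        return {"mean_coherence": float(np.mean(sims)),
                "coherence_variance": float(np.var(sims)),
                "possessive_rate": poss_rate}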

47

Prior work decoding linguistic meaning from imaging data has been largely limited to concrete nouns, using similar stimuli for training and testing, from a relatively small number of semantic categories. Here we present a new approach for building a brain decoding system in which words and sentences are represented as vectors in a semantic space constructed from massive text corpora. By efficiently sampling this space to select training stimuli shown to subjects, we maximize the ability to generalize to new meanings from limited imaging data. To validate this approach, we train the system on imaging data of individual concepts, and show it can decode semantic vector representations from imaging data of sentences about a wide variety of both concrete and abstract topics from two separate datasets. These decoded representations are sufficiently detailed to distinguish even semantically similar sentences, and to capture the similarity structure of meaning relationships between sentences.

Concepts: Linguistics, Language, Grammar, Ontology, Semantics, Metaphysics, Semiotics, Decoder
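
The decoding scheme described above can be illustrated generically as a linear map from imaging patterns to semantic vectors, followed by matching against candidate sentence vectors by cosine similarity. The Python sketch below uses closed-form ridge regression for the map; the regression form and the regularization constant lam are assumptions for illustration, not the paper's actual pipeline.

    import numpy as np

    def fit_ridge(X, Y, lam=1.0):
        # Closed-form ridge regression from imaging patterns X (trials x voxels)
        # to semantic vectors Y (trials x dimensions).
        v = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(v), X.T @ Y)

    def decode(W, x_new, candidates):
        # Predict a semantic vector for a new pattern, then rank candidate
        # sentence vectors (rows of `candidates`) by cosine similarity.
        y_hat = x_new @ W
        sims = candidates @ y_hat / (
            np.linalg.norm(candidates, axis=1) * np.linalg.norm(y_hat) + 1e-12)
        return int(np.argmax(sims)), sims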

43

The claim that Eskimo languages have words for different types of snow is well-known among the public, but has been greatly exaggerated through popularization and is therefore viewed with skepticism by many scholars of language. Despite the prominence of this claim, to our knowledge the line of reasoning behind it has not been tested broadly across languages. Here, we note that this reasoning is a special case of the more general view that language is shaped by the need for efficient communication, and we empirically test a variant of it against multiple sources of data, including library reference works, Twitter, and large digital collections of linguistic and meteorological data. Consistent with the hypothesis of efficient communication, we find that languages that use the same linguistic form for snow and ice tend to be spoken in warmer climates, and that this association appears to be mediated by lower communicative need to talk about snow and ice. Our results confirm that variation in semantic categories across languages may be traceable in part to local communicative needs. They suggest moreover that despite its awkward history, the topic of “words for snow” may play a useful role as an accessible instance of the principle that language supports efficient communication.

Concepts: Scientific method, Linguistics, Language, Empiricism, Semantics, Semiotics, Reason, Language family
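
A bare-bones version of the association reported above could be tested as in the Python sketch below: given, per language, a flag for whether 'snow' and 'ice' share one linguistic form and the mean temperature where the language is spoken (both hypothetical inputs here), a permutation test asks whether colexifying languages tend to sit in warmer climates. The paper's fuller analysis, including the mediation by communicative need, is not reproduced.

    import numpy as np

    def colexification_climate_test(colexifies, mean_temp, n_perm=10000, seed=0):
        # One entry per language: colexifies[i] is True if the language uses the
        # same form for 'snow' and 'ice'; mean_temp[i] is the mean temperature
        # where it is spoken (hypothetical inputs for illustration).
        colexifies = np.asarray(colexifies, dtype=bool)
        mean_temp = np.asarray(mean_temp, dtype=float)
        observed = mean_temp[colexifies].mean() - mean_temp[~colexifies].mean()
        rng = np.random.default_rng(seed)
        count = 0
        for _ in range(n_perm):
            shuffled = rng.permutation(colexifies)
            diff = mean_temp[shuffled].mean() - mean_temp[~shuffled].mean()
            if diff >= observed:
                count += 1
        # One-sided p-value for "colexifying languages are spoken in warmer climates".
        return observed, count / n_perm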

37

In this paper we explore the results of a large-scale online game called ‘the Great Language Game’, in which people listen to an audio speech sample and make a forced-choice guess about the identity of the language from two or more alternatives. The data include 15 million guesses from 400 audio recordings of 78 languages. We investigate which languages are confused for which in the game, and whether this correlates with the similarities that linguists identify between languages, including shared lexical items, similar sound inventories and established historical relationships. As expected, we find that players are more likely to confuse two languages that are objectively more similar. We also investigate factors that may affect players' ability to accurately select the target language, such as how many people speak the language, how often the language is mentioned in written materials and the economic power of the target language community. We find that such non-linguistic factors also affect players' ability to identify the target: for example, languages with wider ‘global reach’ are more often identified correctly. This suggests that both linguistic and cultural knowledge influence the perception and recognition of languages and their similarity.

Concepts: Cognition, Linguistics, Language, Similarity, Semiotics, Translation, Word game
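
The confusion analysis described above reduces to tallying how often each language is mistaken for each other language and relating those rates to an independent similarity measure. The Python sketch below assumes the guesses are available as (true language, guessed language) pairs and that a pairwise similarity score is supplied separately; neither the data format nor the statistics matches the paper's actual analysis.

    from collections import Counter
    from scipy.stats import spearmanr

    def confusion_rates(guesses):
        # guesses: iterable of (true_language, guessed_language) pairs.
        # For each (true, guessed) pair with true != guessed, return the share
        # of trials on that true language that drew that particular wrong guess.
        guesses = list(guesses)
        totals = Counter(truth for truth, _ in guesses)
        errors = Counter((truth, guess) for truth, guess in guesses if truth != guess)
        return {pair: n / totals[pair[0]] for pair, n in errors.items()}

    def confusion_vs_similarity(conf, similarity):
        # Spearman correlation between confusion rate and an externally supplied
        # similarity score, over the language pairs present in both mappings.
        pairs = [p for p in conf if p in similarity]
        return spearmanr([conf[p] for p in pairs], [similarity[p] for p in pairs])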

33

Recent studies have shown that concurrent physical activity enhances learning of a completely unfamiliar L2 vocabulary compared with learning it in a static condition. In this paper we report a study whose aim is twofold: to test for possible positive effects of physical activity when L2 learning has already reached some level of proficiency, and to test whether the assumed better performance during physical activity is limited to the linguistic level probed at training (i.e., L2 vocabulary tested by means of a Word-Picture Verification task) or whether it also extends to the sentence level (tested by means of a Sentence Semantic Judgment Task). The results show that Chinese speakers with basic knowledge of English benefited from physical activity while learning a set of new words. Furthermore, their better performance also emerged at the sentential level, as shown by their performance on the Semantic Judgment Task. Finally, an interesting temporal asymmetry between the lexical and the sentential levels emerges: the difference between the experimental and control groups appeared from the first testing session at the lexical level, but only after several weeks at the sentential level.

Concepts: Psychology, Linguistics, Language, Cycling, Test method, Learning, Knowledge, Semiotics

32

Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.

Concepts: Linguistics, Language, Semantics, Logic, Sign language, Semiotics, Natural language, August Schleicher

31

Research on the mental representation of human language has convincingly shown that sign languages are structured similarly to spoken languages. However, whether the same neurobiology underlies the online construction of complex linguistic structures in sign and speech remains unknown. To investigate this question with maximally controlled stimuli, we studied the production of minimal two-word phrases in sign and speech. Signers and speakers viewed the same pictures during magnetoencephalography recording and named them with semantically identical expressions. For both signers and speakers, phrase building engaged left anterior temporal and ventromedial cortices with similar timing, despite different linguistic articulators. Thus the neurobiological similarity of sign and speech goes beyond gross measures such as lateralization: the same fronto-temporal network achieves the planning of structured linguistic expressions.

Concepts: Neuroscience, Linguistics, Sentence, Language, Grammar, Sign language, Semiotics, Natural language

31

Indirect forms of speech, such as sarcasm, jocularity (joking), and ‘white lies’ told to spare another’s feelings, occur frequently in daily life and are a problem for many clinical populations. During social interactions, information about the literal or nonliteral meaning of a speaker unfolds simultaneously in several communication channels (e.g., linguistic, facial, vocal, and body cues); however, to date many studies have employed uni-modal stimuli, for example focusing only on the visual modality, limiting the generalizability of these results to everyday communication. Much of this research also neglects key factors for interpreting speaker intentions, such as verbal context and the relationship of social partners. Relational Inference in Social Communication (RISC) is a newly developed (English-language) database composed of short video vignettes depicting sincere, jocular, sarcastic, and white-lie social exchanges between two people. The stimuli carefully manipulated the social relationship between communication partners (e.g., boss/employee, couple) and the availability of contextual cues (e.g., preceding conversations, physical objects), while controlling for major differences in the linguistic content of matched items. Here, we present initial perceptual validation data (N = 31) on a corpus of 920 items. Overall accuracy for identifying speaker intentions was above 80% correct, and our results show that both relationship type and verbal context influence the categorization of literal and nonliteral interactions, underscoring the importance of these factors in research on speaker intentions. We believe that RISC will prove a highly constructive tool for future research on social cognition, interpersonal communication, and the interpretation of speaker intentions in both healthy adults and clinical populations.

Concepts: Psychology, Sociology, Perception, Communication, Semiotics, Lie, Rhetoric, Joke
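
The validation reported above amounts to identification accuracy broken out by relationship type and context availability. The Python sketch below shows one way to compute such a breakdown from per-trial judgments; the field names are hypothetical and do not reflect the actual structure of the RISC release.

    from collections import defaultdict

    def accuracy_by_condition(trials):
        # trials: iterable of dicts with keys 'relationship', 'context',
        # 'intended' and 'judged' (hypothetical field names; the RISC release
        # defines its own labels). Returns proportion correct per cell.
        hits, totals = defaultdict(int), defaultdict(int)
        for t in trials:
            cell = (t["relationship"], t["context"])
            totals[cell] += 1
            hits[cell] += int(t["judged"] == t["intended"])
        return {cell: hits[cell] / totals[cell] for cell in totals}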

30

Rhythm is a central characteristic of music and speech, the most important domains of human communication using acoustic signals. Here, we investigated how rhythmical patterns in music are processed in the human brain, and, in addition, evaluated the impact of musical training on rhythm processing. Using fMRI, we found that deviations from a rule-based regular rhythmic structure activated the left planum temporale together with Broca’s area and its right-hemispheric homolog across subjects, that is, a network also crucially involved in the processing of harmonic structure in music and the syntactic analysis of language. Comparing the BOLD responses to rhythmic variations between professional jazz drummers and musical laypersons, we found that only highly trained rhythmic experts show additional activity in left-hemispheric supramarginal gyrus, a higher-order region involved in processing of linguistic syntax. This suggests an additional functional recruitment of brain areas usually dedicated to complex linguistic syntax processing for the analysis of rhythmical patterns only in professional jazz drummers, who are especially trained to use rhythmical cues for communication.

Concepts: Brain, Greek loanwords, Wernicke's area, Language, Grammar, Music, Semiotics, Syntax

28

PURPOSE: This study evaluated the effects of Enhanced Milieu Teaching (EMT; Hancock & Kaiser, 2006) blended with Joint Attention, Symbolic Play and Emotional Regulation (JASPER; Kasari, Freeman, & Paparella, 2006) on teaching spoken words and manual signs (Words + Signs) to young children with Down syndrome (DS). METHOD: Four toddlers with DS between the ages of 23 and 29 months were enrolled in a multiple-baseline-across-participants design. Following baseline, twenty 20- to 30-minute play-based treatment sessions occurred twice weekly. Spoken words and manual signs were modeled and prompted by a therapist using EMT/JASPER teaching strategies. Generalization to interactions with parents at home was assessed. RESULTS: There was a functional relation between the therapist’s implementation of EMT/JASPER Words + Signs and all four children’s use of signs during the intervention. Gradual increases in children’s use of spoken words occurred, but there was not a clear functional relation. All children generalized their use of signs to interactions with their parents at home. CONCLUSIONS: Infusing manual signs with verbal models within a framework of play, joint attention, and naturalistic language teaching appears to facilitate the development of expressive sign and word communication in young children with DS.

Concepts: Psychology, Language, Word, Down syndrome, Programming language, Semiotics, Language education, Language school