Concept: Cognitive linguistics
We contrasted the predictive power of three measures of semantic richness (number of features, NF; contextual dispersion, CD; and a novel measure, number of semantic neighbors, NSN) for a large set of concrete and abstract concepts on lexical decision and naming tasks. NSN (but not NF) facilitated processing for abstract concepts, while NF (but not NSN) facilitated processing for the most concrete concepts, consistent with claims that linguistic information is more relevant for abstract concepts in early processing. Additionally, converging evidence from two datasets suggests that when NSN and CD are controlled for, the features that most facilitate processing are those associated with a concept's physical characteristics and real-world contexts. These results suggest that rich linguistic contexts (many semantic neighbors) facilitate early activation of abstract concepts, whereas concrete concepts benefit more from rich physical contexts (many associated objects and locations).
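As a rough illustration of how a neighbor count like NSN might be derived from distributional data, the sketch below counts the words whose co-occurrence vectors exceed a cosine-similarity threshold with a target word's vector. The function name, the threshold, and the toy vectors are all hypothetical stand-ins, not the study's actual procedure.

```python
import numpy as np

def count_semantic_neighbors(word, vectors, threshold=0.4):
    """Count how many other words' co-occurrence vectors exceed a
    cosine-similarity threshold with the target word's vector."""
    target = vectors[word]
    count = 0
    for other, vec in vectors.items():
        if other == word:
            continue
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > threshold:
            count += 1
    return count

# Toy co-occurrence vectors (illustrative only).
vectors = {
    "justice": np.array([0.9, 0.1, 0.3]),
    "law":     np.array([0.8, 0.2, 0.4]),
    "truth":   np.array([0.7, 0.2, 0.2]),
    "banana":  np.array([0.0, 0.9, 0.1]),
}
print(count_semantic_neighbors("justice", vectors))  # "law" and "truth" qualify
```

On this toy data, "justice" has two neighbors while "banana" has none, mirroring the idea that abstract concepts with many close linguistic neighbors score higher on NSN.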
In this paper, we propose a cognitive semantic approach to represent part-whole relations. We base our proposal on the theory of conceptual spaces, focusing on prototypical structures in part-whole relations. Prototypical structures are not accounted for in traditional mereological formalisms. In our account, parts and wholes are represented in distinct conceptual spaces; parts are joined to form wholes in a structure space. The structure space allows systematic similarity judgments between wholes, taking into consideration shared parts and their configurations. A point in the structure space denotes a particular part structure; regions in the space represent different general types of part structures. We argue that the structure space can represent prototype effects: structural types are formed around typical arrangements of parts. We also show how the structure space captures the variations in part structure of a given concept across different domains. In addition, we discuss how some taxonomies of part-whole relations can be understood within our framework.
According to the feature-based model of semantic memory, concepts are described by a set of semantic features that contribute, with different weights, to the meaning of a concept. Interestingly, this theoretical framework has introduced numerous dimensions to describe semantic features. Recently, we proposed a new parameter to measure the importance of a semantic feature for the conceptual representation: semantic significance. Here, with speeded verification tasks, we tested the predictive value of our index and investigated the relative roles of conceptual and featural dimensions on the participants' performance. The results showed that semantic significance is a good predictor of participants' verification latencies and suggested that it efficiently captures the salience of a feature for the computation of the meaning of a given concept. Therefore, we suggest that semantic significance can be considered an effective index of the importance of a feature in a given conceptual representation. Moreover, we propose that it may have straightforward implications for feature-based models of semantic memory, as an important additional factor for understanding conceptual representation.
Cognitive linguists claim that verb-particle (VP) constructions are compositional and analyzable, and that the particles contribute to the overall meaning in the form of image schemas. This article examined this claim with a behavioral experiment in which participants were asked to judge the sensibility of short sentences primed by image-schematic pictures. Results showed that for sentences containing spatial VP constructions, latencies followed the order "agreement primes < neutral primes < disagreement primes", while for sentences containing non-spatial VP constructions, the order was "neutral primes < agreement primes < disagreement primes". This suggests that the activation of the corresponding image schemas influences both types of VP constructions, providing new evidence for the embodied account of language and thought. The different processing patterns between the spatial and non-spatial VP constructions are also discussed within the theoretical framework of Construction Grammar.
The prevailing approach to the neuroscientific study of concepts is to characterize the neural pattern evoked by a given concept, averaging over any variation that might occur across multiple retrieval attempts (e.g., across time, tasks, or people). This approach, which diverges substantially from how conceptual processing is studied with other methods, treats all variation as noise. Here, our goal is to determine whether variation in the neural patterns evoked by semantic retrieval of a given concept is more than just measurement error and instead reflects contextual variability. We measured each concept's semantic variability (SV), the diversity of its semantic contexts, by analyzing its word frequency and co-occurrence statistics in large text corpora. To measure neural variability, we conducted an fMRI study and sampled the neural activity associated with each concept when it appeared in three separate, randomized contexts. We predicted that concepts with low SV would exhibit uniform activation patterns across stimulus presentations, whereas concepts with high SV would exhibit more dynamic representations over time. We observed that a concept's SV score predicted its corresponding neural variability. This finding supports a flexible, distributed organization of semantic memory, in which both a concept's meaning and its neural activity patterns vary continuously across contexts.
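One simple way a contextual-diversity score of this kind could be operationalized is as the mean pairwise cosine distance among the context vectors collected for a word's occurrences. The sketch below is an assumption for illustration only; the study's corpus-based SV measure is more involved.

```python
import numpy as np
from itertools import combinations

def semantic_variability(context_vectors):
    """Mean pairwise cosine distance among a word's per-occurrence
    context vectors; higher values indicate more diverse contexts."""
    if len(context_vectors) < 2:
        return 0.0
    dists = []
    for a, b in combinations(context_vectors, 2):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        dists.append(1.0 - cos)
    return float(np.mean(dists))

# A word that always appears in the same kind of context scores low;
# one whose contexts differ scores high (toy vectors, illustrative only).
uniform = [np.array([1.0, 0.0])] * 3
diverse = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
```

Under this toy operationalization, `uniform` scores 0.0 and `diverse` scores well above it, matching the intuition that high-SV concepts occur in heterogeneous contexts.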
Biomedical information and knowledge, both structured and unstructured, stored in different repositories can be semantically connected to form a hybrid knowledge network. Computing relatedness between concepts in such a network, and discovering valuable but implicit information or knowledge from it effectively and efficiently, is of paramount importance for precision medicine and a major challenge facing the biomedical research community.
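A common (if simplistic) starting point for relatedness over such a network is a path-based score: concepts connected by shorter paths count as more related. The sketch below, with a hypothetical graph fragment and a 1/(1 + path length) score, is a generic baseline, not the system this work describes.

```python
from collections import deque

def relatedness(graph, a, b):
    """Score two concepts as 1 / (1 + shortest-path length) in an
    undirected adjacency-list graph; 0.0 if no path exists."""
    if a == b:
        return 1.0
    seen = {a}
    frontier = deque([(a, 0)])
    while frontier:
        node, depth = frontier.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor == b:
                return 1.0 / (2 + depth)  # path length is depth + 1
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return 0.0

# Toy knowledge-network fragment (illustrative only).
graph = {
    "aspirin": ["cyclooxygenase"],
    "cyclooxygenase": ["aspirin", "inflammation"],
    "inflammation": ["cyclooxygenase"],
}
```

Here the implicit aspirin-inflammation link, two hops through a shared intermediate concept, receives a lower but nonzero score, which is the kind of indirect association such networks aim to surface.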
Conceptual processing may not be restricted to the mind. The health concept has been metaphorically associated with an "up" bodily posture. Perceptual Symbol Systems (PSS) theory suggests that this association is underpinned by bodily states that occur during learning and become instantiated as the concept. Thus, the aim of this study was to examine whether processing related to the health concept is promoted by priming the bodily state of looking upwards.
Embodied cognition holds that abstract concepts are grounded in perceptual-motor simulations. If a given embodied metaphor maps onto a spatial representation, then thinking of that concept should bias the allocation of attention. In this study, we used positive and negative self-esteem words to examine two properties of conceptual cueing. First, we tested the orientation-specificity hypothesis, which predicts that conceptual cues should selectively activate certain spatial axes (in this case, valenced self-esteem concepts should activate vertical space) rather than any spatial continuum. Second, we tested whether conceptual cueing requires semantic processing, or whether it can be achieved with shallow visual processing of the cue words. Participants viewed centrally presented words denoting high or low self-esteem traits (e.g., brave, timid) before detecting a target above or below the cue in the vertical condition, or to the left or right of the word in the horizontal condition. Participants were faster to detect targets when their location was compatible with the valence of the word cues, but only in the vertical condition. Moreover, this effect was observed when participants processed the semantics of the word, but not when they processed its orthography. The results show that conceptual cueing by spatial metaphors is orientation-specific, and that explicit consideration of the word cues' semantics is required for conceptual cueing to occur.
We explored the acceptability and feasibility of safer conception methods among HIV-affected couples in Uganda.
- IEEE Transactions on Visualization and Computer Graphics
Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.
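The seed-based expansion step described above can be sketched roughly as follows: score each candidate word by similarity to the mean seed vector and, in the spirit of the bipolar model, subtract its similarity to a mean vector of irrelevant words. The embeddings, names, and scoring below are hypothetical stand-ins, not ConceptVector's actual method.

```python
import numpy as np

def cosine(a, b):
    n = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / n) if n else 0.0

def build_concept(seeds, irrelevant, vocab_vectors, top_k=3):
    """Rank vocabulary words by similarity to the mean seed vector
    minus similarity to the mean irrelevant vector (bipolar-style)."""
    pos = np.mean([vocab_vectors[w] for w in seeds], axis=0)
    neg = (np.mean([vocab_vectors[w] for w in irrelevant], axis=0)
           if irrelevant else np.zeros_like(pos))
    scores = {w: cosine(v, pos) - cosine(v, neg)
              for w, v in vocab_vectors.items()
              if w not in seeds and w not in irrelevant}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy 2-d "embeddings" (illustrative only).
vocab_vectors = {
    "happy":  np.array([1.0, 0.0]),
    "sad":    np.array([0.0, 1.0]),
    "joyful": np.array([0.9, 0.1]),
    "gloomy": np.array([0.1, 0.9]),
    "table":  np.array([0.5, 0.5]),
}
```

With "happy" as the seed and "sad" marked irrelevant, "joyful" ranks first while "gloomy" falls to the bottom, illustrating how specifying irrelevant words counteracts polysemy-driven false positives.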