Concept: Sensory substitution
Echolocation is the ability to use sound echoes to infer spatial information about the environment. Some blind people have developed extraordinary proficiency in echolocation using mouth clicks. The first step of human biosonar is the transmission (mouth click) and the subsequent reception of the resultant sound through the ear. Existing head-related transfer function (HRTF) databases describe the reception of the resultant sound. For the current report, we collected a large database of click emissions from three blind people expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the current report provides the first description of the spatial distribution (i.e. beam pattern) of human expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not available before. Our data show that transmission levels are fairly constant within a 60° cone emanating from the mouth, but that levels drop gradually at wider angles, more so than for speech. In terms of spectro-temporal features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies of 2-4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations of 3-15 ms and peak frequencies of 2-8 kHz, which were based on less detailed measurements. Based on our measurements we propose to model transmissions as a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for each echolocator. These results are a step towards developing computational models of human biosonar. For example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test model-based hypotheses about behaviour. The data we present here suggest similar research opportunities within the context of human echolocation.
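The proposed transmission model (a sum of monotones under a decaying exponential envelope, with modified-cardioid angular attenuation) can be sketched as follows. The frequencies, amplitudes, decay constant, and cardioid mixing parameter below are illustrative placeholders, not the per-echolocator parameters the report provides:

```python
import math

def click_waveform(t, freqs_hz, amps, decay_s):
    # Sum of pure tones ("monotones") under a decaying exponential
    # envelope, as proposed for modelling click transmissions.
    envelope = math.exp(-t / decay_s)
    tones = sum(a * math.sin(2 * math.pi * f * t)
                for f, a in zip(freqs_hz, amps))
    return envelope * tones

def cardioid_gain(theta_rad, s=0.5):
    # Modified-cardioid angular attenuation: gain 1 on-axis,
    # falling off smoothly at wider angles.  The mixing parameter
    # s is an assumed parameterisation, not a published value.
    return (1 - s) + s * math.cos(theta_rad)

fs = 44_100                              # sample rate (Hz)
n_samples = int(0.003 * fs)              # ~3 ms, matching measured durations
# Peak energy near 3 kHz with a weaker component at 10 kHz:
click = [click_waveform(n / fs, [3_000.0, 10_000.0], [1.0, 0.3], 0.001)
         for n in range(n_samples)]
```

Multiplying each sample of `click` by `cardioid_gain(theta)` then gives the emission as received at angle `theta` from the mouth axis.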
Relatedly, the data provide a basis for developing synthetic models of human echolocation, which could be virtual (i.e. simulated) or real (i.e. loudspeakers and microphones), and which will help to clarify the link between physical principles and human behaviour.
Sensory substitution devices (SSDs) aim to compensate for the loss of a sensory modality, typically vision, by converting information from the lost modality into stimuli in a remaining modality. “The vOICe” is a visual-to-auditory SSD that encodes images taken by a camera worn by the user into “soundscapes” such that experienced users can extract information about their surroundings. Here we investigated how much detail is resolvable during the early induction stages by testing the acuity of blindfolded, naïve sighted vOICe users. Initial performance was well above chance. Participants who took the test twice, as a form of minimal training, showed a marked improvement on the second test. Acuity was slightly, but not significantly, impaired when participants wore a camera and judged letter orientations “live”. A positive correlation was found between participants' musical training and their acuity. The relationship with auditory expertise via musical training, together with the lack of a relationship with visual imagery, suggests that early use of an SSD draws primarily on the mechanisms of the sensory modality being used rather than on those of the modality being substituted. If vision is lost, audition represents the highest-bandwidth sensory channel of those remaining. The level of acuity found here, and the fact that it was achieved by naïve users with very little experience in sensory substitution, is promising.
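A simplified sketch of the kind of image-to-sound mapping the vOICe uses (a left-to-right column scan, with a pixel's vertical position setting the pitch of a tone and its brightness setting the tone's loudness) may help make "soundscapes" concrete. The scan time, sample rate, and frequency range below are illustrative, not the device's actual settings:

```python
import math

def image_to_soundscape(image, scan_s=1.0, fs=8_000,
                        f_lo=500.0, f_hi=5_000.0):
    # image: rows of brightness values in [0, 1]; row 0 is the top.
    # Columns are scanned left to right; each pixel contributes a
    # sine tone whose frequency rises with height in the image and
    # whose amplitude equals the pixel brightness.
    n_rows, n_cols = len(image), len(image[0])
    col_len = int(scan_s * fs / n_cols)          # samples per column
    out = []
    for c in range(n_cols):
        for n in range(col_len):
            t = n / fs
            s = sum(image[r][c] *
                    math.sin(2 * math.pi *
                             (f_lo + (f_hi - f_lo) *
                              (n_rows - 1 - r) / (n_rows - 1)) * t)
                    for r in range(n_rows))
            out.append(s / n_rows)               # normalise amplitude
    return out

# A 4x4 image with a single bright pixel in the second column:
img = [[0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
sound = image_to_soundscape(img, scan_s=0.1)
```

In this toy example the soundscape is silent while the empty first column is scanned, then a single tone appears whose pitch encodes the bright pixel's height.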
Certain blind individuals have learned to interpret the echoes of self-generated sounds to perceive the structure of objects in their environment. The current work examined how far the influence of this unique form of sensory substitution extends by testing whether echolocation-induced representations of object size could influence weight perception. A small group of echolocation experts made tongue clicks or finger snaps toward cubes of varying sizes and weights before lifting them. These echolocators experienced a robust size-weight illusion. This experiment provides the first demonstration of a sensory substitution technique whereby the substituted sense influences conscious perception through an intact sense.
Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., the vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naïve sighted participants just as well as by intensively trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones) in the interpretation of patterns by SS, especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand the capabilities of blind users.
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system’s ability to suppress distracting sound reflections, whereas echolocation would benefit from “unsuppressing” these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted age-matched (mean age = 54 y), and 42 sighted young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group’s superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also increased ability to unsuppress lagging clicks. This may be related to the blind person’s experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people’s experience may offset age-related decline in ITD sensitivity.
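The two binaural cues probed here can be illustrated with toy estimators: the ILD as the level ratio of the two ear signals in dB, and the ITD as the lag that maximises their cross-correlation. This is a minimal sketch for intuition, not the psychoacoustic procedure used in the study; the signals and lag values are invented:

```python
import math

def ild_db(left, right):
    # Inter-aural level difference in dB: ratio of RMS levels of
    # the two ear signals (positive = left ear louder).
    rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
    return 20 * math.log10(rms(left) / rms(right))

def itd_samples(left, right, max_lag=8):
    # Inter-aural time difference as the lag (in samples) that
    # maximises the cross-correlation of the two ear signals;
    # positive = left signal leads (source toward the left).
    def xcorr(lag):
        return sum(left[n - lag] * right[n]
                   for n in range(len(right))
                   if 0 <= n - lag < len(left))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# A click reaching the left ear first (3-sample lead) and louder
# (right ear attenuated by half):
click = [0.0] * 10 + [1.0, 0.6, 0.3] + [0.0] * 27
left = click
right = [0.5 * v for v in ([0.0] * 3 + click[:-3])]
```

For this example the estimators recover an ITD of 3 samples and an ILD of about 6 dB, i.e. a source to the listener's left.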
Some people can echolocate by making sonar emissions (e.g., mouth clicks, finger snaps, foot shuffling, humming, or cane tapping) and listening to the returning echoes. To date there are no statistics available about how many blind people use echolocation, but anecdotal reports in the literature suggest that perhaps 20-30% of totally blind people may use it, suggesting that echolocation affords broad functional benefits. Consistent with the notion that blind individuals benefit from the use of echolocation, previous research conducted under controlled experimental conditions has shown that echolocation improves blind people’s spatial sensing ability. The current study investigated whether there is also evidence for functional benefits of echolocation in real life. To address this question we conducted an online survey in which thirty-seven blind people participated. Linear regression analyses of the survey data revealed that, while statistically controlling for participants' gender, age, level of visual function, general health, employment status, level of education, Braille skill, and use of other mobility means, people who use echolocation have higher salaries and higher mobility in unfamiliar places than people who do not. The majority of our participants (34 of 37) use the long cane, and all participants who reported echolocating also reported using the long cane. This suggests that the benefit of echolocation we found might be conditional on the long cane being used as well. The investigation was correlational in nature and thus cannot be used to determine causality. In addition, the sample was small (N = 37), and one should be cautious when generalizing the current results to the population. The data, however, are consistent with the idea that echolocation offers real-life advantages for blind people and that echolocation may be involved in people's successful adaptation to vision loss.
Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using “soundscapes”, i.e. sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind participants activated the VWFA specifically and selectively during the processing of letter soundscapes, relative both to textures and visually complex object categories and to mental-imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology.
Sensory substitution devices aim at replacing or assisting one or several functions of a deficient sensory modality by means of another sensory modality. Despite the numerous studies and research programs devoted to their development and integration, sensory substitution devices have failed to live up to their goal of allowing one to “see with the skin” (White et al., 1970) or to “see with the brain” (Bach-y-Rita et al., 2003). These somewhat peremptory claims, as well as the research conducted so far, are based on an implicit perceptual paradigm. This perceptual assumption posits an equivalence between using a sensory substitution device and perceiving through a particular sensory modality. Our aim is to provide an alternative model, which defines sensory substitution as being closer to culturally implemented cognitive extensions of existing perceptual skills, such as reading. In this article, we show why the analogy with reading provides a better explanation of the actual findings, that is, both the positive results achieved and the limitations observed across the field of research on sensory substitution. The parallel with the most recent two-route and interactive models of reading (e.g., Dehaene et al., 2005) generates a radically new way of approaching these results by stressing the dependence of integration on the existing perceptual-semantic route. In addition, the present perspective enables us to generate innovative research questions and specific predictions that set the stage for future work.
- Wiley Interdisciplinary Reviews: Cognitive Science
Bats and dolphins are known for their ability to use echolocation: they emit bursts of sound and listen to the echoes that bounce back to detect objects in their environment. What is less well known is that some blind people have learned to do the same thing, making mouth clicks, for example, and using the returning echoes from those clicks to sense obstacles and objects of interest in their surroundings. The current review explores some of the research that has examined human echolocation and the changes that have been observed in the brains of echolocation experts. We also discuss potential applications and assistive technology based on echolocation. Blind echolocation experts can sense small differences in the location of objects, differentiate between objects of various sizes and shapes, and even between objects made of different materials, just by listening to the echoes reflected from mouth clicks. Echolocation may thus enable some blind people to do things that are otherwise thought to be impossible without vision, potentially providing them with a high degree of independence in their daily lives, and demonstrating that echolocation can serve as an effective mobility strategy in the blind. Neuroimaging has shown that the processing of echoes activates brain regions in blind echolocators that would normally support vision in the sighted brain, and that the patterns of these activations are modulated by the information carried by the echoes. This work is shedding new light on just how plastic the human brain is. For further resources related to this article, please visit the WIREs website.