The articular release of the metacarpophalangeal joint produces a typical cracking sound, resulting in what is commonly referred to as the cracking of knuckles. Despite over sixty years of research, the source of the knuckle cracking sound continues to be debated because the experimental evidence has remained inconclusive, a consequence of the limited temporal resolution of non-invasive physiological imaging techniques. To support the available experimental data and shed light on the source of the cracking sound, we have developed a mathematical model of the events leading to the generation of the sound. The model resolves the dynamics of a collapsing cavitation bubble in the synovial fluid inside a metacarpophalangeal joint during an articular release. The acoustic signature from the resulting bubble dynamics is shown to be consistent in both magnitude and dominant frequency with experimental measurements in the literature and with our own experiments, thus lending support to cavitation bubble collapse as the source of the cracking sound. Finally, the model also shows that only a partial collapse of the bubble is needed to replicate the experimentally observed acoustic spectra, thus allowing for bubbles to persist following the generation of sound, as has been reported in recent experiments.
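The kind of bubble dynamics the model resolves can be sketched with the classical Rayleigh-Plesset equation for a spherical bubble in a viscous liquid. This is an illustrative stand-in, not the authors' model: every parameter value below (fluid properties, initial radius, internal gas pressure, polytropic exponent) is an assumed placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rayleigh-Plesset dynamics of a collapsing bubble in a synovial-fluid-like
# liquid. All parameter values are illustrative assumptions, not the
# authors' fitted values.
RHO = 1000.0       # fluid density, kg/m^3
P_INF = 101.325e3  # far-field pressure after articular release, Pa
SIGMA = 0.07       # surface tension, N/m
MU = 0.01          # dynamic viscosity, Pa*s
R0 = 1e-3          # initial bubble radius, m
P_G0 = 2.0e3       # initial gas pressure inside the bubble, Pa
KAPPA = 1.4        # polytropic exponent (adiabatic gas)

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = P_G0 * (R0 / R) ** (3 * KAPPA)  # polytropic gas compression
    rhs = (p_gas - P_INF - 2 * SIGMA / R - 4 * MU * Rdot / R) / RHO
    Rddot = (rhs - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 2e-4), [R0, 0.0],
                max_step=1e-7, rtol=1e-6)
R_min = sol.y[0].min()
# The internal gas arrests the collapse before R reaches zero, i.e. the
# collapse is partial, consistent with bubbles persisting after the sound.
print(f"minimum radius: {R_min / R0:.3f} R0")
```

The rapid wall deceleration at minimum radius is what radiates the acoustic pulse; the radiated pressure can be estimated from the far-field term p(r, t) ~ (RHO / r) d(R^2 Rdot)/dt.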
We demonstrate a new optical approach to generating high-frequency (>15 MHz) and high-amplitude focused ultrasound, which can be used for non-invasive ultrasound therapy. A nano-composite film of carbon nanotubes (CNTs) and elastomeric polymer is formed on concave lenses and used as an efficient optoacoustic source, owing to the high optical absorption of the CNTs and rapid heat transfer to the polymer upon excitation by pulsed laser irradiation. The CNT-coated lenses can generate unprecedented optoacoustic pressures of >50 MPa peak positive on a tight focal spot measuring 75 μm laterally and 400 μm axially. This pressure amplitude is remarkably high for this frequency regime, producing pronounced shock effects and non-thermal pulsed cavitation at the focal zone. We demonstrate that the optoacoustic lens can be used for micro-scale ultrasonic fragmentation of solid materials and for single-cell surgery, detaching individual cells from substrates and from neighboring cells.
Delphinids produce large numbers of short duration, broadband echolocation clicks which may be useful for species classification in passive acoustic monitoring efforts. A challenge in echolocation click classification is to overcome the many sources of variability to recognize underlying patterns across many detections. An automated unsupervised network-based classification method was developed to simulate the approach a human analyst uses when categorizing click types: clusters of similar clicks were identified by incorporating multiple click characteristics (spectral shape and inter-click interval distributions) to distinguish within-type from between-type variation, and identify distinct, persistent click types. Once click types were established, an algorithm for classifying novel detections using existing clusters was tested. The automated classification method was applied to a dataset of 52 million clicks detected across five monitoring sites over two years in the Gulf of Mexico (GOM). Seven distinct click types were identified, one of which is known to be associated with an acoustically identifiable delphinid (Risso’s dolphin) and six of which are not yet identified. All types occurred at multiple monitoring locations, but the relative occurrence of types varied, particularly between continental shelf and slope locations. Automatically identified click types from autonomous seafloor recorders without verifiable species identification were compared with clicks detected on sea-surface towed hydrophone arrays in the presence of visually identified delphinid species. These comparisons suggest potential species identities for the animals producing some echolocation click types. The network-based classification method presented here is effective for rapid, unsupervised delphinid click classification across large datasets in which the click types may not be known a priori.
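The network-based idea of grouping clicks whose characteristics are mutually similar can be illustrated with a toy sketch: treat mean click spectra as nodes, connect pairs whose cosine similarity exceeds a threshold, and read click types off the connected components. The published method additionally uses inter-click-interval distributions and is far more elaborate; the threshold and synthetic data below are assumptions for illustration only.

```python
import numpy as np

def cluster_click_types(spectra, sim_thresh=0.95):
    """Toy network-based clustering: connected components of a
    cosine-similarity graph over click spectra. Simplified stand-in
    for the published method, with an assumed threshold."""
    X = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
    adj = (X @ X.T) >= sim_thresh          # adjacency of the similarity graph
    n = len(spectra)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack = [i]                         # depth-first flood fill
        labels[i] = current
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adj[j]):
                if labels[k] < 0:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

# Two well-separated synthetic "click types" (4-bin spectra):
rng = np.random.default_rng(0)
type_a = np.abs(rng.normal([5, 1, 0, 0], 0.1, size=(10, 4)))
type_b = np.abs(rng.normal([0, 0, 1, 5], 0.1, size=(10, 4)))
labels = cluster_click_types(np.vstack([type_a, type_b]))
```

On real data, clusters would additionally need a persistence criterion before being accepted as distinct click types, as the abstract describes.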
How bats adapt their sonar behavior to accommodate the noisiness of a crowded day roost is a mystery. Some bats change their pulse acoustics to enhance the distinction between their own and another bat’s echoes, but additional mechanisms are needed to explain the bat sonar system’s exceptional resilience to jamming by conspecifics. Variable pulse repetition rate strategies offer one potential solution to this dynamic problem, but precisely how changes in pulse rate could improve sonar performance in social settings is unclear. Here we show that bats decrease their emission rates as population density increases, following a pattern that reflects a cumulative mutual suppression of each other’s pulse emissions. Playback of artificially generated echolocation pulses similarly slowed emission rates, demonstrating that suppression was mediated by hearing the pulses of other bats. Slower emission rates did not support an antiphonal emission strategy but did reduce the relative proportion of emitted pulses that overlapped with another bat’s emissions, reducing the relative rate of mutual interference. The prevalence of acoustic interference among bats was empirically determined to be a linear function of population density and mean emission rate. Consequently, as group size increased, small reductions in emission rates spread across the group partially mitigated the increase in interference rate. Drawing on lessons learned from communications networking theory, we show how modest decreases in pulse emission rates can significantly increase the net information throughput of the shared acoustic space, thereby improving sonar efficiency for all individuals in a group. We propose that an automated acoustic suppression of pulse emissions triggered by bats hearing each other’s emissions dynamically optimizes sonar efficiency for the entire group.
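The networking-theory intuition can be sketched with a pure-ALOHA-style analysis of a shared acoustic channel. This is an analogy, not the authors' analysis: it assumes each bat emits as an independent Poisson process, and all numeric values (group size, rates, pulse duration) are illustrative placeholders.

```python
import math

def clear_pulse_rate(n_bats, rate_hz, pulse_s=0.005):
    """Expected rate of interference-free pulses per bat.

    Pure-ALOHA analogy: a pulse is lost if any other bat's pulse starts
    within one pulse duration of it, giving a vulnerable window of
    2 * pulse_s. Illustrative model, not the study's formulation.
    """
    interfering_load = (n_bats - 1) * rate_hz * pulse_s
    return rate_hz * math.exp(-2.0 * interfering_load)

def group_throughput(n_bats, rate_hz):
    """Total clear (non-overlapped) pulses per second across the group."""
    return n_bats * clear_pulse_rate(n_bats, rate_hz)

# A modest slowing of per-bat emission rate raises net group throughput:
fast = group_throughput(20, 10.0)  # 20 bats at 10 pulses/s each
slow = group_throughput(20, 7.0)   # same group after modest slowing
print(f"group of 20: {fast:.1f} clear pulses/s at 10 Hz vs {slow:.1f} at 7 Hz")
```

The exponential cost of overlap means the offered load has an optimum: past it, each extra pulse destroys more information than it adds, which is why small group-wide rate reductions can benefit every individual.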
Echolocating bats use the time elapsed from biosonar pulse emission to the arrival of the echo (defined as echo-delay) to assess target distance. Target distance is represented in the brain by delay-tuned neurons that are classified as either “heteroharmonic” or “homoharmonic.” Heteroharmonic neurons respond more strongly to pulse-echo pairs in which the timing of the pulse is given by the fundamental biosonar harmonic while the timing of echoes is provided by one (or several) of the higher order harmonics. On the other hand, homoharmonic neurons are tuned to the echo delay between similar harmonics in the emitted pulse and echo. It is generally accepted that heteroharmonic computations are advantageous over homoharmonic computations; i.e., heteroharmonic neurons receive information from call and echo in different frequency bands, which helps to avoid jamming between pulse and echo signals. Heteroharmonic neurons have been found in two species of the family Mormoopidae (Pteronotus parnellii and Pteronotus quadridens) and in Rhinolophus rouxi. Recently, it was proposed that heteroharmonic target-range computations are a primitive feature of the genus Pteronotus that was preserved in the evolution of the genus. Here, we review recent findings on the evolution of echolocation in Mormoopidae, and try to link those findings to the evolution of the heteroharmonic computation strategy (HtHCS). We stress the hypothesis that the ability to perform heteroharmonic computations evolved separately from the ability to use long constant-frequency echolocation calls, high duty cycle echolocation, and Doppler shift compensation. Also, we present the idea that heteroharmonic computations might have been advantageous for categorizing prey size, hunting eared insects, and living in large conspecific colonies. We make five testable predictions that might help future investigations clarify the evolution of heteroharmonic echolocation in Mormoopidae and other families.
To address the strong interference that platform vibration and flow noise cause for MEMS vector hydrophones in applications, this paper describes a differential MEMS vector hydrophone that can simultaneously receive acoustic signals and reject acceleration signals. Theoretical and simulation analyses have been carried out. A prototype of the differential MEMS vector hydrophone was then fabricated and tested using a standing wave tube and a vibration platform. The test results show that this hydrophone has a high acoustic sensitivity, Mv = -185 dB (@ 500 Hz, 0 dB reference 1 V/μPa), which is almost the same as previous MEMS vector hydrophones, and a low acceleration sensitivity, -58 dB (0 dB reference 1 V/g), a decrease of 17 dB compared with the previous MEMS vector hydrophone. The differential MEMS vector hydrophone essentially meets the requirements of acoustic vector detection when it is rigidly fixed to a working platform, which lays the foundation for engineering applications of MEMS vector hydrophones.
Under natural conditions, animals encounter a barrage of sensory information from which they must select and interpret biologically relevant signals. Active sensing can facilitate this process by engaging motor systems in the sampling of sensory information. The echolocating bat serves as an excellent model to investigate the coupling between action and sensing because it adaptively controls both the acoustic signals used to probe the environment and movements to receive echoes at the auditory periphery. We report here that the echolocating bat controls the features of its sonar vocalizations in tandem with the positioning of the outer ears to maximize acoustic cues for target detection and localization. The bat’s adaptive control of sonar vocalizations and ear positioning occurs on a millisecond timescale to capture spatial information from arriving echoes, as well as on a longer timescale to track target movement. Our results demonstrate that purposeful control over sonar sound production and reception can serve to improve acoustic cues for localization tasks. This finding also highlights the general importance of movement to sensory processing across animal species. Finally, our discoveries point to important parallels between spatial perception by echolocation and vision.
Echolocation is the ability to use sound-echoes to infer spatial information about the environment. Some blind people have developed extraordinary proficiency in echolocation using mouth-clicks. The first step of human biosonar is the transmission (mouth click) and subsequent reception of the resultant sound through the ear. Existing head-related transfer function (HRTF) databases provide descriptions of reception of the resultant sound. For the current report, we collected a large database of click emissions from three blind people expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the current report provides the first ever description of the spatial distribution (i.e. beam pattern) of human expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not available before. Our data show that transmission levels are fairly constant within a 60° cone emanating from the mouth, but levels drop gradually at further angles, more so than for speech. In terms of spectro-temporal features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies of 2-4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations of 3-15 ms and peak frequencies of 2-8 kHz, which were based on less detailed measurements. Based on our measurements we propose to model transmissions as a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for each echolocator. These results are a step towards developing computational models of human biosonar. For example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test model-based hypotheses about behaviour. The data we present here suggest similar research opportunities within the context of human echolocation.
Relatedly, the data provide a basis for developing synthetic models of human echolocation that could be virtual (i.e. simulated) or real (i.e. loudspeakers, microphones), and that will help clarify the link between physical principles and human behaviour.
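The proposed transmission model, a sum of monotones under a decaying exponential with modified-cardioid angular attenuation, can be sketched as follows. The frequencies, amplitudes, decay constant, and cardioid blend below are illustrative placeholders, not the per-echolocator fitted parameters the report provides.

```python
import numpy as np

def click_waveform(t, freqs_hz=(2400.0, 4200.0, 10000.0),
                   amps=(1.0, 0.8, 0.3), decay_s=8e-4):
    """Synthetic mouth-click: sum of monotones modulated by a decaying
    exponential. All parameter values are assumed placeholders."""
    envelope = np.exp(-t / decay_s)
    tones = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs_hz, amps))
    return envelope * tones

def cardioid_gain(theta_rad, k=0.5):
    """Modified-cardioid angular attenuation; k blends an omnidirectional
    term with a cardioid term (k is an assumed shape parameter)."""
    return k + (1 - k) * 0.5 * (1 + np.cos(theta_rad))

fs = 96000                                   # sampling rate, Hz
t = np.arange(int(0.003 * fs)) / fs          # ~3 ms click, as measured
on_axis = click_waveform(t)
off_axis = cardioid_gain(np.deg2rad(60.0)) * on_axis  # attenuated at 60°
```

A synthetic-echolocation simulation would convolve such a click, scaled by the cardioid gain toward each reflector, with the environment's impulse response and then apply an HRTF for reception.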
Most marine mammal strandings coincident with naval sonar exercises have involved Cuvier’s beaked whales (Ziphius cavirostris). We recorded animal movement and acoustic data on two tagged Ziphius and obtained the first direct measurements of behavioural responses of this species to mid-frequency active (MFA) sonar signals. Each recording included a 30-min playback (one 1.6-s simulated MFA sonar signal repeated every 25 s); one whale was also incidentally exposed to MFA sonar from distant naval exercises. Whales responded strongly to playbacks at low received levels (RLs; 89-127 dB re 1 µPa): after ceasing normal fluking and echolocation, they swam rapidly and silently away, extending both dive duration and subsequent non-foraging interval. Distant sonar exercises (78-106 dB re 1 µPa) did not elicit such responses, suggesting that context may moderate reactions. The observed responses to playback occurred at RLs well below current regulatory thresholds; equivalent responses to operational sonars could elevate stranding risk and reduce foraging efficiency.
Recordings of narwhal (Monodon monoceros) echolocation signals were made with a linear 16-hydrophone array at eleven sites in the pack ice of Baffin Bay, West Greenland, in 2013. An average -3 dB beam width of 5.0° makes the narwhal click the most directional biosonar signal reported for any species to date. The beam shows a dorsal-ventral asymmetry, with a narrower beam above the beam axis. This may be an evolutionary advantage for toothed whales, reducing echoes from the water surface or sea ice. Source level measurements show narwhal click intensities of up to 222 dB pp re 1 μPa, with a mean apparent source level of 215 dB pp re 1 μPa. During ascents and descents, narwhals scan in the vertical plane with their sonar beam. This study provides valuable reference sonar parameters for narwhals and for the use of acoustic monitoring in the Arctic.