Concept: Pattern recognition
Spike pattern classification is a key topic in machine learning, computational neuroscience, and electronic device design. Here, we offer a new supervised learning rule based on Support Vector Machines (SVMs) to determine the synaptic weights of a leaky integrate-and-fire (LIF) neuron model for spike pattern classification. We compare the classification performance of this algorithm with that of other methods sharing the same conceptual framework. We consider the effect of postsynaptic potential (PSP) kernel dynamics on pattern separability, and we propose an extension of the method that decreases the computational load. The algorithm performs well in generalization tasks. We show that the peak value of spike pattern separability depends on the relation between PSP dynamics and spike pattern duration, and we propose a particular kernel that is well suited to fast computations and electronic implementations.
Uncovering the neural dynamics of facial identity processing, along with its representational basis, constitutes a major endeavor in the study of visual processing. To this end, here, we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support facial identity classification, face space estimation, visual feature extraction, and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50-650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.
High throughput screening determines the effects of many conditions on a given biological target. Currently, estimating the effects of those conditions on other targets requires either strong modeling assumptions (e.g. similarities among targets) or separate screens. Ideally, data-driven experimentation could be used to learn accurate models for many conditions and targets without doing all possible experiments. We have previously described an active machine learning algorithm that can iteratively choose small sets of experiments to learn models of multiple effects. We now show that, with no prior knowledge and with liquid handling robotics and automated microscopy under its control, this learner accurately learned the effects of 48 chemical compounds on the subcellular localization of 48 proteins while performing only 29% of all possible experiments. The results represent the first practical demonstration of the utility of active learning-driven biological experimentation in which the set of possible phenotypes is unknown in advance.
Evading detection by predators is crucial for survival. Camouflage is therefore a widespread adaptation, but despite substantial research effort our understanding of different camouflage strategies has relied predominantly on artificial systems and on experiments disregarding how camouflage is perceived by predators. Here we show, for the first time in a natural system, that the survival probability of wild animals is directly related to their level of camouflage as perceived by the visual systems of their main predators. Ground-nesting plovers and coursers flee as threats approach, and their clutches were more likely to survive when their egg contrast matched their surrounds. In nightjars - which remain motionless as threats approach - clutch survival depended on plumage pattern matching between the incubating bird and its surrounds. Our findings highlight the importance of pattern- and luminance-based camouflage properties, and the effectiveness of modern techniques in capturing the adaptive properties of visual phenotypes.
Pattern-based identity signatures are commonplace in the animal kingdom, but how they are recognized is poorly understood. Here we develop a computer vision tool for analysing visual patterns, NATUREPATTERNMATCH, which breaks new ground by mimicking visual and cognitive processes known to be involved in recognition tasks. We apply this tool to a long-standing question about the evolution of recognizable signatures. The common cuckoo (Cuculus canorus) is a notorious cheat that sneaks its mimetic eggs into nests of other species. Can host birds fight back against cuckoo forgery by evolving highly recognizable signatures? Using NATUREPATTERNMATCH, we show that hosts subjected to the best cuckoo mimicry have evolved the most recognizable egg pattern signatures. Theory predicts that effective pattern signatures should be simultaneously replicable, distinctive and complex. However, our results reveal that recognizable signatures need not incorporate all three of these features. Moreover, different hosts have evolved effective signatures in diverse ways.
Cluster analysis is aimed at classifying elements into categories on the basis of their similarity. Its applications range from astronomy to bioinformatics, bibliometrics, and pattern recognition. We propose an approach based on the idea that cluster centers are characterized by a higher density than their neighbors and by a relatively large distance from points with higher densities. This idea forms the basis of a clustering procedure in which the number of clusters arises intuitively, outliers are automatically spotted and excluded from the analysis, and clusters are recognized regardless of their shape and of the dimensionality of the space in which they are embedded. We demonstrate the power of the algorithm on several test cases.
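The procedure described above rests on two quantities per point: a local density and the distance to the nearest point of higher density; cluster centers combine high values of both. A minimal sketch in that spirit follows (the hard density cutoff `d_c` and the rank-by-product choice of centers are assumptions for illustration, not the authors' reference implementation):

```python
import math

def density_peaks(points, d_c):
    """rho_i = number of neighbors within d_c; delta_i = distance to the
    nearest point of higher density (or the max distance for the densest point)."""
    n = len(points)
    d = [[math.dist(p, q) for q in points] for p in points]
    rho = [sum(1 for j in range(n) if j != i and d[i][j] < d_c) for i in range(n)]
    delta, nn = [0.0] * n, [None] * n
    for i in range(n):
        higher = [j for j in range(n) if rho[j] > rho[i]]
        if higher:
            nn[i] = min(higher, key=lambda j: d[i][j])
            delta[i] = d[i][nn[i]]
        else:
            delta[i] = max(d[i])  # a global density peak
    return rho, delta, nn

def assign(points, rho, delta, nn, centers):
    # Each remaining point joins the cluster of its nearest higher-density neighbor,
    # processed in order of decreasing density.
    order = sorted(range(len(points)), key=lambda i: -rho[i])
    label = {c: k for k, c in enumerate(centers)}
    for i in order:
        if i not in label:
            label[i] = label[nn[i]]
    return [label[i] for i in range(len(points))]
```

Because assignment follows the density hierarchy rather than distance to a center, clusters of arbitrary shape are handled, and points with anomalously high `delta` but low `rho` can be flagged as outliers.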
Bloodstain pattern analysis (BPA) is the investigation and interpretation of blood deposited at crime scenes. However, the interaction of blood and apparel fabrics has not been widely studied. In this work, the development of bloodstains (passive, absorbed and transferred) dropped from three different heights (500, 1,000, 1,500 mm) on two cotton apparel fabrics (1 × 1 rib knit, drill) was investigated. High-speed video was used to investigate the interaction of the blood and fabric at impact. The effect of drop height on the development of passive, absorbed and transferred bloodstains was investigated using image analysis and statistical tools. Visually, the passive bloodstain patterns produced on the technical face of fabrics from the different drop heights were similar. The blood soaked unequally through to the technical rear of both fabrics. Very little blood was transferred between a bloody fabric and a second piece of fabric. Statistically, drop height did not affect the size of the parent bloodstain (wet or dry), but did affect the number of satellite bloodstains formed. Some differences between the two fabrics were noted; fabric structure and properties must therefore be considered when conducting BPA on apparel fabrics.
Impact of a three-dimensional “hands-on” anatomic teaching module on acetabular fracture pattern recognition by orthopaedic residents
- The Journal of Bone and Joint Surgery. American Volume
Much of the difficulty in understanding acetabular fracture patterns is due to the complex three-dimensional relationship of the acetabulum to the greater pelvis. We hypothesized that combining three-dimensional “hands-on” anatomic models with two-dimensional informational teaching sheets would improve the ability of orthopaedic residents to accurately classify acetabular fracture patterns and aid in preoperative surgical approach selection.
Extant neuroimaging data implicate frontoparietal and medial-temporal lobe regions in episodic retrieval, and the specific pattern of activity within and across these regions is diagnostic of an individual’s subjective mnemonic experience. For example, in laboratory-based paradigms, memories for recently encoded faces can be accurately decoded from single-trial fMRI patterns [Uncapher, M. R., Boyd-Meredith, J. T., Chow, T. E., Rissman, J., & Wagner, A. D. Goal-directed modulation of neural memory patterns: Implications for fMRI-based memory detection. Journal of Neuroscience, 35, 8531-8545, 2015; Rissman, J., Greely, H. T., & Wagner, A. D. Detecting individual memories through the neural decoding of memory states and past experience. Proceedings of the National Academy of Sciences, U.S.A., 107, 9849-9854, 2010]. Here, we investigated the neural patterns underlying memory for real-world autobiographical events, probed at 1- to 3-week retention intervals, as well as whether distinct patterns are associated with different subjective memory states. For 3 weeks, participants (n = 16) wore digital cameras that captured photographs of their daily activities. One week later, they were scanned while making memory judgments about sequences of photos depicting events from their own lives or events captured by the cameras of others. Whole-brain multivoxel pattern analysis achieved near-perfect accuracy at distinguishing correctly recognized events from correctly rejected novel events, and decoding performance did not significantly vary with retention interval. Multivoxel pattern analysis classifiers also differentiated recollection from familiarity and reliably decoded the subjective strength of recollection, of familiarity, or of novelty. Classification-based brain maps revealed dissociable neural signatures of these mnemonic states, with activity patterns in hippocampus, medial pFC, and ventral parietal cortex being particularly diagnostic of recollection.
Finally, a classifier trained on previously acquired laboratory-based memory data achieved reliable decoding of autobiographical memory states. We discuss the implications for neuroscientific accounts of episodic retrieval and comment on the potential forensic use of fMRI for probing experiential knowledge.
Breast cancer is the most frequently diagnosed life-threatening cancer worldwide and the leading cause of cancer death among women. Early, accurate diagnosis can be a great advantage in treating breast cancer. Researchers have approached this problem with various data mining and machine learning techniques, such as support vector machines and artificial neural networks. Artificial immune systems, inspired by the biological immune system, are another intelligent method and have been successfully applied in pattern recognition, combinatorial optimization, machine learning, and related areas. However, most of these diagnosis methods are supervised, and obtaining labeled data in biology and medicine is very expensive. In this paper, we integrate state-of-the-art research on life science with artificial intelligence and propose a semi-supervised learning algorithm that reduces the need for labeled data. We use two well-known benchmark breast cancer datasets in our study, acquired from the UCI Machine Learning Repository. Extensive experiments conducted on these two datasets demonstrate the effectiveness and efficiency of the proposed algorithm, indicating that it is a promising method for automatic breast cancer diagnosis.