The link between object perception and neural activity in visual cortical areas is a problem of fundamental importance in neuroscience. Here we show that electrical potentials recorded from the ventral temporal cortical surface in humans contain sufficient information for spontaneous and near-instantaneous identification of a subject’s perceptual state. Electrocorticographic (ECoG) arrays were placed on the subtemporal cortical surface of seven epilepsy patients. Grayscale images of faces and houses were displayed rapidly in random sequence. We developed a template projection approach to decode the continuous ECoG data stream spontaneously, predicting the occurrence, timing and type of visual stimulus. In this setting, we evaluated the independent and joint use of two well-studied features of brain signals: broadband changes in the frequency power spectrum of the potential, and deflections in the raw potential trace (the event-related potential; ERP). Our ability to predict both the timing of stimulus onset and the type of image was best when we combined the broadband response and the ERP, suggesting that they capture different and complementary aspects of the subject’s perceptual state. Specifically, we were able to predict the timing and type of 96% of all stimuli, with a false-positive rate below 5% and a timing error of approximately 20 ms.
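The general idea of template projection can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the feature extraction (broadband power, ERP), templates, and thresholds used in the paper are not specified here, so a single synthetic 1-D feature stream and a normalized-projection detector stand in for them.

```python
# Minimal sketch of template-projection decoding on a continuous feature
# stream (illustrative only; the paper's exact features and thresholds
# are assumptions here).
import numpy as np

def build_template(trials):
    """Average peri-stimulus feature traces across trials to form a template."""
    return np.mean(trials, axis=0)

def decode_stream(stream, templates, threshold):
    """Slide each class template along a continuous 1-D feature stream and
    report (time, label, score) wherever the normalized projection of the
    local segment onto the template exceeds the threshold."""
    events = []
    for label, tmpl in templates.items():
        # Mean-center and unit-normalize so the projection is a correlation.
        tmpl = (tmpl - tmpl.mean()) / (np.linalg.norm(tmpl - tmpl.mean()) + 1e-12)
        n = len(tmpl)
        for t in range(len(stream) - n + 1):
            seg = stream[t:t + n]
            seg = (seg - seg.mean()) / (np.linalg.norm(seg - seg.mean()) + 1e-12)
            score = float(np.dot(seg, tmpl))  # projection onto the template
            if score > threshold:
                events.append((t, label, score))
    return events
```

In this form, detecting an event gives both the stimulus type (which template fired) and its onset time (where the projection peaked), mirroring the occurrence/timing/type prediction described above.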
Dogs are particularly skilful during communicative interactions with humans. Dogs' ability to use human communicative cues in cooperative contexts surpasses that of other species and might be the result of selection pressures during domestication. Dogs also produce signals to direct the attention of humans towards outside entities, a behaviour often referred to as showing behaviour, which is thought to be used both intentionally and referentially. However, there is currently no evidence that dogs communicate helpfully, i.e. to inform an ignorant human about a target that is of interest to the human but not to the dog. Communicating with a helpful motive is particularly interesting because it might suggest that dogs understand the human’s goals and need for information. In study 1, we assessed whether dogs would abandon an object that they find interesting in favour of an object useful for their human partner, a random novel distractor, or an empty container. Results showed that it was mainly self-interest that drove the dogs' behaviour: the dogs mainly directed their behaviour towards the object they had an interest in, but they were more persistent when showing the object relevant to the human, suggesting that to some extent they took the human's interest into account. Another possibility is that dogs' behaviour was driven by an egocentric motivation to interact with novel targets, and that the dogs' neophilia might have masked their helpful tendencies. Therefore, in study 2 the dogs had initial access to both objects and were expected to indicate only one (relevant or distractor). The human partner interacted with the dog using vocal communication in half of the trials and remained silent in the other half. Dogs from both experimental groups, i.e. those indicating the relevant object and those indicating the distractor, established joint attention with the human.
However, the human’s vocal communication and the presence of the object relevant to the human increased the persistence of showing, supporting the hypothesis that the dogs understood the objects' relevance to the human. We propose two non-exclusive explanations. First, informative motives might underlie dogs' showing. Second, dogs might have indicated the location of the hidden object because they recognised it as the target of the human’s search; this would be consistent with taking the objects' relevance into account without necessarily implying that the dogs understood the human’s state of knowledge.
The “just noticeable difference” (JND) represents the minimum amount by which a stimulus must change to produce a noticeable variation in one’s perceptual experience (i.e., Weber’s law). Recent work has shown that within-participant standard deviations of grip aperture (i.e., JNDs) increase linearly with increasing object size during the early, but not the late, stages of goal-directed grasping. A visually based explanation for this finding is that the early and late stages of grasping are respectively mediated by relative and absolute visual information and therefore render a time-dependent adherence to Weber’s law. Alternatively, a motor-based explanation contends that the larger aperture-shaping impulses required for larger objects give rise to a stochastic increase in the variability of motor output (i.e., the impulse-variability hypothesis). To test the second explanation, we had participants grasp differently sized objects under grasping-time criteria of 400 and 800 ms; the 400 ms condition thus required larger aperture-shaping impulses than the 800 ms condition. In line with previous work, JNDs during early aperture shaping (i.e., at the time of peak aperture acceleration and peak aperture velocity) for both the 400 and 800 ms conditions scaled linearly with object size, whereas JNDs later in the response (i.e., at the time of peak grip aperture) did not. Moreover, the 400 and 800 ms conditions produced comparable slopes relating JNDs to object size; in other words, larger aperture-shaping impulses did not give rise to a stochastic increase in aperture variability at each object size. As such, the theoretical tenets of the impulse-variability hypothesis do not provide a viable framework for the time-dependent scaling of JNDs to object size. Instead, we propose that a dynamic interplay between relative and absolute visual information gives rise to grasp trajectories that exhibit an early adherence to, and a late violation of, Weber’s law.
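The slope comparison at the heart of this design can be illustrated with a few lines of code. This is a hypothetical sketch of the analysis logic, not the study's code: under Weber's law, the fitted slope of within-participant aperture variability (the JND proxy) against object size should be positive early in the reach and near zero late, and comparable across the 400 and 800 ms conditions.

```python
# Illustrative sketch (assumed analysis form, not the study's code):
# least-squares slope of aperture SD against object size, the quantity
# compared across kinematic landmarks and movement-time conditions.
import numpy as np

def jnd_slope(object_sizes, aperture_sds):
    """Fit aperture SD (JND proxy) as a linear function of object size
    and return the slope; a positive slope indicates Weber-like scaling."""
    sizes = np.asarray(object_sizes, dtype=float)
    sds = np.asarray(aperture_sds, dtype=float)
    slope, _intercept = np.polyfit(sizes, sds, 1)
    return slope
```

For example, SDs of 2, 3, 4 and 5 mm for 20, 30, 40 and 50 mm objects give a slope of 0.1 (Weber-like scaling), whereas a constant SD across sizes gives a slope of 0, the pattern reported here at peak grip aperture.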
We contrasted the predictive power of three measures of semantic richness: number of features (NF), contextual dispersion (CD), and a novel measure, number of semantic neighbors (NSN), for a large set of concrete and abstract concepts in lexical decision and naming tasks. NSN (but not NF) facilitated processing for abstract concepts, while NF (but not NSN) facilitated processing for the most concrete concepts, consistent with claims that linguistic information is more relevant for abstract concepts in early processing. Additionally, converging evidence from two datasets suggests that when NSN and CD are controlled for, the features that most facilitate processing are those associated with a concept’s physical characteristics and real-world contexts. These results suggest that rich linguistic contexts (many semantic neighbors) facilitate early activation of abstract concepts, whereas concrete concepts benefit more from rich physical contexts (many associated objects and locations).
BACKGROUND: Automated image analysis methods are becoming increasingly important for extracting and quantifying image features in microscopy-based biomedical studies, and several commercial or open-source tools are available. However, most of these approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and when user interactivity at the object level is desired. RESULTS: In this paper we present an open-source software that facilitates the analysis of content features and object relationships by using objects, rather than individual pixels, as the basic processing unit. Our approach also enables users without programming knowledge to compose “analysis pipelines” that exploit the object-level approach. We demonstrate the design and use of example pipelines for immunohistochemistry-based cell proliferation quantification in breast cancer and for two-photon fluorescence microscopy data on bone-osteoclast interaction, which underline the advantages of the object-based concept. CONCLUSIONS: We introduce an open-source software system that offers object-based image analysis. The object-based concept allows for straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1392065570891113.
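The contrast between pixel-wise and object-based processing can be made concrete with a small sketch. The class and method names below are purely illustrative, not the software's actual API: segmented objects carry features and are filtered, annotated and counted as whole units, which is what enables pipelines such as the cell proliferation quantification mentioned above.

```python
# Hypothetical sketch of an object-based analysis pipeline: operations act
# on whole objects (with per-object features) rather than on individual
# pixels. Names are illustrative, not the presented software's API.
from dataclasses import dataclass, field

@dataclass
class ImageObject:
    label: int                 # segmentation label
    area: float                # object-level feature
    centroid: tuple            # object-level feature
    features: dict = field(default_factory=dict)

class ObjectPipeline:
    def __init__(self, objects):
        self.objects = list(objects)

    def filter(self, predicate):
        """Keep only objects satisfying an object-level predicate."""
        self.objects = [o for o in self.objects if predicate(o)]
        return self

    def annotate(self, name, fn):
        """Derive a new per-object feature from existing object properties."""
        for o in self.objects:
            o.features[name] = fn(o)
        return self

    def count(self):
        return len(self.objects)

def positive_fraction(objects, marker="ki67"):
    """Fraction of objects whose marker intensity exceeds 0.5, e.g. a
    proliferation index over segmented cell nuclei (marker name assumed)."""
    pos = sum(1 for o in objects if o.features.get(marker, 0) > 0.5)
    return pos / len(objects) if objects else 0.0
```

Because every step consumes and produces objects, steps compose into pipelines that a non-programmer could assemble from a palette of object-level operations, which is the design idea the paper emphasizes.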
Additive manufacturing processes such as 3D printing use time-consuming, stepwise layer-by-layer approaches to object fabrication. We demonstrate the continuous generation of monolithic polymeric parts up to tens of centimeters in size with feature resolution below 100 micrometers. Continuous liquid interface production is achieved with an oxygen-permeable window below the ultraviolet image projection plane, which creates a “dead zone” (persistent liquid interface) where photopolymerization is inhibited between the window and the polymerizing part. We delineate critical control parameters and show that complex solid parts can be drawn out of the resin at rates of hundreds of millimeters per hour. These print speeds allow parts to be produced in minutes instead of hours.
The ability to identify and retain logical relations between stimuli and apply them to novel stimuli is known as relational concept learning. This ability has been demonstrated in a few animal species after extensive reinforcement training, and it reveals the brain’s capacity to deal with abstract properties. Here we describe relational concept learning in newborn ducklings without reinforced training. Newly hatched domesticated mallards that were briefly exposed to a pair of objects that were either the same or different in shape or color later preferred to follow pairs of new objects exhibiting the imprinted relation. Thus, even in a seemingly rigid and very rapid form of learning such as filial imprinting, the brain operates with abstract conceptual reasoning, a faculty often assumed to be reserved for highly intelligent organisms.
Hand loss is a highly disabling event that markedly affects the quality of life. To achieve a close-to-natural replacement for the lost hand, the user should be provided with the rich sensations that we naturally perceive when grasping or manipulating an object. Ideal bidirectional hand prostheses should involve both a reliable decoding of the user’s intentions and the delivery of nearly “natural” sensory feedback through remnant afferent pathways, simultaneously and in real time. However, current hand prostheses fail to achieve these requirements, particularly because they lack any sensory feedback. We show that by stimulating the median and ulnar nerve fascicles using transversal multichannel intrafascicular electrodes, according to the information provided by the artificial sensors from a hand prosthesis, physiologically appropriate (near-natural) sensory information can be provided to an amputee during the real-time decoding of different grasping tasks to control a dexterous hand prosthesis. This feedback enabled the participant to effectively modulate the grasping force of the prosthesis with no visual or auditory feedback. Three different force levels were distinguished and consistently used by the subject. The results also demonstrate that a high complexity of perception can be obtained, allowing the subject to identify the stiffness and shape of three different objects by exploiting different characteristics of the elicited sensations. This approach could improve the efficacy and “life-like” quality of hand prostheses, resulting in a keystone strategy for the near-natural replacement of missing hands.
Two studies examined whether photographing objects impacts what is remembered about them. Participants were led on a guided tour of an art museum and were directed to observe some objects and to photograph others. Results showed a photo-taking-impairment effect: If participants took a photo of each object as a whole, they remembered fewer objects and remembered fewer details about the objects and the objects' locations in the museum than if they instead only observed the objects and did not photograph them. However, when participants zoomed in to photograph a specific part of the object, their subsequent recognition and detail memory was not impaired, and, in fact, memory for features that were not zoomed in on was just as strong as memory for features that were zoomed in on. This finding highlights key differences between people’s memory and the camera’s “memory” and suggests that the additional attentional and cognitive processes engaged by this focused activity can eliminate the photo-taking-impairment effect.