Kawaii (a Japanese word meaning “cute”) things are popular because they produce positive feelings. However, their effect on behavior remains unclear. In this study, three experiments were conducted to examine the effects of viewing cute images on subsequent task performance. In the first experiment, university students performed a fine motor dexterity task before and after viewing images of baby or adult animals. Performance, indexed by the number of successful trials, increased more after viewing cute images (puppies and kittens; M ± SE = 43.9 ± 10.3% improvement) than after viewing images that were less cute (dogs and cats; 11.9 ± 5.5% improvement). In the second experiment, this finding was replicated using a non-motor visual search task. Performance improved more after viewing cute images (15.7 ± 2.2% improvement) than after viewing less cute images (1.4 ± 2.1% improvement). Viewing images of pleasant foods was ineffective in improving performance (1.2 ± 2.1%). In the third experiment, participants performed a global-local letter task after viewing images of baby animals, adult animals, and neutral objects. In general, global features were processed faster than local features. However, this global precedence effect was reduced after viewing cute images. These results show that participants performed tasks requiring focused attention more carefully after viewing cute images. This is interpreted as the result of a narrowed attentional focus induced by the cuteness-triggered positive emotion that is associated with approach motivation and the tendency toward systematic processing. For future applications, cute objects may be used as an emotion elicitor to induce careful behavioral tendencies in specific situations, such as driving and office work.
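The improvement measure reported above (change in successful trials relative to baseline, summarized as mean ± standard error across participants) can be sketched as follows; the pre/post trial counts in the example are illustrative, not the study's data.

```python
import math

def percent_improvement(pre, post):
    """Per-participant change in successful trials, as % of baseline."""
    return (post - pre) / pre * 100.0

def mean_and_se(values):
    """Mean and standard error of the mean (the M +/- SE summary)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Illustrative pre/post successful-trial counts (NOT the study's data)
pre = [10, 12, 9, 11]
post = [14, 15, 13, 13]
improvements = [percent_improvement(a, b) for a, b in zip(pre, post)]
m, se = mean_and_se(improvements)
```

The same summary applied per condition yields the kind of "M ± SE improvement" figures quoted in the abstract.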
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 5 years ago
Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. People reliably selected the actual winners of live music competitions based on silent video recordings, but neither musical novices nor professional musicians could identify the winners from sound recordings or from recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content.
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. A major difficulty in producing such a comparison has been the lack of a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
Tens of millions of people are currently choosing health coverage on a state or federal health insurance exchange as part of the Patient Protection and Affordable Care Act. We examine how well people make these choices, how well they think they do, and what can be done to improve these choices. We conducted 6 experiments asking people to choose the most cost-effective policy using websites modeled on current exchanges. Our results suggest there is significant room for improvement. Without interventions, respondents perform at near chance levels and show a significant bias, overweighting out-of-pocket expenses and deductibles. Financial incentives do not improve performance, and decision-makers do not realize that they are performing poorly. However, performance can be improved quite markedly by providing calculation aids, and by choosing a “smart” default. Implementing these psychologically based principles could save purchasers of policies and taxpayers approximately 10 billion dollars every year.
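A calculation aid of the kind found helpful here might, at its simplest, total each plan's premium and expected out-of-pocket spending. The cost model below is a deliberately simplified sketch (pay up to the deductible, then a coinsurance share, capped at an out-of-pocket maximum); the plan parameters are hypothetical, and this is not the experiments' actual aid.

```python
def expected_annual_cost(monthly_premium, deductible, coinsurance,
                         oop_max, expected_spending):
    """Simplified total-cost model: annual premium plus out-of-pocket costs.

    The insured pays everything up to the deductible, then the coinsurance
    share of the remainder, capped at the out-of-pocket maximum.
    (Hypothetical model for illustration, not the study's aid.)
    """
    below = min(expected_spending, deductible)
    above = max(0.0, expected_spending - deductible) * coinsurance
    out_of_pocket = min(below + above, oop_max)
    return 12 * monthly_premium + out_of_pocket

# Hypothetical plans: low premium/high deductible vs. the reverse
plan_a = expected_annual_cost(250, 5000, 0.2, 6500, 4000)  # 3000 + 4000 = 7000
plan_b = expected_annual_cost(400, 1000, 0.2, 4000, 4000)  # 4800 + 1600 = 6400
```

At this usage level the higher-premium plan is cheaper overall, which is exactly the kind of comparison that overweighting deductibles and out-of-pocket expenses gets wrong.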
Routine antimicrobial susceptibility testing (AST) can prevent deaths due to bacterial infections and reduce the spread of multidrug resistance, but cannot be regularly performed in resource-limited settings due to technological challenges, high costs, and a lack of trained professionals. We demonstrate an automated and cost-effective cellphone-based 96-well microtiter plate (MTP) reader, capable of performing AST without the need for trained diagnosticians. Our system includes a 3D-printed smartphone attachment that holds and illuminates the MTP using a light-emitting diode array. An inexpensive optical fiber array enables the capture of the transmitted light of each well through the smartphone camera. A custom-designed application sends the captured image to a server to automatically determine well turbidity, with results returned to the smartphone in ~1 minute. We tested this mobile reader using MTPs prepared with 17 antibiotics targeting Gram-negative bacteria on clinical isolates of Klebsiella pneumoniae with highly resistant antimicrobial profiles. Using 78 patient isolate test plates, we demonstrated that our mobile reader meets the FDA-defined AST criteria, with a well-turbidity detection accuracy of 98.21%, a minimum inhibitory concentration (MIC) accuracy of 95.12%, and a drug-susceptibility interpretation accuracy of 99.23%, with no very major errors. This mobile reader could eliminate the need for trained diagnosticians to perform AST, reduce the cost barrier for routine testing, and assist in spatio-temporal tracking of bacterial resistance.
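The core readout logic described (calling each well turbid or clear from its transmitted light, then taking the minimum inhibitory concentration as the lowest antibiotic dose that suppresses growth) can be sketched as below; the turbidity threshold and dilution series are illustrative assumptions, not the system's calibrated values.

```python
def classify_wells(transmissions, threshold=0.5):
    """Call each well as growth (turbid, so less transmitted light) or clear.

    `transmissions` are normalized transmitted-light intensities per well;
    the 0.5 threshold is an illustrative assumption, not a calibrated value.
    """
    return [t < threshold for t in transmissions]  # True = bacterial growth

def minimum_inhibitory_concentration(concentrations, growth):
    """MIC: the lowest tested concentration that inhibits growth, with
    growth also absent at every higher concentration."""
    pairs = sorted(zip(concentrations, growth))
    mic = None
    for conc, grew in reversed(pairs):  # scan from the highest dose down
        if grew:
            break
        mic = conc
    return mic  # None => growth at every concentration (resistant)

# Illustrative two-fold dilution series (ug/mL) and light readings
concs = [0.5, 1, 2, 4, 8, 16]
reads = [0.1, 0.2, 0.3, 0.8, 0.9, 0.9]  # growth only in the 3 lowest wells
mic = minimum_inhibitory_concentration(concs, classify_wells(reads))  # 4
```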
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 2 years ago
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
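The frames-per-second-per-Watt efficiency figure is simply throughput divided by power draw. A quick sketch (the pairing of 1,500 frames/s at 250 mW is hypothetical, chosen from within the reported ranges):

```python
def fps_per_watt(frames_per_s, milliwatts):
    """Energy efficiency: frames processed per second, per Watt consumed."""
    return frames_per_s / (milliwatts / 1000.0)

# Hypothetical operating point within the reported ranges
efficiency = fps_per_watt(1500, 250)  # 6000.0 frames/s per Watt
```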
Adequate normalization minimizes the effects of systematic technical variation and is a prerequisite for detecting meaningful biological changes. However, reported miRNA normalization performance and the resulting recommendations are inconsistent. We therefore investigated the impact of seven different normalization methods (reference gene index, global geometric mean, quantile, invariant selection, loess, loessM, and generalized procrustes analysis) on the intra- and inter-platform performance of two distinct and commonly used miRNA profiling platforms.
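Of the methods listed, quantile normalization is among the most widely used: every sample is forced onto a common distribution by replacing each value with the cross-sample mean of the values sharing its rank. A minimal sketch (illustrative data; ties are resolved by sort order rather than averaged, unlike full implementations):

```python
def quantile_normalize(samples):
    """Quantile-normalize equal-length samples (one list of expression
    values per sample). Each value is replaced by the mean, across
    samples, of the values at its within-sample rank.
    Minimal sketch: ties are broken by sort order, not averaged.
    """
    n = len(samples[0])
    sorted_cols = [sorted(s) for s in samples]
    # Mean at each rank across samples: the shared target distribution
    rank_means = [sum(col[i] for col in sorted_cols) / len(samples)
                  for i in range(n)]
    normalized = []
    for s in samples:
        ranks = sorted(range(n), key=lambda i: s[i])  # indices by value
        out = [0.0] * n
        for rank, idx in enumerate(ranks):
            out[idx] = rank_means[rank]
        normalized.append(out)
    return normalized
```

After normalization every sample shares the same set of values, so only the ranking of features within each sample carries sample-specific information.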
Recent clinical studies have revealed emotional and cognitive impairments associated with absence epilepsy. Preclinical research with genetic models of absence epilepsy, however, has primarily focused on dysfunctional emotional processes and paid relatively little attention to cognitive impairment. To bridge this gap, we investigated age-dependent changes in the learning and memory performance, anxiety-like behavior, and locomotor activity of WAG/Rij rats (a valid model of generalized absence epilepsy) using passive avoidance, the Morris water maze, the elevated plus maze, and a locomotor activity cage. We tested 5-month-old and 13-month-old WAG/Rij rats and compared their performance to that of age-matched Wistar rats. Results revealed a decline in the emotional and spatial memory of WAG/Rij rats relative to age-matched Wistar rats only at 13 months of age. Importantly, there were no significant differences between WAG/Rij and Wistar rats in anxiety-like behavior or locomotor activity at either age. These results point to age-dependent learning and memory deficits in the WAG/Rij rat model of absence epilepsy.
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. 
Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences.
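The two dependent measures, pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals), might be computed as follows; position-wise pitch comparison and the coefficient of variation as the variability index are assumptions, since the abstract does not give exact formulas.

```python
import math

def pitch_accuracy(produced, target):
    """Percentage of positions where the produced pitch matches the target.
    Assumes equal-length sequences compared position by position."""
    hits = sum(p == t for p, t in zip(produced, target))
    return 100.0 * hits / len(target)

def ioi_variability(onset_times):
    """Coefficient of variation (SD/mean) of interonset intervals;
    lower values mean more temporally regular playing."""
    iois = [b - a for a, b in zip(onset_times, onset_times[1:])]
    mean = sum(iois) / len(iois)
    sd = math.sqrt(sum((x - mean) ** 2 for x in iois) / len(iois))
    return sd / mean

# Illustrative MIDI pitches and note-onset times (seconds)
acc = pitch_accuracy([60, 62, 64, 65], [60, 62, 64, 67])  # 75.0
reg = ioi_variability([0.0, 0.51, 0.99, 1.50])
```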
Automatic monitoring of Adverse Drug Reactions (ADRs), defined as adverse patient outcomes caused by medications, is a challenging research problem that is currently receiving significant attention from the medical informatics community. In recent years, user-posted data on social media, primarily due to its sheer volume, has become a useful resource for ADR monitoring. Research using social media data has progressed using various data sources and techniques, making it difficult to compare distinct systems and their performances. In this paper, we perform a methodical review to characterize the different approaches to ADR detection/extraction from social media, and their applicability to pharmacovigilance. In addition, we present a potential systematic pathway to ADR monitoring from social media.