This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We recorded electroencephalography in 15 healthy adults who observed pictures of human or robot hands in painful or non-painful situations, such as a finger being cut by a knife. We found that the descending phase of the P3 component was larger for painful than for non-painful stimuli, regardless of whether the hand belonged to a human or a robot. In contrast, the ascending phase of the P3 component at frontal-central electrodes was increased by painful human stimuli but not by painful robot stimuli, although this ANOVA interaction was only marginally significant. These results suggest that we empathize with humanoid robots in late top-down processing much as we do with other humans, whereas the beginning of the top-down process of empathy is weaker for robots than for humans.
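The window-averaging comparison behind such ERP results can be sketched in a few lines. The sampling rate, latency windows, amplitudes, and the synthetic Gaussian-shaped P3 below are all illustrative assumptions, not the study's data:

```python
import numpy as np

fs = 250  # sampling rate in Hz (assumed)

def mean_amplitude(erp, t_start, t_end):
    """Mean amplitude (µV) of an averaged ERP within a latency window (s)."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    return float(erp[i0:i1].mean())

# Toy averaged ERPs for two conditions over a 0-800 ms epoch,
# with a Gaussian P3-like peak at 400 ms (illustrative values).
t = np.arange(0, 0.8, 1 / fs)
p3_shape = np.exp(-((t - 0.40) ** 2) / (2 * 0.05**2))
erp_painful = 6.0 * p3_shape
erp_non_painful = 4.0 * p3_shape

# Descending phase of the P3 (e.g. 450-600 ms): painful > non-painful
print(mean_amplitude(erp_painful, 0.45, 0.60) >
      mean_amplitude(erp_non_painful, 0.45, 0.60))  # → True
```

In practice each condition's ERP would be an average over trials and participants before the window mean is taken; the comparison itself is the same.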
Android robots are entering human social life. However, human-robot interactions may be complicated by a hypothetical Uncanny Valley (UV) in which imperfect human-likeness provokes dislike. Previous investigations using unnaturally blended images reported inconsistent UV effects. We demonstrate a UV in subjects' explicit ratings of likability for a large, objectively chosen sample of 80 real-world robot faces and a complementary controlled set of edited faces. An “investment game” showed that the UV penetrated even more deeply to influence subjects' implicit decisions concerning robots' social trustworthiness, and that these fundamental social decisions depend on subtle cues of facial expression that are also used to judge humans. Preliminary evidence suggests category confusion may occur in the UV but does not mediate the likability effect. These findings suggest that while classic elements of human social psychology govern human-robot social interaction, robust UV effects pose a formidable android-specific problem.
The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months: the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments, some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of real-time contingent interaction between teacher and learner is reflected in a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency-dependent mechanism.
This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
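The frequency-dependent mechanism described above can be illustrated with a toy learner. The salience threshold, the syllable stream, and the function name are hypothetical; the paper's actual system works on phoneme streams from live dialogue with reinforcement, not on a pre-segmented list:

```python
from collections import Counter

def learn_word_forms(syllable_stream, min_count=3):
    """Frequency-dependent learner: count syllable forms in the stream
    and 'produce' a form once its count reaches a salience threshold
    (the threshold value is a hypothetical parameter)."""
    counts = Counter()
    produced = []
    for syllable in syllable_stream:
        counts[syllable] += 1
        if counts[syllable] == min_count:
            produced.append(syllable)
    return produced

# A content word ("ball") recurs in a consistent canonical form,
# while babble syllables vary, so only "ball" crosses the threshold.
stream = ["ba", "gu", "ball", "mi", "ball", "do", "ball", "gu"]
print(learn_word_forms(stream))  # → ['ball']
```

This captures the abstract's point that consistently pronounced content words gain relative frequency, and thus influence, over variably pronounced material.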
BACKGROUND: Smartphone use is growing exponentially, and smartphones will soon become the only mobile phone handset for about 6 billion users. Smartphones are ideal marketing targets as consumers can be reached anytime, anywhere. Smartphone application (app) stores are global shops that sell apps to users all around the world. Although smartphone stores have a wide collection of health-related apps, they also carry a wide set of harmful apps. In this study, the availability of ‘pro-smoking’ apps in two of the largest smartphone app stores (Apple App Store and Android Market) was examined. METHOD: In February 2012, we searched the Apple App Store and Android Market for pro-smoking apps, using the keywords Smoke, Cigarette, Cigar, Smoking and Tobacco. We excluded apps that were not tobacco-related and then assessed the tobacco-related apps against our inclusion criteria. RESULTS: 107 pro-smoking apps were identified and classified into six categories based on functionality. Forty-two of these apps were from the Android Market and had been downloaded by over 6 million users. Some apps contained explicit images of cigarette brands. CONCLUSIONS: Tobacco products are being promoted in the new ‘smartphone app’ medium, which has global reach, a huge consumer base of various age groups and underdeveloped regulation. The paper also provides two examples of app store responses to country-specific laws and regulations that could be used to control the harmful contents in the app stores for individual countries.
An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android’s slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.
Robots have been used in a variety of education, therapy or entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children’s responses towards the humanoid robot KASPAR in an interview context differ from their interactions with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures included behavioural coding of the children’s behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis revealed that the most notable differences between the interviews with KASPAR and the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an ‘interviewer’ for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications.
New tools are needed to enable rapid detection, identification, and reporting of infectious viral and microbial pathogens in a wide variety of point-of-care applications that impact human and animal health. We report the design, construction, and characterization of a platform for multiplexed analysis of disease-specific DNA sequences that utilizes a smartphone camera as the sensor in conjunction with a handheld “cradle” that interfaces the phone with a silicon-based microfluidic chip embedded within a credit card-sized cartridge. Utilizing specific nucleic acid sequences for four equine respiratory pathogens as representative examples, we demonstrate the ability of the system to utilize a single 15 µL droplet of test sample to perform selective positive/negative determination of target sequences, including integrated experimental controls, in approximately 30 minutes. Our approach utilizes loop-mediated isothermal amplification (LAMP) reagents pre-deposited into distinct lanes of the microfluidic chip which, when exposed to target nucleic acid sequences from the test sample, generate fluorescent products that, when excited by appropriately selected light-emitting diodes (LEDs), are visualized and automatically analyzed by a software application running on the smartphone microprocessor. The system achieves detection limits comparable to those obtained by laboratory-based methods and instruments. Assay information is combined with information from the cartridge and the patient to populate a cloud-based database for epidemiological reporting of test results.
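The per-lane positive/negative call that such an analysis app might make can be sketched as follows. The 30%-of-dynamic-range threshold rule, the generic lane names, and the intensity values are invented for illustration and are not taken from the paper:

```python
def call_lanes(intensities, pos_ctrl, neg_ctrl):
    """Classify each assay lane as positive or negative by comparing its
    fluorescence to a threshold set between the on-chip controls.
    The 30%-of-range criterion is an assumed, illustrative rule."""
    threshold = neg_ctrl + 0.3 * (pos_ctrl - neg_ctrl)
    return {lane: value > threshold for lane, value in intensities.items()}

# Hypothetical lane intensities (arbitrary units) from one cartridge
lanes = {"lane 1": 820, "lane 2": 140, "lane 3": 90, "lane 4": 760}
print(call_lanes(lanes, pos_ctrl=900, neg_ctrl=100))
# → {'lane 1': True, 'lane 2': False, 'lane 3': False, 'lane 4': True}
```

Anchoring the threshold to the integrated positive and negative controls, rather than to a fixed absolute value, makes the call robust to run-to-run variation in illumination and camera exposure.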
Realtime cerebellum: a large-scale spiking network model of the cerebellum that runs in realtime using a graphics processing unit
- Neural Networks: the official journal of the International Neural Network Society
The cerebellum plays an essential role in adaptive motor control. Once we are able to build a cerebellar model that runs in realtime, meaning that a computer simulation of 1 s in the simulated world completes within 1 s in the real world, the model could be used as a realtime adaptive neural controller for physical hardware such as humanoid robots. In this paper, we introduce “Realtime Cerebellum (RC)”, a graphics processing unit (GPU) implementation of our large-scale spiking network model of the cerebellum, which was originally built to study cerebellar mechanisms for simultaneous gain and timing control and acts as a general-purpose supervised learning machine for spatiotemporal information, in the scheme known as reservoir computing. Owing to the massive parallel computing capability of the GPU, RC runs in realtime while qualitatively reproducing the simulation results of Pavlovian delay eyeblink conditioning obtained with the previous version. RC was then adopted as a realtime adaptive controller of a humanoid robot, which was instructed to learn online the proper timing to swing a bat to hit a flying ball. These results suggest that RC provides a means to apply the computational power of the cerebellum as a versatile supervised learning machine to engineering applications.
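The reservoir-computing scheme mentioned above (fixed recurrent dynamics with only a trained linear readout) can be sketched on a toy timing task reminiscent of delay eyeblink conditioning. The network size, weight scaling, ridge penalty, and rate-based (non-spiking) units are all simplifying assumptions, not the RC model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (a rate-based stand-in for the recurrent
# granular-layer dynamics); only the linear readout is trained.
N, T = 200, 100
W = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))  # recurrent weights
w_in = rng.normal(0.0, 1.0, N)                 # input weights

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return all states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)
        states[t] = x
    return states

# Toy timing task: after a brief input pulse, emit a response at a
# fixed delay (a hypothetical analogue of learned response timing).
u = np.zeros(T)
u[:5] = 1.0
target = np.zeros(T)
target[60:65] = 1.0

X = run_reservoir(u)
# Ridge-regression readout fitted to the timed target
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ target)
pred = X @ w_out
print(int(np.argmax(pred)))  # peak falls near the trained delay
```

Because the recurrent weights stay fixed, learning reduces to a single linear solve over the recorded states, which is what makes the readout cheap enough to adapt online.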
- The British Journal of Social Psychology / the British Psychological Society
Previous work on social categorization has shown that people often use cues such as a person’s gender, age, or ethnicity to categorize and form impressions of others. The present research investigated effects of social category membership on the evaluation of humanoid robots. More specifically, participants rated a humanoid robot that either belonged to their in-group or to a national out-group with regard to anthropomorphism (e.g., mind attribution, warmth), psychological closeness, contact intentions, and design. We predicted that participants would show an in-group bias towards the robot that ostensibly belonged to their in-group, as indicated by its name and location of production. In line with our hypotheses, participants not only rated the in-group robot more favourably; importantly, they also anthropomorphized it more strongly than the out-group robot. Our findings thus document that people apply social categorization processes, and the differential social evaluations that follow from them, even to robots.
Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.