Despite partial success, communication has remained impossible for persons suffering from complete motor paralysis but intact cognitive and emotional processing, a state called complete locked-in state (CLIS). Based on the theoretical framework of motor learning and on the failure of neuroelectric brain-computer interface (BCI) communication attempts in CLIS, we here report BCI communication using functional near-infrared spectroscopy (fNIRS) and an implicit attentional processing procedure. Four patients suffering from advanced amyotrophic lateral sclerosis (ALS), two of them in permanent CLIS and two entering CLIS without reliable means of communication, learned to answer personal questions with known answers and open questions, all requiring a “yes” or “no” thought, using frontocentral oxygenation changes measured with fNIRS. Three patients completed more than 46 sessions spread over several weeks, and one patient (patient W) completed 20 sessions. Online fNIRS classification of personal questions with known answers and of open questions using a linear support vector machine (SVM) resulted in an above-chance correct response rate of over 70%. Electroencephalographic oscillations and electrooculographic signals did not exceed the chance-level threshold for correct communication, despite occasional differences between the physiological signals representing a “yes” or a “no” response. However, electroencephalogram (EEG) changes in the theta frequency band correlated with inferior communication performance, probably because of decreased vigilance and attention. If replicated with ALS patients in CLIS, these positive results could be a first step towards the abolition of complete locked-in states, at least for ALS.
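The classification step the abstract describes (a linear SVM separating “yes” from “no” trials based on oxygenation changes) can be illustrated with a minimal sketch. Everything below is synthetic and assumed for illustration: the trial counts, the number of channels, and the size of the oxygenation shift are invented, not taken from the study.

```python
# Hypothetical sketch: classifying simulated "yes"/"no" trials from
# frontocentral oxygenation features with a linear SVM, in the spirit
# of the online fNIRS classification described in the abstract.
# All data are synthetic; dimensions and effect sizes are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 100, 8  # assumed trial and channel counts

# Simulate oxygenation changes: "yes" trials carry a small positive
# shift on every channel relative to "no" trials.
yes = rng.normal(loc=0.5, scale=1.0, size=(n_trials, n_channels))
no = rng.normal(loc=-0.5, scale=1.0, size=(n_trials, n_channels))
X = np.vstack([yes, no])
y = np.array([1] * n_trials + [0] * n_trials)

clf = SVC(kernel="linear")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # well above the 0.5 chance level
```

The cross-validated accuracy, compared against the 50% chance level for a binary answer, mirrors how an above-chance correct response rate would be established.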
- Proceedings of the National Academy of Sciences of the United States of America
Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers: the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.
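The “unpredictable answer” criterion can be sketched as follows: from a pool of candidate binary questions, pick the one whose estimated probability of a “yes” answer, conditioned on the answers given so far, lies closest to 0.5. The toy training set of annotated images and the question names below are invented for illustration and are not the paper's actual data or vocabulary.

```python
# Hypothetical sketch of selecting the least predictable next question,
# given the history of questions and answers, from a toy training set.
# Each training "image" is a dict mapping question -> answer.
training = [
    {"has_person": True,  "has_vehicle": True,  "person_standing": True},
    {"has_person": True,  "has_vehicle": False, "person_standing": False},
    {"has_person": True,  "has_vehicle": True,  "person_standing": False},
    {"has_person": True,  "has_vehicle": True,  "person_standing": True},
    {"has_person": False, "has_vehicle": True,  "person_standing": False},
]

def p_yes(question, history, data):
    """Estimate P(answer = yes | history) from the training images that
    match every (question, answer) pair asked so far."""
    matching = [img for img in data
                if all(img[q] == a for q, a in history.items())]
    if not matching:
        return 0.5  # no evidence either way
    return sum(img[question] for img in matching) / len(matching)

def next_question(candidates, history, data):
    """Pick the candidate whose conditional yes-probability is nearest
    to 0.5, i.e. the one with the most unpredictable answer."""
    return min(candidates, key=lambda q: abs(p_yes(q, history, data) - 0.5))

history = {"has_person": True}
remaining = ["has_vehicle", "person_standing"]
print(next_question(remaining, history, training))
```

Here, among images containing a person, a vehicle appears in three of four, so `has_vehicle` is fairly predictable, while `person_standing` splits the matching images evenly and is selected.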
Do highly productive researchers have a significantly higher probability of producing top-cited papers? Or do highly productive researchers mainly produce a sea of irrelevant papers; in other words, do we find diminishing marginal returns from productivity? The answer to these questions is important, as it may help determine whether increased competition and the increased use of indicators for research evaluation and accountability have perverse effects. We use a Swedish author-disambiguated dataset consisting of 48,000 researchers and their WoS publications during the period 2008-2011, with citations counted until 2014, to investigate the relation between productivity and the production of highly cited papers. As the analysis shows, quantity does make a difference.
When the director-general of the World Health Organization (WHO) declared that the recently reported clusters of microcephaly and other neurologic disorders represent a Public Health Emergency of International Concern (PHEIC), she called for increased research into their cause, including the question of whether the Zika virus is the source of the problem.(1) The declaration provides an opportunity to step up the pace of research in order to find answers to some important questions more quickly. It could not only facilitate the accumulation of knowledge about the relationship between the Zika virus and microcephaly, but also accelerate the study of . . .
To estimate the proportion of older adults in the emergency department (ED) who are willing and able to use a tablet computer to answer questions.
Fecundity, the biologic capacity to reproduce, is essential for the health of individuals and is, therefore, fundamental for understanding human health at the population level. Given the absence of a population (bio)marker, fecundity is assessed indirectly by various individual-based (e.g. semen quality, ovulation) or couple-based (e.g. time-to-pregnancy) endpoints. Population monitoring of fecundity is challenging, and often defaults to relying on rates of births (fertility) or adverse outcomes such as genitourinary malformations and reproductive site cancers. In light of reported declines in semen quality and fertility rates in some global regions among other changes, the question as to whether human fecundity is changing needs investigation. We review existing data and novel methodological approaches aimed at answering this question from a transdisciplinary perspective. The existing literature is insufficient for answering this question; we provide an overview of currently available resources and novel methods suitable for delineating temporal patterns in human fecundity in future research.
The ways in which people learn, remember, and solve problems have all been impacted by the Internet. The present research explored how people become primed to use the Internet as a form of cognitive offloading. In three experiments, we show that using the Internet to retrieve information alters a person’s propensity to use the Internet to retrieve other information. Specifically, participants who used Google to answer an initial set of difficult trivia questions were more likely to decide to use Google when answering a new set of relatively easy trivia questions than were participants who answered the initial questions from memory. These results suggest that relying on the Internet to access information makes one more likely to rely on the Internet to access other information.
Recent technological advances have given rise to an information-gathering tool unparalleled by any in human history-the Internet. Understanding how access to such a powerful informational tool influences how we think represents an important question for psychological science. In the present investigation we examined the impact of access to the Internet on the metacognitive processes that govern our decisions about what we “know” and “don’t know.” Results demonstrated that access to the Internet influenced individuals' willingness to volunteer answers, which led to fewer correct answers overall but greater accuracy when an answer was offered. Critically, access to the Internet also influenced feeling-of-knowing, and this accounted for some (but not all) of the effect on willingness to volunteer answers. These findings demonstrate that access to the Internet can influence metacognitive processes, and contribute novel insights into the operation of the transactive memory system formed by people and the Internet.
Although older adults rarely outperform young adults on learning tasks, in the study reported here they surpassed their younger counterparts not only by answering more semantic-memory general-information questions correctly, but also by better correcting their mistakes. While both young and older adults exhibited a hypercorrection effect, correcting their high-confidence errors more than their low-confidence errors, the effect was larger for young adults. Whereas older adults corrected high-confidence errors to the same extent as did young adults, they outdid the young in also correcting their low-confidence errors. Their event-related potentials point to an attentional explanation: Both groups showed a strong attention-related P3a in conjunction with high-confidence-error feedback, but the older adults also showed strong P3as to low-confidence-error feedback. Indeed, the older adults were able to rally their attentional resources to learn the true answers regardless of their original confidence in the errors and regardless of their familiarity with the answers.
Crowds can often make better decisions than individuals or small groups of experts by leveraging their ability to aggregate diverse information. Question answering sites, such as Stack Exchange, rely on the “wisdom of crowds” effect to identify the best answers to questions asked by users. We analyze data from 250 communities on the Stack Exchange network to pinpoint factors affecting which answers are chosen as the best answers. Our results suggest that, rather than evaluate all available answers to a question, users rely on simple cognitive heuristics to choose an answer to vote for or accept. These cognitive heuristics are linked to an answer’s salience, such as the order in which it is listed and how much screen space it occupies. While askers appear to depend on heuristics to a greater extent than voters when choosing an answer to accept as the most helpful one, voters use acceptance itself as a heuristic, and they are more likely to choose an answer after it has been accepted than before. These heuristics become more important in explaining and predicting behavior as the number of available answers to a question increases. Our findings suggest that crowd judgments may become less reliable as the number of answers grows.