Social insects make elaborate use of simple mechanisms to achieve seemingly complex behavior and may thus provide a unique resource to discover the basic cognitive elements required for culture, i.e., group-specific behaviors that spread from “innovators” to others in the group via social learning. We first explored whether bumblebees can learn a nonnatural object manipulation task by using string pulling to access a reward that was presented out of reach. Only a small minority “innovated” and solved the task spontaneously, but most bees were able to learn to pull a string when trained in a stepwise manner. In addition, naïve bees learnt the task by observing a trained demonstrator from a distance. Learning the behavior relied on a combination of simple associative mechanisms and trial-and-error learning and did not require “insight”: naïve bees failed a “coiled-string experiment,” in which they did not receive instant visual feedback of the target moving closer when tugging on the string. In cultural diffusion experiments, the skill spread rapidly from a single knowledgeable individual to the majority of a colony’s foragers. We observed several sequential sets (“generations”) of learners: previously naïve observers first acquired the technique by interacting with skilled individuals and subsequently became demonstrators for the next “generation” of learners, so that the longevity of the skill in the population could outlast the lives of the informed foragers. This suggests that, so long as animals have a basic toolkit of associative and motor learning processes, the key ingredients for the cultural spread of unusual skills are already in place and do not require sophisticated cognition.
Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed those of other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a “four-headed beast”: it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis. We discuss aspects of new technologies that will need to be developed to rise to the computational challenges that genomics poses for the near future. Now is the time for concerted, community-wide planning for the “genomical” challenges of the next decade.
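The scale of such projections is compound-growth arithmetic. A minimal sketch, in which the baseline size and doubling time are hypothetical placeholders rather than figures taken from the study:

```python
# Illustrative projection of genomic data growth under exponential
# doubling. Both the baseline and the doubling time below are invented
# for this sketch, not estimates from the paper.

def projected_size(baseline_pb: float, years: float,
                   doubling_time_years: float) -> float:
    """Data volume after `years`, starting from `baseline_pb` petabytes."""
    return baseline_pb * 2 ** (years / doubling_time_years)

# Example: 100 PB today, doubling every 12 months, projected 10 years out.
size = projected_size(100.0, 10, 1.0)
print(f"{size:.0f} PB")  # 100 * 2**10 = 102400 PB, i.e. ~100 exabytes
```

Even modest assumed doubling times compound into volumes that dwarf today's archives, which is why the comparison across acquisition, storage, distribution, and analysis matters.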
Domestic chickens are members of the class Aves, which has been the focus of a revolution in our understanding of neuroanatomical, cognitive, and social complexity. At least some birds are now known to be on par with many mammals in terms of their level of intelligence, emotional sophistication, and social interaction. Yet, views of chickens have largely remained unrevised by this new evidence. In this paper, I examine the peer-reviewed scientific data on the leading edge of cognition, emotions, personality, and sociality in chickens, exploring such areas as self-awareness, cognitive bias, social learning, and self-control, and comparing their abilities in these areas with those of other birds and other vertebrates, particularly mammals. My overall conclusion is that chickens are just as cognitively, emotionally, and socially complex as most other birds and mammals in many areas, and that there is a need for further noninvasive comparative behavioral research with chickens, as well as a reframing of current views about their intelligence.
- Proceedings of the National Academy of Sciences of the United States of America
Despite the fact that midday naps are characteristic of early childhood, very little is understood about the structure and function of these sleep bouts. Given that sleep benefits memory in young adults, it is possible that naps serve a similar function for young children. However, children transition from biphasic to monophasic sleep patterns in early childhood, eliminating the nap from their daily sleep schedule. As such, naps may contain mostly light sleep stages and serve little function for learning and memory during this transitional age. Lacking scientific understanding of the function of naps in early childhood, policy makers may eliminate preschool classroom nap opportunities due to increasing curriculum demands. Here we show evidence that classroom naps support learning in preschool children by enhancing memories acquired earlier in the day compared with equivalent intervals spent awake. This nap benefit is greatest for children who nap habitually, regardless of age. Performance losses when nap-deprived are not recovered during subsequent overnight sleep. Physiological recordings of naps support a role of sleep spindles in memory performance. These results suggest that distributed sleep is critical in early learning; when short-term memory stores are limited, memory consolidation must take place frequently.
E-readers are fast rivaling print as a dominant method for reading. Because they offer accessibility options that are impossible in print, they are potentially beneficial for those with impairments, such as dyslexia. Yet, little is known about how the use of these devices influences reading in those who struggle. Here, we observe reading comprehension and speed in 103 high school students with dyslexia. Reading on paper was compared with reading on a small handheld e-reader device, formatted to display few words per line. We found that use of the device significantly improved speed and comprehension, when compared with traditional presentations on paper, for specific subsets of these individuals: those who struggled most with phoneme decoding or efficient sight word reading read more rapidly using the device, and those with limited visual attention (VA) spans gained in comprehension. Prior eye-tracking studies demonstrated that short lines facilitate reading in dyslexia, suggesting that it is the use of short lines (and not the device per se) that leads to the observed benefits. We propose that these findings may be understood as a consequence of visual attention deficits, in some with dyslexia, that make it difficult to allocate attention to uncrowded text near fixation, as the gaze advances during reading. Short lines ameliorate this by guiding attention to the uncrowded span.
There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among the few artificial information processing systems that are both complex and understood at all levels, from the overall logical flow, through logic gates, down to the dynamics of individual transistors. We show that these approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests that current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue that scientists should use complex nonlinear dynamical systems with known ground truth, such as the microprocessor, as a validation platform for time-series and structure-discovery methods.
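The value of a system with known ground truth can be illustrated with a lesion-style analysis on a tiny circuit. This is a minimal sketch, not the paper's actual experiments: the 1-bit full adder, its gate names, and the lesion rule are all invented for illustration.

```python
# "Lesion" each gate of a 1-bit full adder (force its output to 0) and
# count how many of the 8 input patterns now produce wrong outputs.
# Because we know the circuit's ground truth, we can check whether the
# lesion scores reflect the design hierarchy.
from itertools import product

GATES = ("xor1", "sum", "and1", "and2", "carry")

def gate(name, value, lesion):
    """A lesioned gate's output is forced to 0; others pass through."""
    return 0 if name == lesion else value

def full_adder(a, b, cin, lesion=None):
    xor1 = gate("xor1", a ^ b, lesion)
    s    = gate("sum",  xor1 ^ cin, lesion)
    and1 = gate("and1", a & b, lesion)
    and2 = gate("and2", xor1 & cin, lesion)
    cout = gate("carry", and1 | and2, lesion)
    return s, cout

def lesion_effects():
    """Map each gate to the number of input patterns it breaks."""
    inputs = list(product((0, 1), repeat=3))
    intact = {bits: full_adder(*bits) for bits in inputs}
    return {g: sum(full_adder(*bits, lesion=g) != intact[bits]
                   for bits in inputs)
            for g in GATES}

print(lesion_effects())
# Knocking out "and1" breaks fewer inputs than "xor1", yet both are
# essential to the design: raw lesion impact alone does not recover
# the circuit's functional organization.
```

Scaled up from five gates to thousands of transistors, the same gap between "what the method reports" and "how the system actually works" is the paper's central point.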
Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
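The shortest-path idea can be sketched on a toy knowledge graph. This is one plausible instantiation of a semantic proximity metric, not the paper's dataset or tuned cost function: here an intermediate node contributes a cost that grows with its degree, so paths routed through generic, highly connected hub concepts count as weaker support for a claim.

```python
# Score a claimed edge (subject, object) by the cheapest path between
# them, where passing through a high-degree ("generic") intermediate
# node costs more. Toy graph and cost function are illustrative only.
import heapq
from math import log

def proximity(graph, src, dst):
    """Dijkstra over node costs: intermediate node v adds log(1 + deg(v))."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return 1.0 / (1.0 + d)   # higher = semantically closer
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            cost = 0.0 if v == dst else log(1 + len(graph[v]))
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0  # no path: no support for the claim

# Toy graph: "Rome" links to "Italy" directly; it reaches "France"
# only through the generic hub "Europe".
kg = {
    "Rome":   {"Italy"},
    "Italy":  {"Rome", "Europe"},
    "Paris":  {"France"},
    "France": {"Paris", "Europe"},
    "Europe": {"Italy", "France"},
}
true_claim  = proximity(kg, "Rome", "Italy")    # direct edge -> 1.0
false_claim = proximity(kg, "Rome", "France")   # via hub -> lower score
print(true_claim > false_claim)  # True
```

The design choice mirrors the abstract's intuition: a direct or specific connection is strong evidence, while a connection that exists only through very general concepts (continents, broad categories) is weak evidence and should score lower.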
Debates on the origin and function of music have a long history. While some scientists argue that music itself plays no adaptive role in human evolution, others suggest that music clearly has an evolutionary role, and point to music’s universality. A recent hypothesis suggested that a fundamental function of music has been to help mitigate cognitive dissonance, the discomfort caused by holding conflicting cognitions simultaneously, which usually leads to devaluation of the conflicting knowledge. Here we provide experimental confirmation of this hypothesis using a classical paradigm known to create cognitive dissonance. The results of our experiment reveal that exposure to Mozart’s music exerted a strongly positive influence on the performance of young children and served as a basis for reconciling the cognitive dissonance.
Open source drug discovery offers potential for developing new and inexpensive drugs to combat diseases that disproportionately affect the poor. The concept borrows two principal aspects from open source computing (i.e., collaboration and open access) and applies them to pharmaceutical innovation. By opening a project to external contributors, its research capacity may increase significantly. To date, there are only a handful of open source R&D projects focusing on neglected diseases. We wanted to learn from these first movers, their successes and failures, in order to generate a better understanding of how a much-discussed theoretical concept works in practice and may be implemented.
Several observations suggest that overlearned ordinal categories (e.g., letters, numbers, weekdays, months) are processed differently than non-ordinal categories in the brain. In synesthesia, for example, anomalous perceptual experiences are most often triggered by members of ordinal categories (Rich et al., 2005; Eagleman, 2009). In semantic dementia (SD), the processing of ordinal stimuli appears to be preserved relative to non-ordinal ones (Cappelletti et al., 2001). Moreover, ordinal stimuli often map onto unconscious spatial representations, as observed in the SNARC effect (Dehaene et al., 1993; Fias, 1996). At present, little is known about the neural representation of ordinal categories. Using functional neuroimaging, we show that words in ordinal categories are processed in a fronto-temporo-parietal network biased toward the right hemisphere. This differs from words in non-ordinal categories (such as names of furniture, animals, cars, and fruit), which show an expected bias toward the left hemisphere. Further, we find that increased predictability of stimulus order correlates with smaller regions of BOLD activation, a phenomenon we term prediction suppression. Our results provide new insights into the processing of ordinal stimuli, and suggest a new anatomical framework for understanding the patterns seen in synesthesia, unconscious spatial representation, and SD.