Taking notes on laptops rather than in longhand is increasingly common. Many researchers have suggested that laptop note taking is less effective than longhand note taking for learning. Prior studies have primarily focused on students' capacity for multitasking and distraction when using laptops. The present research suggests that even when laptops are used solely to take notes, they may still be impairing learning because their use results in shallower processing. In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand. We show that whereas taking more notes can be beneficial, laptop note takers' tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.
Analysis of Closed Soft Tissue Subcutaneous Injuries – “Impact Décollement” in Fatal Free Falls From Height – Forensic Aspect
- The American Journal of Forensic Medicine and Pathology
The aim of this study was to assess the frequency of “décollement,” traumatic lesions of subcutaneous soft tissue, among victims fatally injured in falls from different heights. Three hundred seventy-five fatalities due to injuries sustained in falls from various heights onto a solid, flat surface, in which a complete forensic autopsy was performed, were analyzed. Décollement was noted in 125 (33%) of the cases. Comparative analysis of the groups with and without décollement against the observed factors showed that the height of the fall and the manner of death have a statistically significant influence on the appearance of décollement. With regard to suicidal, accidental, or undefined origin of death, décollement is statistically more common in accidental deaths. Décollement provides important clues for forensic reconstruction and could be a significant indicator of the body’s position at primary impact and of the height from which the victim either jumped or fell.
There is a large body of research on utilizing online activity as a survey of political opinion to predict real-world election outcomes. There is considerably less work, however, on using this data to understand topic-specific interest and opinion amongst the general population and specific demographic subgroups, as currently measured by relatively expensive surveys. Here we investigate this possibility by studying a full census of all Twitter activity during the 2012 election cycle along with the comprehensive search history of a large panel of Internet users during the same period, highlighting the challenges in interpreting online and social media activity as the results of a survey. As noted in existing work, the online population is a non-representative sample of the offline world (e.g., the U.S. voting population). We extend this work to show how demographic skew and user participation are non-stationary and difficult to predict over time. In addition, the nature of user contributions varies substantially around important events. Furthermore, we note subtle problems in mapping what people are sharing or consuming online to specific sentiment or opinion measures around a particular topic. We provide a framework, built around considering this data as an imperfect continuous panel survey, for addressing these issues so that meaningful insight about public interest and opinion can be reliably extracted from online and social media data.
Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance, the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP, precluding the possibility of adult acquisition. Despite this view, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical-period adult population, even if the performance typically achieved by this population falls below that of a “true” AP possessor. The current studies attempt to understand individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization: to what extent does an individual’s general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition? Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predicted that individuals with higher WMC would be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical-period adults. Implications for understanding the mechanisms that underlie the phenomenon of AP are also discussed.
We present the first demonstration of the recording of optically encoded audio onto a plasmonic nanostructure. Analogous to the “optical sound” approach used in the early twentieth century to store sound on photographic film, we show that arrays of gold, pillar-supported bowtie nanoantennas (pBNAs) could be used in a similar fashion to store sound information that is transferred via an amplitude-modulated optical signal to the near field of an optical microscope. Retrieval of the audio information is achieved using standard imaging optics. We demonstrate that the sound information can be stored either as time-varying waveforms or in the frequency domain as the corresponding amplitude and phase spectra. A “plasmonic musical keyboard” comprising 8 basic musical notes is constructed and used to play a short song. For comparison, we employ the correlation coefficient, which reveals that the original and retrieved sound files are similar, with maximum and minimum values of 0.995 and 0.342, respectively. We also show that the pBNAs could be used for basic signal processing by ablating unwanted frequency components on the nanostructure, thereby enabling physical notch filtering of these components. Our work introduces a new application domain for plasmonic nanoantennas and experimentally verifies their potential for information processing.
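The similarity measure reported above is the standard correlation coefficient between the original and retrieved waveforms. The following is a minimal sketch of that comparison, not the authors' code: the function name and the synthetic test tone are stand-ins for illustration.

```python
import numpy as np

def waveform_similarity(original, retrieved):
    """Pearson correlation coefficient between two equal-length audio signals.

    Returns a value in [-1, 1]; values near 1 indicate the retrieved
    waveform closely tracks the original.
    """
    original = np.asarray(original, dtype=float)
    retrieved = np.asarray(retrieved, dtype=float)
    return float(np.corrcoef(original, retrieved)[0, 1])

# Illustrative check with a synthetic 440 Hz tone (stand-in for a real recording):
t = np.linspace(0.0, 1.0, 8000)
tone = np.sin(2 * np.pi * 440 * t)
print(waveform_similarity(tone, tone))  # a perfect retrieval gives ≈ 1.0
```

A score of 0.995, as reported for the best case, would mean the retrieved signal is nearly indistinguishable from the original under this measure.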
People intuitively match basic tastes to sounds of different pitches, and the matches that they make tend to be consistent across individuals. It is, though, not altogether clear what governs such crossmodal mappings between taste and auditory pitch. Here, we assess whether variations in taste intensity influence the matching of taste to pitch as well as the role of emotion in mediating such crossmodal correspondences. Participants were presented with 5 basic tastants at 3 concentrations. In Experiment 1, the participants rated the tastants in terms of their emotional arousal and valence/pleasantness, and selected a musical note (from 19 possible pitches ranging from C2 to C8) and loudness that best matched each tastant. In Experiment 2, the participants made emotion ratings and note matches in separate blocks of trials, then made emotion ratings for all 19 notes. Overall, the results of the 2 experiments revealed that both taste quality and concentration exerted a significant effect on participants' loudness selection, taste intensity rating, and valence and arousal ratings. Taste quality, not concentration levels, had a significant effect on participants' choice of pitch, but a significant positive correlation was observed between individual perceived taste intensity and pitch choice. A significant and strong correlation was also demonstrated between participants' valence assessments of tastants and their valence assessments of the best-matching musical notes. These results therefore provide evidence that: 1) pitch-taste correspondences are primarily influenced by taste quality, and to a lesser extent, by perceived intensity; and 2) such correspondences may be mediated by valence/pleasantness.
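For reference, the pitch range used in the matching task above (C2 to C8) spans six octaves. In twelve-tone equal temperament with A4 = 440 Hz, a note's frequency follows directly from its MIDI number; this sketch (the MIDI numbering and tuning convention are standard, not part of the study) shows the endpoints of that range:

```python
def midi_to_hz(midi_note: int) -> float:
    """Convert a MIDI note number to frequency in Hz.

    Assumes twelve-tone equal temperament with A4 = 440 Hz (MIDI note 69).
    """
    return 440.0 * 2 ** ((midi_note - 69) / 12)

C2, C8 = 36, 108  # standard MIDI note numbers for C2 and C8
print(round(midi_to_hz(C2), 2))  # C2 ≈ 65.41 Hz
print(round(midi_to_hz(C8), 2))  # C8 ≈ 4186.01 Hz
```

So the 19 candidate pitches offered to participants covered roughly 65 Hz to 4.2 kHz, a wide span for probing taste-to-pitch mappings.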
Music is a form of art interweaving people from all walks of life. Through subtle changes in frequency, a succession of musical notes forms a melody capable of mesmerizing the minds of people. With the advances in technology, we are now able to generate music electronically without relying solely on physical instruments. Here, we demonstrate a musical interpretation of droplet-based microfluidics as a novel form of electronic musical instrument. Using the interplay of electric fields and hydrodynamics in microfluidic devices, well-controlled frequency patterns corresponding to musical tracks are generated in real time. This high-speed modulation of droplet frequency (and therefore of droplet size) may also provide solutions that reconcile high-throughput droplet production with the control of individual droplets at production, which is needed for many biochemical or material synthesis applications.
Part of musical understanding and enjoyment stems from the ability to accurately predict which note (or one of a small set of notes) is likely to follow after hearing the first part of a melody. Selective violation of expectations can add to the aesthetic response, but radical or frequent violations are likely to be disliked or not comprehended. In this study we investigated whether a lifetime of exposure to music among untrained older adults would enhance their reaction to unexpected endings of unfamiliar melodies. Older and younger adults listened to melodies that had expected or unexpected ending notes, according to Western music theory. Ratings of goodness-of-fit were similar in the two groups, as was the ERP response to the note onset (N1). However, in later time windows (P200 and Late Positive Component), the amplitude of the response to both unexpected and expected endings was larger in older adults, corresponding to greater sensitivity, and more widespread in locus, consistent with a dedifferentiation pattern. Lateralization patterns also differed. We conclude that older adults refine their understanding of this important aspect of music throughout life, with the ability supported by changing patterns of neural activity.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the participants' task was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5, and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with a fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1, and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal paired with auditory stimuli of different frequencies.
Immediate and lasting effects of music or second-language training were examined in early childhood using event-related potentials. Event-related potentials were recorded for French vowels and musical notes in a passive oddball paradigm in thirty-six 4- to 6-year-old children who received either French or music training. Following training, both groups showed enhanced late discriminative negativity (LDN) in their trained condition (music group: musical notes; French group: French vowels) and reduced LDN in the untrained condition. These changes reflect improved processing of relevant (trained) sounds, and an increased capacity to suppress irrelevant (untrained) sounds. After 1 year, training-induced brain changes persisted and new hemispheric changes appeared. Such results provide evidence for the lasting benefit of early intervention in young children.