Concept: Massachusetts Institute of Technology
- Cambridge quarterly of healthcare ethics : CQ : the international journal of healthcare ethics committees
When I was invited some months ago by Tomi Kushner to contribute my memories on how some of us early birds came into bioethics, I hesitated. At the end of 2013 I published a memoir of my life in bioethics over nearly 50 years: In Search of the Good: A Life in Bioethics (MIT Press). I felt I had said all I wanted to say in that book about my life and couldn’t imagine doing it again, even in a short article. I was terminally bored with the topic: me.
This paper presents a method for detecting psychological stress levels. It explores the feasibility of using a single physiological signal as a more practical alternative to current approaches that rely on multiple physiological signals. In particular, the approach uses linear discriminant analysis (LDA) on the electrodermal activity (EDA) signal to discriminate between three stress levels: low, medium, and high. We used the MIT Media Lab ‘stress database’, from which we selected eleven foot-based EDA data sets for our experiments. From these, eighteen EDA features were extracted from sixty-six five-minute data segments corresponding to three driving conditions: at rest, on the open road (highway), and city driving. Fisher projection and LDA were then applied to the feature vectors to classify the stress levels, using both leave-one-out and test-set cross-validation strategies. These methods achieved a recognition rate of 81.82%, which, while lower than that of multiple-signal systems, may offer a better balance between recognition performance and computational load, and could be a promising line of research for the development of practical personal stress monitors.
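The classification pipeline described (a linear discriminant classifier evaluated with leave-one-out cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the EDA feature extraction is omitted, the data below is synthetic, and the classifier is a standard pooled-covariance LDA decision rule with equal priors rather than whatever exact Fisher-projection configuration the paper used.

```python
import numpy as np

def lda_fit(X, y):
    # Fit a multi-class LDA model: per-class means plus a pooled
    # within-class covariance (lightly regularized for stability).
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    cov = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum()
              for c in classes) / len(y)
    cov += 1e-6 * np.eye(X.shape[1])
    return classes, means, np.linalg.inv(cov)

def lda_predict(model, x):
    # Assign x to the nearest class mean under Mahalanobis distance,
    # which is the LDA decision rule with equal priors.
    classes, means, cov_inv = model
    dists = [float((x - means[c]) @ cov_inv @ (x - means[c])) for c in classes]
    return classes[int(np.argmin(dists))]

def loo_accuracy(X, y):
    # Leave-one-out cross-validation: refit on all samples but one,
    # test on the held-out sample, and average the hit rate.
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = lda_fit(X[mask], y[mask])
        hits += lda_predict(model, X[i]) == y[i]
    return hits / len(y)
```

In the paper's setting, each row of `X` would be one of the eighteen-dimensional EDA feature vectors for a five-minute segment and `y` the corresponding stress label (low, medium, high); the reported 81.82% would be the value returned by a procedure like `loo_accuracy`.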
mosdepth is available from https://github.com/brentp/mosdepth under the MIT license.
This study was intended to determine if previously identified educational benefits of the Harvard Medical School (HMS) Cambridge Integrated Clerkship (CIC) endure over time.
The phenomenon recently described as “hidden hearing loss” was the subject of a meeting co-hosted by the Department of Defense Hearing Center of Excellence and MIT Lincoln Laboratory to consider the potential relevance of noise-related synaptopathic injury to military settings and performance, service-related injury scenarios, and military medical priorities. Participants included approximately 50 researchers and subject matter experts from academic, federal, and military laboratories. Here we present a synthesis of discussion topics and concerns, as well as specific research objectives identified to develop militarily relevant knowledge.
The year 2016 marked the 75th anniversary of Dr. Saul Hertz first using radioiodine to treat a patient with thyroid disease. In November of 1936, a luncheon was held for the faculty of Harvard Medical School at which Karl Compton, PhD, president of the Massachusetts Institute of Technology, was invited to give a presentation entitled “What Physics Can Do for Biology and Medicine.” Saul Hertz, who attended the luncheon, spontaneously asked the very pertinent question that perhaps changed the course of treatment of thyroid disease: “Could iodine be made radioactive artificially?” We review the events leading up to the asking of this question, the preclinical investigations by Dr. Hertz and his colleague Arthur Roberts prior to the treatment of the first patient, and what occurred in the years following this landmark event. This commentary seeks to set the record straight regarding the sequence of events leading to the first radioiodine therapy, so that those involved can be recognized with due credit.
Noise exposure and the subsequent hearing loss are well documented aspects of military life. Numerous studies have indicated high rates of noise-induced hearing injury (NIHI) in active-duty service men and women, and recent statistics from the US Department of Veterans Affairs indicate a population of veterans with hearing loss that is growing at an increasing rate. In an effort to minimize hearing loss, the US Department of Defense (DoD) updated its Hearing Conservation Program in 2010, and also has recently revised the DoD Design Criteria Standard Noise Limits (MIL-STD-1474E) which defines allowable noise levels in the design of all military acquisitions including weapons and vehicles. Even with such mandates, it remains a challenge to accurately quantify the noise exposure experienced by a Warfighter over the course of a mission or training exercise, or even in a standard work day. Noise dosimeters are intended for exactly this purpose, but variations in device placement (e.g., free-field, on-body, in/near-ear), hardware (e.g., microphone, analog-to-digital converter), measurement time (e.g., work day, 24-h), and dose metric calculations (e.g., time-weighted energy, peak levels, Auditory Risk Units), as well as noise types (e.g., continuous, intermittent, impulsive) can cause exposure measurements to be incomplete, inaccurate, or inappropriate for a given situation. This paper describes the design of a noise dosimeter capable of acquiring exposure data across tactical environments. Two generations of prototypes have been built at MIT Lincoln Laboratory with funding from the US Army, Navy, and Marine Corps. Details related to hardware, signal processing, and testing efforts are provided, along with example tactical military noise data and lessons learned from early fieldings. 
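The "time-weighted energy" dose metric mentioned above can be illustrated with a short sketch. This is a generic equal-energy (3-dB exchange rate) dose calculation of the kind used in hearing-conservation dosimetry, not the Lincoln Laboratory device's actual processing; the 85 dBA / 8 h criterion used as the default is an assumption for illustration.

```python
import math

def leq(spl_samples_db):
    # Equivalent continuous level: average the samples on an energy
    # (10^(L/10)) basis, then convert back to decibels. This is the
    # equal-energy principle behind the 3-dB exchange rate.
    mean_energy = sum(10 ** (l / 10) for l in spl_samples_db) / len(spl_samples_db)
    return 10 * math.log10(mean_energy)

def noise_dose_percent(spl_samples_db, seconds_per_sample,
                       criterion_db=85.0, criterion_hours=8.0):
    # Percent of the daily allowable exposure accumulated. Each 3-dB
    # increase above the criterion level halves the allowable duration.
    total = 0.0
    for l in spl_samples_db:
        allowed_seconds = criterion_hours * 3600 * 2 ** ((criterion_db - l) / 3)
        total += seconds_per_sample / allowed_seconds
    return 100 * total
```

For example, eight one-hour samples at exactly the criterion level accumulate a 100% dose; impulsive military noise, as the abstract notes, requires different metrics (peak levels, Auditory Risk Units) that a simple energy average does not capture.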
Finally, we discuss the continued need to prioritize personalized dosimetry in order to improve models that predict or characterize the risk of auditory damage, to integrate dosimeters with hearing-protection devices, and to inform strategies and metrics for reducing NIHI.
The article considers the history of electroshock therapy as a history of medical technology, professional cooperation, and business competition. A variation of a history from below is intended, though not from the patients' perspective (Porter, Theory Soc 14:175-198, 1985), but with a focus on electrodes, circuitry, and patents. Such a ‘material history’ of electroshock therapy reveals that the technical make-up of electroshock devices, and what they were used for, was relative to the changing interests of physicians, industrial companies, and mental health politics; it makes an intriguing case for the Social Construction of Technology theory (Bijker et al., The social construction of technological systems: new directions in the sociology and history of technology. MIT Press, Cambridge, MA, 1987).
Tall buildings are ubiquitous in major cities and contain the homes and workplaces of many people. However, relatively few studies have been carried out on the dynamic characteristics of tall buildings based on field measurements. In this paper, the dynamic behavior of the Green Building, a unique 21-story tall structure located on the campus of the Massachusetts Institute of Technology (MIT, Cambridge, MA, USA), was characterized and modeled as a simplified lumped-mass beam model (SLMM), using data from a network of accelerometers. The accelerometer network was used to record structural responses due to ambient vibrations, blast loading, and the October 16th, 2012 earthquake near Hollis Center (ME, USA). Spectral and signal coherence analysis of the collected data was used to identify natural frequencies, modes, foundation rocking behavior, and structural asymmetries. A relation between foundation rocking and structural natural frequencies was also found. Natural frequencies and structural accelerations from the field measurements were compared with those predicted by the SLMM, which was updated by inverse solving with advanced multiobjective optimization methods using the measured structural responses; the model and measurements were found to be in good agreement.
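The core of a lumped-mass model like the SLMM is a generalized eigenvalue problem relating story masses and stiffnesses to the natural frequencies identified from the accelerometer data. The sketch below is a generic shear-building idealization (each floor a point mass, each story a lateral spring), not the paper's calibrated model; the masses and stiffnesses are placeholders that an inverse-solving step would tune to match measured frequencies.

```python
import numpy as np

def shear_building_frequencies(masses, stiffnesses):
    # Natural frequencies (Hz) of an n-story lumped-mass shear building.
    # masses[i] is the mass of floor i; stiffnesses[i] is the lateral
    # stiffness of the story below floor i (story 0 connects to ground).
    # Solves the eigenvalue problem K.phi = omega^2 M.phi.
    n = len(masses)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += stiffnesses[i]
        if i + 1 < n:
            # coupling between floor i and the floor above
            K[i, i] += stiffnesses[i + 1]
            K[i, i + 1] = K[i + 1, i] = -stiffnesses[i + 1]
    M_inv = np.diag(1.0 / np.asarray(masses, dtype=float))
    omega_sq = np.sort(np.linalg.eigvals(M_inv @ K).real)
    return np.sqrt(omega_sq) / (2 * np.pi)
```

Model updating of the kind described in the abstract would treat `masses` and `stiffnesses` as design variables and minimize the mismatch between these predicted frequencies (and mode shapes) and the ones extracted from the spectral analysis of the recorded accelerations.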
Fictive motion (FM) characterizes the use of dynamic expressions to describe static scenes. This phenomenon is crucial in terms of cognitive motivations for language use; several explanations have been proposed to account for it, among them mental simulation (Talmy in Toward a cognitive semantics, vol 1. MIT Press, Cambridge, 2000) and visual scanning (Matlock in Studies in linguistic motivation. Mouton de Gruyter, Berlin and New York, pp 221-248, 2004a). The aims of this paper were to test these competing explanations and identify language-specific constraints. To do this, we compared the linguistic strategies for expressing several types of static configurations in four languages, French, Italian, German and Serbian, with an experimental set-up (59 participants). The experiment yielded significant differences for motion-affordance versus no motion-affordance, for all four languages. Significant differences between languages included the mean frequency of FM expressions. In order to refine the picture, and more specifically to disentangle the respective roles of language-specific conventions and language-independent (i.e. possibly cognitive) motivations, we completed our study with a corpus approach (besides the four initial languages, we added English and Polish). The corpus study showed low frequency of FM across languages, but a higher frequency and translation ratio for some FM types, among which those best accounted for by enactive perception. The importance of enactive perception could thus explain both the universality of FM and the fact that language-specific conventions appear mainly in very specific contexts: the ones furthest from enaction.