Vasopressin neurons, responding to input generated by osmotic pressure, use an intrinsic mechanism to shift from slow irregular firing to a distinct phasic pattern, consisting of long bursts and silences lasting tens of seconds. With increased input, bursts lengthen, eventually shifting to continuous firing. The phasic activity remains asynchronous across the cells and is not reflected in the population output signal. Here we have used a computational vasopressin neuron model to investigate the functional significance of the phasic firing pattern. We generated a concise model of the synaptic-input-driven spike-firing mechanism that gives a close quantitative match to vasopressin neuron spike activity recorded in vivo, tested against endogenous activity and experimental interventions. The integrate-and-fire-based model provides a simple physiological explanation of the phasic firing mechanism, involving an activity-dependent slow depolarising afterpotential (DAP) generated by a calcium-inactivated potassium leak current. The DAP is modulated by the slower, opposing action of activity-dependent dendritic dynorphin release, which inactivates it; these opposing effects generate successive periods of bursting and silence. Model cells are not spontaneously active, but fire when driven by random perturbations mimicking synaptic input. We constructed one population of such phasic neurons, and another population of similar cells that lacked the ability to fire phasically. We then studied how these two populations differed in the way that they encoded changes in afferent inputs. By comparison with the non-phasic population, the phasic population responds linearly to increases in tonic synaptic input. Non-phasic cells respond to transient elevations in synaptic input in a way that strongly depends on background activity levels, whereas phasic cells respond in a way that is independent of background levels and show a similarly strong linearization of the response.
These findings show large differences in information coding between the populations, and apparent functional advantages of asynchronous phasic firing.
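The burst mechanism described above can be sketched as a toy integrate-and-fire simulation: each spike builds up a DAP-like variable that drives further firing, while a slower dynorphin-like variable accumulates and inactivates the DAP. All parameters, and the divisive form of the inactivation, are illustrative assumptions, not fitted values from the model in the text.

```python
import random

def simulate_phasic_neuron(t_max=200.0, dt=0.001, seed=1):
    """Toy integrate-and-fire cell with an activity-dependent DAP
    opposed by a slower dynorphin-like variable.  All parameters
    are illustrative, not fitted values."""
    random.seed(seed)
    v, dap, dyn = 0.0, 0.0, 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        noise = random.gauss(0.0, 1.0)      # random perturbations mimicking synaptic input
        drive = dap / (1.0 + dyn)           # dynorphin divisively inactivates the DAP
        v += dt * (-v / 0.02 + 40.0 * drive) + 0.2 * noise
        dap -= dt * dap / 2.0               # DAP decays over seconds
        dyn -= dt * dyn / 20.0              # dynorphin decays over tens of seconds
        if v >= 1.0:                        # threshold crossing -> spike, reset
            spikes.append(step * dt)
            v = 0.0
            dap += 0.05                     # each spike builds the DAP...
            dyn += 0.01                     # ...and, more slowly, dynorphin
    return spikes
```

Because the DAP regenerates quickly while dynorphin accumulates and decays on a much slower timescale, the two variables alternately dominate, which is the qualitative origin of the burst-silence cycle.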
BACKGROUND: Eritrea's gross national income of Int$610 per capita is lower than the average for Africa (Int$1620) and considerably lower than the global average (Int$6977). It is therefore imperative that the country’s resources, including those specifically allocated to the health sector, are put to optimal use. The objectives of this study were (a) to estimate the relative technical and scale efficiency of public secondary level community hospitals in Eritrea, based on data generated in 2007, (b) to estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient, and (c) to estimate using Tobit regression analysis the impact of institutional and contextual/environmental variables on hospital inefficiencies. METHODS: A two-stage Data Envelopment Analysis (DEA) method is used to estimate efficiency of hospitals and to explain the inefficiencies. In the first stage, the efficient frontier and the hospital-level efficiency scores are estimated using DEA. In the second stage, the estimated DEA efficiency scores are regressed on institutional and contextual/environmental variables using a Tobit model. In 2007 there were a total of 20 secondary public community hospitals in Eritrea, nineteen of which generated data that could be included in the study. The input and output data were obtained from the Ministry of Health (MOH) annual health service activity report of 2007. Since our study employs data that are five years old, the results are not meant to uncritically inform current decision-making processes, but rather to illustrate the potential value of such efficiency analyses. RESULTS: The key findings were as follows: (i) the average constant returns to scale technical efficiency score was 90.3%; (ii) the average variable returns to scale technical efficiency score was 96.9%; and (iii) the average scale efficiency score was 93.3%.
In 2007, the inefficient hospitals could have become more efficient by either increasing their outputs by 20,611 outpatient visits and 1,806 hospital discharges, or by transferring the excess inputs of 2.478 doctors (2.85%), 9.914 nurses and midwives (0.98%), 9.774 laboratory technicians (9.68%), and 195 beds (10.42%) to primary care facilities such as health centres, health stations, and maternal and child health clinics. In the Tobit regression analysis, the coefficient for OPDIPD (outpatient visits as a proportion of inpatient days) had a negative sign and was statistically significant, and the coefficient for ALOS (average length of stay) had a positive sign and was statistically significant at the 5% level of significance. CONCLUSIONS: The findings from the first-stage analysis imply that 68% of hospitals were variable returns to scale technically efficient, and only 42% of hospitals achieved scale efficiency. On average, inefficient hospitals could have increased their outpatient visits by 5.05% and hospital discharges by 3.42% using the same resources. Our second-stage analysis shows that the ratio of outpatient visits to inpatient days and average length of inpatient stay are significantly correlated with hospital inefficiencies. This study shows that routinely collected hospital data in Eritrea can be used to identify relatively inefficient hospitals as well as the sources of their inefficiencies.
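For illustration, the core idea of the first-stage DEA can be sketched for the simplest case of one input and one output, where constant-returns-to-scale technical efficiency reduces to each unit's output/input ratio relative to the best ratio in the sample. The hospital figures below are invented; the study's multi-input, multi-output setting requires a full linear-programming formulation.

```python
def dea_crs_efficiency(inputs, outputs):
    """Single-input, single-output DEA under constant returns to
    scale: each unit's technical efficiency is its output/input
    ratio relative to the best ratio observed in the sample."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical hospitals: beds as the sole input, discharges as the sole output.
beds = [100, 80, 120]
discharges = [900, 800, 840]
print(dea_crs_efficiency(beds, discharges))  # [0.9, 1.0, 0.7]
```

A score of 1.0 places the unit on the efficient frontier; a score of 0.7 means the unit produces only 70% of the output achieved per unit of input by the best performer.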
The process of documentation in electronic health records (EHRs) is known to be time consuming, inefficient, and cumbersome. The use of dictation coupled with manual transcription has become an increasingly common practice. In recent years, natural language processing (NLP)-enabled data capture has become a viable alternative for data entry. It enables the clinician to maintain control of the process and potentially reduce the documentation burden. The question remains how this NLP-enabled workflow will impact EHR usability and whether it can meet the structured data and other EHR requirements while enhancing the user’s experience.
Growing evidence suggests that anthropogenic litter, particularly plastic, represents a highly pervasive and persistent threat to global marine ecosystems. Multinational research is progressing to characterise its sources, distribution and abundance so that interventions aimed at reducing future inputs and clearing extant litter can be developed. Citizen science projects, whereby members of the public gather information, offer a low-cost method of collecting large volumes of data with considerable temporal and spatial coverage. Furthermore, such projects raise awareness of environmental issues and can lead to positive changes in behaviours and attitudes. We present data collected over a decade (2005-2014 inclusive) by Marine Conservation Society (MCS) volunteers during beach litter surveys carried out along the British coastline, with the aim of increasing knowledge of the composition, spatial distribution and temporal trends of coastal debris. Unlike many citizen science projects, the MCS beach litter survey programme gathers information on the number of volunteers, duration of surveys and distances covered. This comprehensive information provides an opportunity to standardise data for variation in sampling effort among surveys, enhancing the value of outputs and robustness of findings. We found that plastic is the main constituent of anthropogenic litter on British beaches and the majority of traceable items originate from land-based sources, such as public littering. We identify the coast of the Western English Channel and Celtic Sea as experiencing the highest relative litter levels. Increasing trends over the 10-year time period were detected for a number of individual item categories, yet no statistically significant change in total (effort-corrected) litter was detected. We discuss the limitations of the dataset and make recommendations for future work.
The study demonstrates the value of citizen science data in providing insights that would otherwise not be possible due to logistical and financial constraints of running government-funded sampling programmes on such large scales.
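The effort standardisation described above can be sketched as follows, computing mean items per 100 m surveyed for each coastal region. The field names and figures are illustrative placeholders, not the MCS data schema; the programme also records survey duration and volunteer numbers, which a fuller correction could incorporate.

```python
def effort_corrected_totals(surveys):
    """surveys: list of dicts with 'region', 'items', 'length_m'.
    Returns the mean litter density (items per 100 m) per region,
    a simple standardisation for variation in sampling effort."""
    sums, counts = {}, {}
    for s in surveys:
        density = 100.0 * s["items"] / s["length_m"]  # items per 100 m
        sums[s["region"]] = sums.get(s["region"], 0.0) + density
        counts[s["region"]] = counts.get(s["region"], 0) + 1
    return {r: sums[r] / counts[r] for r in sums}
```

Averaging densities rather than raw counts prevents long or heavily surveyed beaches from dominating regional comparisons.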
Recent advances in deep learning, and specifically in generative adversarial networks, have demonstrated surprising results in generating new images and videos on request, even using natural language as input. In this paper we present the first application of generative adversarial autoencoders (AAE) for generating novel molecular fingerprints with a defined set of parameters. We developed a 7-layer AAE architecture with the latent middle layer serving as a discriminator. As input and output, the AAE uses a vector of binary fingerprints and the concentration of the molecule. In the latent layer we also introduced a neuron encoding growth inhibition percentage, a negative value of which indicates a reduction in the number of tumor cells after treatment. To train the AAE we used the NCI-60 cell line assay data for 6252 compounds profiled on the MCF-7 cell line. The output of the AAE was used to screen 72 million compounds in PubChem and select candidate molecules with potential anti-cancer properties. This approach is a proof of concept of an artificially intelligent drug discovery engine, where AAEs are used to generate new molecular fingerprints with the desired molecular properties.
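A minimal sketch of the encode/decode pass over such an input is shown below, with an untrained random-weight network. The layer widths, fingerprint length and activation are illustrative assumptions, not the paper's architecture, and the adversarial training of the latent layer is not reproduced here.

```python
import numpy as np

def make_mlp(sizes, rng):
    """Random-weight MLP layers (illustrative only, untrained)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for w, b in layers:
        x = np.tanh(x @ w + b)
    return x

# Toy version of the setup: input is a binary molecular fingerprint
# concatenated with a concentration value; the latent code reserves
# one neuron for growth inhibition percentage.
rng = np.random.default_rng(0)
n_bits = 166                              # e.g. a MACCS-like fingerprint length
encoder = make_mlp([n_bits + 1, 128, 8], rng)
decoder = make_mlp([8, 128, n_bits + 1], rng)

fp = rng.integers(0, 2, n_bits).astype(float)
x = np.concatenate([fp, [0.5]])           # fingerprint + concentration
z = forward(encoder, x)                   # z[0] would be the GI% neuron
x_hat = forward(decoder, z)               # reconstructed fingerprint + concentration
```

After training, generation runs the decoder alone: fixing the growth-inhibition neuron to a desired value and sampling the remaining latent dimensions yields candidate fingerprints with the requested property.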
We propose and develop a Lexicocalorimeter: an online, interactive instrument for measuring the “caloric content” of social media and other large-scale texts. We do so by constructing extensive yet improvable tables of food- and activity-related phrases, and assigning them sourced estimates of caloric intake and expenditure, respectively. We show that for Twitter, our naive measures of “caloric input” and “caloric output”, and the ratio of these measures, all correlate strongly with health and well-being measures for the contiguous United States. Our caloric balance measure in many cases outperforms both its constituent quantities; is tunable to specific health and well-being measures such as diabetes rates; has the capability of providing a real-time signal reflecting a population’s health; and has the potential to be used alongside traditional survey data in the development of public policy and collective self-awareness. Because our Lexicocalorimeter is a linear superposition of principled phrase scores, we also show we can move beyond correlations to explore what people talk about in collective detail, and assist in the understanding and explanation of how population-scale conditions vary, a capacity unavailable to black-box type methods.
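Because the instrument is a linear superposition of phrase scores, its core computation can be sketched in a few lines. The phrase tables and calorie values below are invented placeholders standing in for the curated, sourced tables of the study, and real matching would operate on phrases rather than single words.

```python
FOOD_CALORIES = {          # caloric intake per mention (invented values)
    "pizza": 285, "apple": 95, "donut": 250,
}
ACTIVITY_CALORIES = {      # caloric expenditure per mention (invented values)
    "running": 300, "yoga": 180, "walking": 150,
}

def caloric_balance(text):
    """Naive caloric input, caloric output, and their ratio for a
    text: sum calorie estimates over matched food and activity words."""
    words = text.lower().split()
    cal_in = sum(FOOD_CALORIES.get(w, 0) for w in words)
    cal_out = sum(ACTIVITY_CALORIES.get(w, 0) for w in words)
    ratio = cal_in / cal_out if cal_out else float("inf")
    return cal_in, cal_out, ratio

caloric_balance("pizza then running and yoga")  # returns (285, 480, 0.59375)
```

Because each phrase contributes a known, additive score, the measure stays fully inspectable: any regional difference in the balance can be decomposed into the specific phrases driving it.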
To successfully guide limb movements, the brain takes in sensory information about the limb, internally tracks the state of the limb, and produces appropriate motor commands. It is widely believed that this process uses an internal model, which describes our prior beliefs about how the limb responds to motor commands. Here, we leveraged a brain-machine interface (BMI) paradigm in rhesus monkeys and novel statistical analyses of neural population activity to gain insight into moment-by-moment internal model computations. We discovered that a mismatch between subjects' internal models and the actual BMI explains roughly 65% of movement errors, as well as long-standing deficiencies in BMI speed control. We then used the internal models to characterize how the neural population activity changes during BMI learning. More broadly, this work provides an approach for interpreting neural population activity in the context of how prior beliefs guide the transformation of sensory input to motor output.
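A one-step sketch of such a linear forward (internal) model is given below, with invented dynamics matrices standing in for the subject's beliefs; the study's actual models were estimated from neural population activity.

```python
def internal_model_step(x, u, A, B):
    """One step of a linear forward (internal) model: the predicted
    next state given current state x and motor command u, under the
    believed dynamics (A, B).  If A, B differ from the actual BMI
    mapping, the prediction error shows up as movement error."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u
            for i in range(n)]

# Invented 2-D example: position-velocity state, command drives velocity.
A = [[1.0, 1.0], [0.0, 1.0]]
B = [0.0, 1.0]
print(internal_model_step([1.0, 0.0], 0.5, A, B))  # [1.0, 0.5]
```

Comparing such predictions against the cursor states actually produced by the BMI is one way to quantify how much of the movement error a mismatched internal model can account for.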
There are many challenges to measuring power input and force output from a flapping vertebrate. Animals can vary a multitude of kinematic parameters simultaneously, and methods for measuring power and force are either not possible in a flying vertebrate or are very time- and equipment-intensive. To circumvent these challenges, we constructed a robotic, multi-articulated bat wing that allows us to measure power input and force output simultaneously, across a range of kinematic parameters. The robot is modeled after the lesser dog-faced fruit bat, Cynopterus brachyotis, and contains seven joints powered by three servo motors. Collectively, this joint and motor arrangement allows the robot to vary wingbeat frequency, wingbeat amplitude, stroke plane, downstroke ratio, and wing folding. We describe the design, construction, programming, instrumentation, characterization, and analysis of the robot. We show that the kinematics, inputs, and outputs demonstrate good repeatability both within and among trials. Finally, we describe lessons about the structure of living bats learned from trying to mimic their flight in a robotic wing.
In this paper we present the first comprehensive bibliometric analysis of eleven open-access mega-journals (OAMJs). OAMJs are a relatively recent phenomenon, and have been characterised as having four key characteristics: large size; broad disciplinary scope; a Gold-OA business model; and a peer-review policy that seeks to determine only the scientific soundness of the research rather than evaluate the novelty or significance of the work. Our investigation focuses on four key modes of analysis: journal outputs (the number of articles published and changes in output over time); OAMJ author characteristics (nationalities and institutional affiliations); subject areas (the disciplinary scope of OAMJs, and variations in sub-disciplinary output); and citation profiles (the citation distributions of each OAMJ, and the impact of citing journals). We found that while the total output of the eleven mega-journals grew by 14.9% between 2014 and 2015, this growth is largely attributable to the increased output of Scientific Reports and Medicine. We also found substantial variation in the geographical distribution of authors. Several journals have a relatively high proportion of Chinese authors, and we suggest this may be linked to these journals' high Journal Impact Factors (JIFs). The mega-journals were also found to vary in subject scope, with several journals publishing disproportionately high numbers of articles in certain sub-disciplines. Our citation analysis offers support for Björk & Catani’s suggestion that OAMJs’ citation distributions can be similar to those of traditional journals, while noting considerable variation in citation rates across the eleven titles.
We conclude that while the OAMJ term is useful as a means of grouping journals which share a set of key characteristics, there is no such thing as a “typical” mega-journal, and we suggest several areas for additional research that might help us better understand the current and future role of OAMJs in scholarly communication.
Despite recent efforts to enforce policies requiring the sharing of data underlying clinical findings, current policies of biomedical journals remain largely heterogeneous. As this heterogeneity does not optimally serve the cause of data sharing, a first step towards better harmonization would be the requirement of a data sharing statement for all clinical studies and not simply for randomized studies. Although the publication of a data sharing statement does not imply that all data is made readily available, such a policy would swiftly implement a cultural change in the definition of scientific outputs. Currently, a scientific output only corresponds to a study report published in a medical journal, while in the near future it might consist of all materials described in the manuscript, including all relevant raw data. When such a cultural shift has been achieved, the logical conclusion would be for biomedical journals to require authors to make all data fully available without restriction as a condition for publication.