Ultrafast video recording of spatiotemporal light distribution in a scattering medium has a significant impact on biomedicine. Although many simulation tools have been implemented to model light propagation in scattering media, existing experimental instruments still lack sufficient imaging speed to record transient light-scattering events in real time. We report single-shot ultrafast video recording of a light-induced photonic Mach cone propagating in an engineered scattering plate assembly. This dynamic light-scattering event was captured in a single camera exposure by lossless-encoding compressed ultrafast photography at 100 billion frames per second. Our experimental results are in excellent agreement with theoretical predictions by time-resolved Monte Carlo simulation. This technology holds great promise for next-generation biomedical imaging instrumentation.
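As a toy illustration of the time-resolved Monte Carlo technique mentioned above, the sketch below traces photons through a one-dimensional, isotropically scattering slab and collects their transit times. The slab geometry, scattering coefficient, and all parameter values are invented for illustration and are not those of the experiment:

```python
import math
import random

def time_resolved_mc(n_photons=2000, mu_s=10.0, slab=1.0, c=1.0, seed=0):
    """Trace photons through a 1-D slab with isotropic scattering and
    return the transit times of photons that exit the far side.
    mu_s: scattering coefficient, slab: thickness, c: speed of light
    in the medium (all in arbitrary, invented units)."""
    rng = random.Random(seed)
    exit_times = []
    for _ in range(n_photons):
        z, mu_z, path = 0.0, 1.0, 0.0   # depth, direction cosine, path length
        while 0.0 <= z <= slab:
            step = -math.log(1.0 - rng.random()) / mu_s  # exponential free path
            z += mu_z * step
            path += step
            mu_z = rng.uniform(-1.0, 1.0)                # isotropic re-scattering
        if z > slab:                                     # transmitted photon
            exit_times.append(path / c)                  # arrival time
    return exit_times

times = time_resolved_mc()
```

A histogram of `times` would give the diffuse temporal point-spread function that a real time-resolved simulation compares against measurement; photons exiting at `z < 0` (back-reflected) are simply discarded here.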
Myxococcus xanthus cells self-organize into periodic bands of traveling waves, termed ripples, during multicellular fruiting body development and predation on other bacteria. To investigate the mechanistic basis of rippling behavior and its physiological role during predation by this Gram-negative soil bacterium, we have used an approach that combines mathematical modeling with experimental observations. Specifically, we developed an agent-based model (ABM) to simulate rippling behavior that employs a new signaling mechanism to trigger cellular reversals. The ABM has demonstrated that three ingredients are sufficient to generate rippling behavior: (i) side-to-side signaling between two cells that causes one of the cells to reverse, (ii) a minimal refractory time period after each reversal during which cells cannot reverse again, and (iii) physical interactions that cause the cells to locally align. To explain why rippling behavior appears as a consequence of the presence of prey, we postulate that prey-associated macromolecules indirectly induce ripples by stimulating side-to-side contact-mediated signaling. In parallel to the simulations, M. xanthus predatory rippling behavior was experimentally observed and analyzed using time-lapse microscopy. A formalized relationship between the wavelength, reversal time, and cell velocity has been predicted by the simulations and confirmed by the experimental data. Furthermore, the results suggest that the physiological role of rippling behavior during M. xanthus predation is to increase the rate of spreading over prey cells due to increased side-to-side contact-mediated signaling and to allow predatory cells to remain on the prey longer as a result of more periodic cell motility.
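The three ingredients above can be caricatured in a one-dimensional agent-based sketch. The domain, rates, and contact rule below are invented simplifications (real cells align in two dimensions, ingredient iii), not the paper's actual ABM:

```python
import random

def ripple_abm(n_agents=100, length=50.0, v=1.0, refractory=8,
               steps=200, dt=0.1, contact=0.5, seed=1):
    """Caricature of the three ingredients: (i) contact between
    opposite-moving agents triggers a reversal, (ii) a refractory
    period of `refractory` steps blocks immediate re-reversal,
    (iii) alignment is trivially enforced by the 1-D geometry.
    All parameter values are invented."""
    rng = random.Random(seed)
    pos = [rng.uniform(0.0, length) for _ in range(n_agents)]
    dirn = [rng.choice((-1, 1)) for _ in range(n_agents)]
    clock = [refractory] * n_agents        # steps since last reversal
    reversals = 0
    for _ in range(steps):
        for i in range(n_agents):
            pos[i] = (pos[i] + dirn[i] * v * dt) % length   # periodic domain
            clock[i] += 1
        for i in range(n_agents):
            if clock[i] < refractory:      # still refractory: cannot reverse
                continue
            for j in range(n_agents):
                if dirn[j] == -dirn[i] and abs(pos[i] - pos[j]) < contact:
                    dirn[i] = -dirn[i]     # side-to-side signal: reverse
                    clock[i] = 0
                    reversals += 1
                    break
    return pos, reversals

pos, reversals = ripple_abm()
```

In such models the interplay of reversal-on-contact and the refractory period is what sets the characteristic spacing of counter-propagating density bands.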
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
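As a hedged illustration of how a policy prior and a value estimate can be combined during tree search, the sketch below implements a generic PUCT-style selection rule with made-up node statistics; it is not AlphaGo's actual algorithm or parameters:

```python
import math

def puct_select(stats, c_puct=1.0):
    """Pick the move maximising Q + U, where Q is the mean value and U is
    an exploration bonus favouring high-prior, rarely visited moves.
    `stats` maps move -> (visit count N, total value W, prior P).
    A generic illustration, not AlphaGo's implementation."""
    total_n = sum(n for n, _, _ in stats.values())
    best_move, best_score = None, -float("inf")
    for move, (n, w, p) in stats.items():
        q = w / n if n > 0 else 0.0                    # exploitation term
        u = c_puct * p * math.sqrt(total_n) / (1 + n)  # exploration term
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move

# Invented node statistics: move "b" has a high mean value despite few visits.
stats = {"a": (10, 6.0, 0.5), "b": (1, 0.8, 0.3), "c": (0, 0.0, 0.2)}
choice = puct_select(stats)
```

Here the policy network would supply the priors `P` and the value network (mixed with rollout results) the accumulated values `W`.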
BACKGROUND: In order to replicate within their cellular host, many viruses have developed self-assembly strategies for their capsids which are sufficiently robust as to be reconstituted in vitro. Mathematical models for virus self-assembly usually assume that the bonds leading to cluster formation have constant reactivity over the time course of assembly (direct assembly). In some cases, however, binding sites between the capsomers have been reported to be activated during the self-assembly process (hierarchical assembly). RESULTS: In order to study the possible advantages of such hierarchical schemes for icosahedral virus capsid assembly, we use Brownian dynamics simulations of a patchy particle model that allows us to switch binding sites on and off during assembly. For T1 viruses, we implement a hierarchical assembly scheme in which inter-capsomer bonds become active only once a complete pentamer has been assembled. We find direct assembly to be favorable for reversible bonds that allow repeated structural reorganization, while hierarchical assembly is favorable for strong bonds with small dissociation rates, as this situation is less prone to kinetic trapping; at the same time, however, it is more vulnerable to monomer starvation during the final phase. Increasing the number of initial monomers has only a weak effect on these general features. The differences between the two assembly schemes become more pronounced for more complex virus geometries, as shown here for T3 viruses, which assemble through homogeneous pentamers and heterogeneous hexamers in the hierarchical scheme. In order to complement the simulations for this more complicated case, we introduce a master equation approach that agrees well with the simulation results. CONCLUSIONS: Our analysis shows for which molecular parameters hierarchical assembly schemes can outperform direct ones. Hierarchical assembly is superior as it avoids kinetic trapping, but it suffers more strongly from monomer starvation. These insights increase our physical understanding of an essential biological process, with many interesting potential applications in medicine and materials science.
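To illustrate the flavor of a master-equation description of step-wise capsid growth, the sketch below integrates mean-field rate equations for irreversible pentamer assembly; the reaction scheme and rate constant are invented and far simpler than the patchy-particle model above:

```python
def pentamer_kinetics(m0=1.0, k_on=1.0, t_end=20.0, dt=1e-3):
    """Forward-Euler integration of mean-field rate equations (the
    deterministic limit of a master equation) for irreversible
    step-wise pentamer growth: c1 + c_i -> c_{i+1}, i = 1..4.
    Concentrations, rate, and times are in invented units."""
    c = [m0, 0.0, 0.0, 0.0, 0.0]     # c[i] = concentration of (i+1)-mers
    for _ in range(int(t_end / dt)):
        f = [k_on * c[0] * c[i] for i in range(4)]        # growth fluxes
        c[0] += dt * (-2.0 * f[0] - f[1] - f[2] - f[3])   # monomer loss
        for i in range(1, 4):
            c[i] += dt * (f[i - 1] - f[i])                # intermediates
        c[4] += dt * f[3]            # completed pentamers accumulate
    return c

conc = pentamer_kinetics()
```

Because growth here is irreversible, the late assembly steps stall as the monomer pool `conc[0]` empties, a crude mean-field analogue of the monomer starvation discussed above; total glucose-equivalent mass is conserved by construction.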
Computational neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time of the simulation grows. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel, and have shown significant improvements in execution time over Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high-performance computing devices in each of them. It has built-in leaky integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to plug in their own neuron models as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across eight machines, each with two video cards.
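As a minimal sketch of one of the built-in neuron models (LIF), the following integrates a single leaky integrate-and-fire neuron with forward Euler; the parameter values are illustrative, not NCS6 defaults:

```python
def simulate_lif(i_ext=1.5, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0,
                 dt=0.1, t_end=100.0):
    """Forward-Euler integration of a leaky integrate-and-fire neuron,
    tau * dV/dt = -(V - v_rest) + i_ext, with spike-and-reset at the
    threshold.  Parameter values are illustrative, not NCS6 defaults."""
    v, spikes = v_rest, []
    for step in range(int(t_end / dt)):
        v += dt / tau * (-(v - v_rest) + i_ext)
        if v >= v_th:                  # threshold crossed: emit spike
            spikes.append(step * dt)   # record spike time
            v = v_reset                # and reset the membrane potential
    return spikes

spikes = simulate_lif()
```

With a constant suprathreshold drive the neuron fires periodically; a large-scale simulator repeats this per-neuron update in parallel across millions of cells while exchanging spike events between machines.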
The power and performance management problem in large-scale computing systems such as data centers has attracted considerable interest from both enterprises and academic researchers, as power saving has become increasingly important in many fields. Because of the multiple objectives, multiple influential factors, and hierarchical structure of such systems, the problem is complex and difficult. In this paper, the problem is investigated in a virtualized computing system. Specifically, it is formulated as a power optimization problem with constraints on performance. An adaptive controller based on a least-squares self-tuning regulator (LS-STR) is first designed to track performance; the resource allocation computed by the controller is then applied so as to minimize power consumption. Simulations were designed to test the effectiveness of this method and to compare it with other controllers. The simulation results show that the adaptive controller is generally effective: it is applicable to different performance metrics, to different workloads, and to both single and multiple workloads; it tracks the performance requirement effectively and reduces power consumption significantly.
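The estimation core of a least-squares self-tuning regulator is recursive least squares (RLS). The sketch below shows one RLS update for a two-parameter model, with an invented identification example; it is not the paper's controller:

```python
def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares (RLS) step for the model y = theta . phi,
    with forgetting factor lam; this is the estimator at the core of a
    self-tuning regulator.  theta: parameter estimates, P: covariance
    matrix (list of lists), phi: regressor vector, y: measured output."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    k = [Pphi[i] / denom for i in range(n)]               # gain vector
    err = y - sum(theta[i] * phi[i] for i in range(n))    # prediction error
    theta = [theta[i] + k[i] * err for i in range(n)]     # parameter update
    P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(n)]
         for i in range(n)]                               # covariance update
    return theta, P

# Invented identification example: recover y = 2*u - 1 from noiseless data.
theta, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for u in (0.5, 1.0, 1.5, 2.0, 0.3, 1.2, 0.8, 1.7, 0.9, 1.4):
    theta, P = rls_update(theta, P, [u, 1.0], 2.0 * u - 1.0)
```

In a self-tuning regulator, the controller gains would be recomputed from the current estimates `theta` at each sampling instant, which is what allows the scheme to track different performance metrics and workloads.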
BACKGROUND: During cellulosic ethanol production, cellulose hydrolysis is achieved by the synergistic action of a cellulase enzyme complex consisting of multiple enzymes with different modes of action. Enzymatic hydrolysis of cellulose is one of the bottlenecks in the commercialization of the process due to low hydrolysis rates and the high cost of enzymes. A robust hydrolysis model that can predict hydrolysis profiles under various scenarios would be an important forecasting tool for improving the hydrolysis process. However, the multiple factors affecting hydrolysis, namely the cellulose structure and the complex enzyme-substrate interactions during hydrolysis, make it difficult to develop mathematical kinetic models that can simulate hydrolysis in the presence of multiple enzymes with high fidelity. In this study, a comprehensive hydrolysis model based on a stochastic molecular modeling approach, in which each hydrolysis event is translated into a discrete event, is presented. The model captures the structural features of cellulose, enzyme properties (modes of action, synergism, inhibition), and, most importantly, the dynamic morphological changes in the substrate that directly affect the enzyme-substrate interactions during hydrolysis. RESULTS: Cellulose was modeled as a group of microfibrils consisting of bundles of elementary fibrils, where each elementary fibril was represented as a three-dimensional matrix of glucose molecules. Hydrolysis of cellulose was simulated using a Monte Carlo technique. Cellulose hydrolysis results predicted by the model simulations agree well with experimental data from the literature: coefficients of determination for model predictions against experimental values were in the range of 0.75 to 0.96 for Avicel hydrolysis by CBH I action. The model was also able to simulate the synergistic action of multiple enzymes during hydrolysis, and the simulations captured the important experimental observations: the effects of structural properties, enzyme inhibition, and enzyme loading on hydrolysis, and the degree of synergism among enzymes. CONCLUSIONS: The model was effective in capturing the dynamic behavior of cellulose hydrolysis during the action of individual as well as multiple cellulases. Simulations were in qualitative and quantitative agreement with experimental data. Several experimentally observed phenomena were simulated without the need for additional assumptions or parameter changes, confirming the validity of using the stochastic molecular modeling approach to describe cellulose hydrolysis quantitatively and qualitatively.
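A drastically simplified sketch of the discrete-event idea: a kinetic Monte Carlo of chain scission with one endo-acting and one exo-acting activity on one-dimensional chains, with invented rates, in place of the paper's three-dimensional fibril model:

```python
import random

def mc_hydrolysis(chain_len=200, n_chains=50, k_endo=0.01, k_exo=0.1,
                  t_end=500.0, seed=2):
    """Kinetic (Gillespie) Monte Carlo of cellulose chain scission.
    An endo-acting activity cuts random internal bonds; an exo-acting
    activity releases cellobiose (2 glucose units) from chain ends;
    fragments of <= 2 units are counted as soluble sugar.  Chains are
    1-D and all rates are invented per-site/per-chain values."""
    rng = random.Random(seed)
    chains = [chain_len] * n_chains    # chain lengths in glucose units
    soluble, t = 0, 0.0
    while t < t_end and chains:
        bonds = sum(c - 1 for c in chains)
        rate_endo = k_endo * bonds
        rate_exo = k_exo * len(chains)
        total = rate_endo + rate_exo
        t += rng.expovariate(total)               # Gillespie time step
        if rng.random() < rate_endo / total:      # endo: random internal cut
            i = rng.randrange(len(chains))
            cut = rng.randrange(1, chains[i])
            chains.append(chains[i] - cut)
            chains[i] = cut
        else:                                     # exo: clip off cellobiose
            chains[rng.randrange(len(chains))] -= 2
            soluble += 2
        soluble += sum(c for c in chains if c <= 2)
        chains = [c for c in chains if c > 2]     # short fragments dissolve
    return chains, soluble

chains, soluble = mc_hydrolysis()
```

Every glucose unit ends up either in an insoluble chain or in the soluble pool, so mass is conserved exactly; extending the event list (inhibition, adsorption, accessibility of the fibril surface) is how a full model of this type is built up.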
The use of simulation training in postgraduate medical education is an area of rapidly growing popularity and research. This study was designed to assess the impact of simulation training on instrument knowledge and recognition among neurosurgery residents.
Advances in Virtual Reality (VR) technologies allow the investigation of simulated moral actions in visually immersive environments. Using a robotic manipulandum and an interactive sculpture, we now also incorporate realistic haptic feedback into virtual moral simulations. In two experiments, we found that participants responded with greater utilitarian actions in virtual and haptic environments than in traditional questionnaire assessments of moral judgments. In experiment one, which incorporated a robotic manipulandum, we found that the physical power of simulated utilitarian responses (calculated as the product of force and speed) was predicted by individual levels of psychopathy. In experiment two, which integrated an interactive, life-like sculpture of a human into a VR simulation, greater utilitarian actions continued to be observed. Together, these results support a disparity between simulated moral action and moral judgment. Overall, this research combines state-of-the-art virtual reality, robotic movement simulations, and realistic human sculptures to enhance moral paradigms that are often contextually impoverished. As such, this combination provides a better assessment of simulated moral action and illustrates the embodied nature of morally relevant actions.
Cooling during most of the past two millennia has been widely recognized and has been inferred to be the dominant global temperature trend of the past 11,700 years (the Holocene epoch). However, long-term cooling has been difficult to reconcile with global forcing, and climate models consistently simulate long-term warming. The divergence between simulations and reconstructions emerges primarily for northern mid-latitudes, for which pronounced cooling has been inferred from marine and coastal records using multiple approaches. Here we show that temperatures reconstructed from sub-fossil pollen from 642 sites across North America and Europe closely match simulations, and that long-term warming, not cooling, defined the Holocene until around 2,000 years ago. The reconstructions indicate that evidence of long-term cooling was limited to North Atlantic records. Early Holocene temperatures on the continents were more than two degrees Celsius below those of the past two millennia, consistent with the simulated effects of remnant ice sheets in the climate model Community Climate System Model 3 (CCSM3). CCSM3 simulates increases in ‘growing degree days’ (a measure of the accumulated warmth above five degrees Celsius per year) of more than 300 kelvin days over the Holocene, consistent with inferences from the pollen data. It also simulates a decrease in mean summer temperatures of more than two degrees Celsius, which correlates with reconstructed marine trends and highlights the potential importance of the different subseasonal sensitivities of the records. Despite the differing trends, pollen- and marine-based reconstructions are correlated at millennial-to-centennial scales, probably in response to ice-sheet and meltwater dynamics, and to stochastic dynamics similar to the temperature variations produced by CCSM3. Although our results depend on a single source of palaeoclimatic data (pollen) and a single climate-model simulation, they reinforce the notion that climate models can adequately simulate climates for periods other than the present day. They also demonstrate that amplified warming in recent decades increased temperatures above the mean of any century during the past 11,000 years.
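The ‘growing degree days’ metric can be made concrete with a short sketch; the base temperature follows the definition above (five degrees Celsius), while the example temperature series is invented:

```python
import math

def growing_degree_days(daily_mean_temps_c, base_c=5.0):
    """Sum of daily warmth above the base temperature (5 degrees Celsius,
    as in the definition above); the result is in kelvin days, i.e.
    degree-Celsius days accumulated over the series."""
    return sum(max(0.0, t - base_c) for t in daily_mean_temps_c)

# Invented year: sinusoidal daily means, annual mean 8 C, amplitude 12 C.
year = [8.0 + 12.0 * math.sin(2.0 * math.pi * d / 365.0) for d in range(365)]
gdd = growing_degree_days(year)
```

Days below the base contribute nothing, which is why growing degree days emphasize the warm season and can trend differently from mean summer temperature, as the comparison above illustrates.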