SciCombinator

Discover the latest and most talked-about scientific content & concepts.

Concept: Computing

379

Recently, we proposed that Brainets, i.e., networks formed by multiple animal brains cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains. Brainets worked by concurrently recording the extracellular electrical activity generated by populations of cortical neurons distributed across multiple rats chronically implanted with multi-electrode arrays. Cortical neuronal activity was recorded and analyzed in real time, then delivered to the somatosensory cortices of the other animals participating in the Brainet via intracortical microstimulation (ICMS). Using this approach, different Brainet architectures solved a number of useful computational problems, including discrete classification, image processing, storage and retrieval of tactile information, and even weather forecasting. Brainets consistently performed at the same or higher levels than single rats on these tasks. Based on these findings, we propose that Brainets could be used to investigate animal social behaviors and could serve as a test bed for exploring the properties and potential applications of organic computers.

Concepts: Nervous system, Brain, Cerebral cortex, Computer, Computation, Computer science, Electrical engineering, Computing

240

Computers are now essential in all branches of science, but most researchers are never taught the equivalent of basic lab skills for research computing. As a result, data can get lost, analyses can take much longer than necessary, and researchers are limited in how effectively they can work with software and data. Computing workflows need to follow the same practices as lab projects and notebooks, with organized data, documented steps, and the project structured for reproducibility, but researchers new to computing often don’t know where to start. This paper presents a set of good computing practices that every researcher can adopt, regardless of their current level of computational skill. These practices, which encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts, are drawn from a wide variety of published sources, from our daily lives, and from our work with volunteer organizations that have delivered workshops to over 11,000 people since 2010.

Concepts: Scientific method, Science, Research, Management, Computer, Computer science, Computing, Computational science

203

Driven by advances in materials and computer science, researchers are attempting to design systems where the computer and the material are one and the same entity. Using theoretical and computational modeling, we design a hybrid material system that can autonomously transduce chemical, mechanical, and electrical energy to perform a computational task in a self-organized manner, without the need for external electrical power sources. Each unit in this system integrates a self-oscillating gel, which undergoes the Belousov-Zhabotinsky (BZ) reaction, with an overlaying piezoelectric (PZ) cantilever. The chemomechanical oscillations of the BZ gels deflect the PZ layer, which consequently generates a voltage across the material. When these BZ-PZ units are connected in series by electrical wires, the oscillations of the units become synchronized across the network, where the mode of synchronization depends on the polarity of the PZ. We show that the network of coupled, synchronizing BZ-PZ oscillators can perform pattern recognition. The “stored” patterns are sets of polarities of the individual BZ-PZ units, and the “input” patterns are encoded through the initial phases of the oscillations imposed on these units. The results of the modeling show that the input pattern closest to the stored pattern exhibits the fastest convergence time to stable synchronization behavior. In this way, networks of coupled BZ-PZ oscillators achieve pattern recognition. Further, we show that the convergence time to stable synchronization provides a robust measure of the degree of match between the input and stored patterns. Through these studies, we establish experimentally realizable design rules for creating “materials that compute.”

Concepts: Mathematics, Science, Computer, Computation, Computer science, Electrical engineering, Computing, Computational science
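
The convergence-time idea above lends itself to a toy simulation. The sketch below is not the paper's gel/piezoelectric model; it is a minimal Kuramoto-style system, assuming polarity-signed coupling, in which the stored pattern is a vector of ±1 polarities, the input pattern sets the initial phases, and the time to synchronize scores the match (all function and parameter names here are our own).

```python
# Toy model (not the paper's BZ-PZ physics): coupled phase oscillators whose
# coupling sign is set by unit "polarities"; convergence time to
# synchronization scores how well an input pattern matches the stored one.
import numpy as np

def convergence_time(stored, input_bits, K=1.0, dt=0.01, t_max=200.0,
                     r_sync=0.99, seed=0):
    """Time until the network synchronizes; faster = better match."""
    s = np.asarray(stored, dtype=float)            # stored pattern: +1/-1 polarities
    theta = np.pi * np.asarray(input_bits, float)  # input pattern: phases 0 or pi
    rng = np.random.default_rng(seed)
    theta = theta + rng.normal(0.0, 0.02, theta.size)  # jitter breaks unstable anti-phase states
    n = s.size
    for step in range(int(t_max / dt)):
        # order parameter in polarity-corrected coordinates
        phi = theta + np.pi * (s < 0)
        if abs(np.mean(np.exp(1j * phi))) > r_sync:
            return step * dt
        # polarity-weighted Kuramoto-style coupling: s_i s_j sin(theta_j - theta_i)
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (K / n) * np.sum(s[:, None] * s[None, :] * np.sin(diff), axis=1)
    return np.inf

stored = [+1, -1, +1, +1, -1, +1]
perfect = [0, 1, 0, 0, 1, 0]        # matches the stored polarities exactly
off_by_2 = [1, 1, 0, 0, 1, 1]       # two bits flipped
print(convergence_time(stored, perfect))    # ~0: starts synchronized
print(convergence_time(stored, off_by_2))   # larger: worse match, slower sync
```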

176

In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limits, and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.

Concepts: Vacuum tube, Integrated circuit, Oscillation, Computer, Computation, Computer science, Computing, Electronics
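
As a rough illustration of the coincidence-based logic such devices enable (this is not the paper's circuit design), the sketch below models each oscillator as a periodic pulse train feeding a threshold unit: a high threshold fires only on coincident pulses (AND), while a low threshold fires on any pulse (OR). All names and numbers are illustrative assumptions.

```python
# Minimal sketch, our own construction: binary logic from threshold units
# driven by pulse trains. The output fires only when summed input pulses
# cross a threshold, giving AND or OR behaviour depending on the threshold.
import numpy as np

def pulse_train(bit, n_steps, period=20):
    """Oscillator output: periodic unit pulses while the input bit is 1."""
    out = np.zeros(n_steps)
    if bit:
        out[::period] = 1.0
    return out

def threshold_gate(a, b, threshold, n_steps=100):
    """Fire (return 1) if coincident input pulses ever reach the threshold."""
    summed = pulse_train(a, n_steps) + pulse_train(b, n_steps)
    return int(np.any(summed >= threshold))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", threshold_gate(a, b, threshold=2.0),
                    "OR:",  threshold_gate(a, b, threshold=1.0))
```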

171

BACKGROUND: Genome-wide association studies have become very popular for identifying genetic contributions to phenotypes. Millions of SNPs are tested for their association with diseases and traits using linear or logistic regression models. This conceptually simple strategy encounters two computational issues: a large number of tests, and very large genotype files (many gigabytes) that cannot be loaded directly into the software's memory. One solution applied on a grand scale is cluster computing involving large-scale resources. We show how to speed up the computations using matrix operations in pure R code. RESULTS: We improve speed: computation time is reduced from 6 hours to 10-15 minutes. Our approach can efficiently handle an essentially unlimited number of covariates using projections. Data files in GWAS are vast, and reading them into computer memory becomes an important issue. However, much improvement can be made if the data is structured beforehand in a way that allows easy access to blocks of SNPs. We propose several solutions based on the R packages ff and ncdf. We adapted the semi-parallel computations for logistic regression. We show that in a typical GWAS setting, where SNP effects are very small, we do not lose any precision, and our computations are a few hundred times faster than standard procedures. CONCLUSIONS: We provide very fast algorithms for GWAS written in pure R code. We also show how to rearrange SNP data for fast access.

Concepts: Logistic regression, Data, Genome-wide association study, Computer, Computation, Computer science, Computing, Computational complexity theory
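
The speedup comes from replacing per-SNP regression calls with whole-matrix operations. The paper's implementation is in pure R; the same matrix idea can be sketched in Python/NumPy as below. This is our reconstruction under standard linear-model assumptions (covariates projected out once, then all SNP slopes computed in one matrix product), not the paper's actual code.

```python
# Hedged sketch of the matrix trick: fit one-SNP-at-a-time linear
# regressions for ALL SNPs simultaneously instead of looping over
# per-SNP model fits.
import numpy as np

def fast_gwas_linear(G, y, Z):
    """G: (n, m) genotypes, y: (n,) phenotype, Z: (n, k) covariates
    (include a column of ones). Returns per-SNP slopes and t-statistics."""
    n, m = G.shape
    k = Z.shape[1]
    # Project the covariates out of y and of every SNP column at once.
    Q, _ = np.linalg.qr(Z)
    y_r = y - Q @ (Q.T @ y)
    G_r = G - Q @ (Q.T @ G)
    gg = np.einsum("ij,ij->j", G_r, G_r)     # per-SNP sums of squares
    beta = (G_r.T @ y_r) / gg                # all m slopes in one shot
    # Residual variance and t-statistic per SNP.
    rss = (y_r @ y_r) - beta**2 * gg
    se = np.sqrt(rss / (n - k - 1) / gg)
    return beta, beta / se

rng = np.random.default_rng(0)
n, m = 500, 10_000
G = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 genotype codes
Z = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = 0.3 * G[:, 0] + Z @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=n)
beta, t = fast_gwas_linear(G, y, Z)
print(beta[0], t[0])   # SNP 0 carries a real effect, so t should be large
```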

167

This software package provides an R-based framework for making use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also includes functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared computing time for this example data on two computer architectures and showed that using these functions can yield several-fold improvements in computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/.

Concepts: Biology, Computer, Computer program, Computer science, Computing, Personal computer, Central processing unit, Computer architecture
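
For readers who want the general pattern rather than the package itself, the sketch below shows a common way such independent runs are farmed out to worker processes. It is not the ParallelStructure API (which is R-based), and the STRUCTURE command line shown is a hypothetical stand-in.

```python
# Illustrative sketch only (not the ParallelStructure API): the general
# pattern of distributing independent STRUCTURE-style runs across cores.
# The command line below is hypothetical; a real run would also need the
# program's parameter and input files.
import subprocess
from multiprocessing import Pool

def run_job(job):
    """Run one analysis as an external process; each job is independent."""
    k, rep = job
    cmd = ["structure", "-K", str(k), "-o", f"results_K{k}_rep{rep}"]
    return subprocess.run(cmd, capture_output=True).returncode

if __name__ == "__main__":
    jobs = [(k, rep) for k in range(1, 6) for rep in range(10)]  # 50 runs
    with Pool(processes=8) as pool:        # spread jobs across 8 cores
        codes = pool.map(run_job, jobs)
    print(sum(c == 0 for c in codes), "of", len(jobs), "runs succeeded")
```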

167

BACKGROUND: Global network alignment has been proposed as an effective tool for computing functional orthology. Commonly used global alignment techniques such as IsoRank rely on a two-step process: the first step is an iterative, diffusion-based approach for assigning similarity scores to all possible node pairs (matchings); the second step applies a maximum-weight bipartite matching algorithm to this similarity score matrix to identify orthologous node pairs. While demonstrably successful in identifying orthologies beyond those based on sequences, this two-step process is computationally expensive. Recent work on the computation of node-pair similarity matrices has demonstrated that the computational cost of the first step can be significantly reduced. The use of these accelerated methods renders the bipartite matching step the dominant computational cost. This motivates a critical assessment of the tradeoffs between computational cost and solution quality (matching quality, topological matches, and biological significance) associated with the bipartite matching step. In this paper we use the state-of-the-art diffusion-based step in IsoRank for similarity matrix computation and couple it with two heuristic bipartite matching algorithms: a matrix-based greedy approach and a tunable, adaptive, auction-based matching algorithm developed by us. We then compare our implementations against the performance and quality characteristics of the solution produced by the reference IsoRank binary, which implements an optimal matching algorithm. RESULTS: Using heuristic matching algorithms in the IsoRank pipeline yields dramatic speedups, typically about 30 times faster for the total alignment process in most cases of interest. More surprisingly, these improvements in compute times are typically accompanied by better or comparable topological and biological quality in the network alignments generated. These measures are quantified by the number of conserved edges in the alignment graph, the percentage of enriched components, and the total number of covered Gene Ontology (GO) terms. CONCLUSIONS: We have demonstrated significant reductions in global network alignment computation times by coupling heuristic bipartite matching methods with the similarity scoring step of the IsoRank procedure. Our heuristic matching techniques maintain comparable, if not better, quality in the resulting alignments. A consequence of our work is that network-alignment-based orthologies can be computed within minutes (as compared to hours) on typical protein interaction networks, enabling a more comprehensive tuning of alignment parameters for refined orthologies.

Concepts: Algorithm, Bioinformatics, Computer, Graph theory, Computer science, Computing, Computational complexity theory, Matching
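
To make the matching step concrete, here is a minimal sketch of the matrix-based greedy idea, assuming a precomputed node-pair similarity matrix S (as the diffusion step would produce): repeatedly take the best-scoring pair whose nodes are both still unmatched. This is our illustration of the general technique, not the authors' implementation.

```python
# Hedged sketch of the greedy half of the pipeline: given a node-pair
# similarity matrix, repeatedly pick the highest-scoring unmatched pair,
# trading the optimality of maximum-weight bipartite matching for speed.
import numpy as np

def greedy_bipartite_match(S):
    """S: (n1, n2) similarity matrix. Returns a list of (row, col) pairs."""
    order = np.argsort(S, axis=None)[::-1]        # pair indices, best first
    rows, cols = np.unravel_index(order, S.shape)
    used_r, used_c, pairs = set(), set(), []
    for r, c in zip(rows, cols):
        if r not in used_r and c not in used_c:   # both nodes still free
            pairs.append((int(r), int(c)))
            used_r.add(r)
            used_c.add(c)
        if len(pairs) == min(S.shape):            # smaller side fully matched
            break
    return pairs

S = np.array([[0.9, 0.1, 0.4],
              [0.8, 0.7, 0.2],
              [0.3, 0.6, 0.5]])
print(greedy_bipartite_match(S))   # [(0, 0), (1, 1), (2, 2)]
```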

91

Poker is a family of games that exhibit imperfect information, where players do not have full knowledge of past events. Whereas many perfect-information games have been solved (e.g., Connect Four and checkers), no nontrivial imperfect-information game played competitively by humans has previously been solved. Here, we announce that heads-up limit Texas hold'em is now essentially weakly solved. Furthermore, this computation formally proves the common wisdom that the dealer in the game holds a substantial advantage. This result was enabled by a new algorithm, CFR(+), which is capable of solving extensive-form games orders of magnitude larger than previously possible.

Concepts: Science, Computer, Computer science, Computing
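
Full CFR+ traverses the extensive-form game tree, which is far beyond an abstract-sized snippet, but the regret-matching+ update at its core is compact. The sketch below is our own toy (names and constants are assumptions): it applies that update in self-play on rock-paper-scissors, where the average strategy converges to the uniform equilibrium.

```python
# Sketch of the regret-matching+ update at the heart of CFR+, shown on a
# simple matrix game rather than poker's game tree.
import numpy as np

PAYOFF = np.array([[ 0, -1,  1],   # row player's payoff matrix for
                   [ 1,  0, -1],   # rock, paper, scissors
                   [-1,  1,  0]])

def strategy(regrets):
    """Regret matching: play in proportion to positive cumulative regret."""
    pos = np.maximum(regrets, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1/3)

rng = np.random.default_rng(0)
r1, r2 = rng.random(3), rng.random(3)   # asymmetric start avoids a fixed point
avg1 = np.zeros(3)
for t in range(20000):
    s1, s2 = strategy(r1), strategy(r2)
    u1 = PAYOFF @ s2                    # expected payoff of each pure action
    u2 = -PAYOFF.T @ s1
    # The "+" in CFR+: clip cumulative regrets at zero every iteration.
    r1 = np.maximum(r1 + (u1 - s1 @ u1), 0.0)
    r2 = np.maximum(r2 + (u2 - s2 @ u2), 0.0)
    avg1 += s1
print(avg1 / avg1.sum())                # ~[0.333, 0.333, 0.333]
```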

57

Similarity search (for example, identifying similar images in a database or similar documents on the web) is a fundamental computing problem faced by large-scale information retrieval systems. We discovered that the fruit fly olfactory circuit solves this problem with a variant of a computer science algorithm (called locality-sensitive hashing). The fly circuit assigns similar neural activity patterns to similar odors, so that behaviors learned from one odor can be applied when a similar odor is experienced. The fly algorithm, however, uses three computational strategies that depart from traditional approaches. These strategies can be translated to improve the performance of computational similarity searches. This perspective helps illuminate the logic supporting an important sensory function and provides a conceptually new algorithm for solving a fundamental computational problem.

Concepts: Mathematics, Computer, Computation, Logic, Computer science, Computing, Computational complexity theory, Information retrieval
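
The fly's strategies translate into a very short algorithm: project the input into a much higher-dimensional space with a sparse binary random matrix, then keep only the top-k most active units as the hash. The sketch below is a simplified rendering with illustrative sizes, not the paper's exact procedure or parameters.

```python
# Sketch of a fly-inspired hash: sparse binary random projection into a
# HIGHER-dimensional space, then winner-take-all sparsification, so that
# similar inputs share many hash entries. Sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, k = 50, 2000, 100      # e.g. 50 input channels -> 2000 units
# Each output unit samples a few random input dimensions (sparse, binary).
M = (rng.random((d_out, d_in)) < 0.1).astype(float)

def fly_hash(x):
    x = x - x.mean()                # crude stand-in for input normalization
    activity = M @ x
    tags = np.zeros(d_out, dtype=bool)
    tags[np.argsort(activity)[-k:]] = True   # winner-take-all: top-k fire
    return tags

a = rng.random(d_in)
b = a + 0.05 * rng.random(d_in)     # a similar "odor"
c = rng.random(d_in)                # an unrelated one
overlap = lambda u, v: (u & v).sum()
print(overlap(fly_hash(a), fly_hash(b)))  # high overlap: similar inputs
print(overlap(fly_hash(a), fly_hash(c)))  # much lower: dissimilar inputs
```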

50

Conventional computers based on the von Neumann architecture perform computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms with collocated computation and storage are being actively sought. One fascinating such approach is computational memory, in which the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. We present an experimental demonstration using one million phase-change memory devices organized to perform a high-level computational primitive by exploiting their crystallization dynamics. The result is imprinted in the conductance states of the memory devices. The results of using such a computational memory for processing real-world data sets show that this co-existence of computation and storage at the nanometer scale could enable ultra-dense, low-power, and massively parallel computing systems.

Concepts: Quantum mechanics, Computer, Computation, Computer science, Computing, Computer data storage, John von Neumann, Von Neumann architecture
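
The accumulation idea behind the primitive can be mimicked in a few lines of ordinary software, which helps build intuition even though the paper's point is doing this inside the memory devices themselves. The toy model below is entirely our construction: each process gets a "device" whose conductance is nudged upward whenever the process fires during strong collective activity, so correlated processes end up with the highest conductance.

```python
# Toy software model (the real work uses a million phase-change devices):
# correlated binary processes accumulate "conductance" fastest because
# their firing coincides with strong collective activity.
import numpy as np

rng = np.random.default_rng(1)
n, steps, p = 20, 5000, 0.1
correlated = np.arange(5)            # processes 0-4 share a hidden driver

G = np.zeros(n)                      # "conductance" of each device
for _ in range(steps):
    driver = rng.random() < p
    x = rng.random(n) < p            # independent background firing
    x[correlated] |= driver          # correlated group also follows driver
    # Crystallization-like update: firing devices grow in proportion to
    # how many processes fired at this instant.
    G[x] += x.sum() / n
print(np.round(G, 1))                # devices 0-4 end with the largest G
```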