Concept: Complex system
While cities are hailed as the remedy to the world’s ills, they will need to adapt in the 21st century. In particular, the role of public transport is likely to increase significantly, and new methods and techniques to better plan transit systems are sorely needed. This paper examines one fundamental aspect of transit: network centrality. By applying the notion of betweenness centrality to 28 worldwide metro systems, the main goal of this paper is to study the emergence of global trends in the evolution of centrality with network size and to examine several individual systems in more detail. Notably, betweenness was found to become consistently more evenly distributed with size (i.e., no “winner takes all”), unlike other complex network properties. Two distinct regimes were also observed, each representative of a system’s structure. Moreover, the share of betweenness was found to decrease as a power law with size (with exponent 1 for the average node), but the share of the most central nodes decreases much more slowly than that of the least central nodes (exponents 0.87 vs. 2.48). Finally, the betweenness of individual stations in several systems was examined, which can help locate stations where passengers can be redistributed to relieve pressure on overcrowded stations. Overall, this study offers significant insights that can help planners design the systems of tomorrow, and similar undertakings can readily be imagined for other urban infrastructure systems (e.g., electricity grids, water/wastewater systems) to develop more sustainable cities.
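As a rough illustration of the abstract’s central quantity, here is a self-contained sketch that counts shortest paths to obtain node betweenness. The topology and station names are invented for illustration and are not from the paper; real metro networks would be far larger and the paper’s normalisation may differ.

```python
from collections import deque
from itertools import combinations

def betweenness(graph):
    """Unnormalised node betweenness via shortest-path enumeration.

    graph: dict mapping node -> list of neighbours (undirected, unweighted).
    Returns dict node -> weighted count of shortest paths passing through it.
    """
    bc = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        # Collect all shortest s-t paths with a breadth-first search.
        paths, best = [], None
        queue = deque([[s]])
        while queue:
            path = queue.popleft()
            if best is not None and len(path) > best:
                continue  # longer than a known shortest path: prune
            node = path[-1]
            if node == t:
                best = len(path)
                paths.append(path)
                continue
            for nb in graph[node]:
                if nb not in path:
                    queue.append(path + [nb])
        for path in paths:
            for v in path[1:-1]:  # interior nodes only
                bc[v] += 1.0 / len(paths)
    return bc

# Toy metro-like network: a trunk line A-B-C-D-E with a branch C-F-G.
metro = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D", "F"],
    "D": ["C", "E"], "E": ["D"], "F": ["C", "G"], "G": ["F"],
}
bc = betweenness(metro)
for station in sorted(bc, key=bc.get, reverse=True):
    print(station, round(bc[station], 2))
```

The transfer station C dominates, which is the kind of node the abstract flags as a candidate for overcrowding and for redistributing passengers.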
Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning (what you think I think you think) will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems.
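A hypothetical sketch (not the authors’ code) of how the Mod Game’s scoring rule can produce the cyclic “hopping” described above. The payoff rule and the naive one-step-ahead agents are my assumptions for illustration.

```python
def mod_game_scores(choices, m):
    """Score one round: choice c beats any other player's choice equal
    to (c - 1) % m, earning one point per such player."""
    return [sum(1 for j, cj in enumerate(choices)
                if j != i and (ci - cj) % m == 1)
            for i, ci in enumerate(choices)]

# Naive one-step-ahead agents: everyone assumes the crowd will repeat its
# last modal choice and plays one above it, which drives the whole group
# around the circle of options rather than settling on a fixed point.
m, players, rounds = 24, 6, 10
last_mode = 0
history = []
for _ in range(rounds):
    choices = [(last_mode + 1) % m for _ in range(players)]
    history.append(choices[0])
    last_mode = max(set(choices), key=choices.count)

print(history)  # the group's choice climbs one step per round: 1, 2, ..., 10
```

Because every agent hops one step each round, the population cycles indefinitely, consistent with the abstract’s claim that no fixed-point solution concept describes the observed behavior.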
Emotions are evolved systems of intra- and interpersonal processes that are regulatory in nature, dealing mostly with issues of personal or social concern. They regulate social interaction and, by extension, the social sphere. In turn, processes in the social sphere regulate the emotions of individuals and groups. In other words, intrapersonal processes project into the interpersonal space and, conversely, interpersonal experiences deeply influence intrapersonal processes. Thus, I argue that the concepts of emotion generation and regulation should not be artificially separated. Similarly, interpersonal emotions should not be reduced to interacting systems of intraindividual processes. Instead, we can consider emotions at different social levels, ranging from dyads to large-scale e-communities. The interaction between these levels is complex and does not only involve influences from one level to the next. In this sense, the levels of emotion/regulation are messy and pose a challenge for empirical study. In this article, I discuss the concepts of emotions and regulation at different intra- and interpersonal levels. I extend the concept of auto-regulation of emotions (Kappas, 2008, 2011a,b) to social processes. Furthermore, I argue for the necessity of including mediated communication, particularly in cyberspace, in contemporary models of emotion/regulation. Lastly, I suggest the use of concepts from systems dynamics and complex systems to tackle the challenge of the “messy layers.”
The complexity of chess matches has attracted broad interest since the game’s invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player’s advantage from over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is approximately 0.17 pawns, but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player’s advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that, after an initial period with no diffusion, it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.
We present a novel formulation for biochemical reaction networks in the context of protein signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of “source” species, which are interpreted as input signals. Signals are transmitted to all other species in the system (the “target” species) with a specific delay and with a specific transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and even recalled to build dynamical models on the basis of state changes. By separating the temporal and the magnitudinal domain we can greatly simplify the computational model, circumventing typical problems of complex dynamical systems. The transfer function transformation of biochemical reaction systems can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant novel insights while remaining a fully testable and executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modularizations that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. Remarkably, we found that overall interconnectedness depends on the magnitude of inputs, with higher connectivity at low input concentrations and significant modularization at moderate to high input concentrations. This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent signal transmission inactivation.
Many fields of basic and applied science require efficiently exploring complex systems with high dimensionality. An example of such a challenge is optimising the performance of plasma fusion experiments. The highly nonlinear and temporally varying interaction between the plasma, its environment and external controls presents considerable complexity in these experiments. A further difficulty arises from the fact that there is no single objective metric that fully captures both plasma quality and equipment constraints. To optimise the system efficiently, we develop the Optometrist Algorithm, a stochastic perturbation method combined with human choice. Analogous to getting an eyeglass prescription, the Optometrist Algorithm confronts a human operator with two alternative experimental settings and associated outcomes, and the operator chooses which experiment produces subjectively better results. This technique led to the discovery of an unexpected record confinement regime with positive net heating power in a field-reversed configuration plasma, characterised by a >50% reduction in the energy loss rate and a concomitant increase in ion temperature and total plasma energy.
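A minimal sketch of an Optometrist-style loop. The plasma experiment and the human operator are stand-ins here: `fake_experiment` is a toy objective I invented, and `prefer` replaces the operator’s subjective A/B judgement; the real algorithm’s perturbation scheme may differ.

```python
import random

random.seed(0)  # reproducible toy run

def optometrist(settings, run_experiment, prefer, steps=2000, scale=0.1):
    """Hill-climb by pairwise choices: perturb the current settings, run the
    experiment, and keep whichever variant the operator prefers, like
    choosing between two lenses at the optometrist."""
    current = list(settings)
    current_outcome = run_experiment(current)
    for _ in range(steps):
        candidate = [x + random.gauss(0, scale) for x in current]
        outcome = run_experiment(candidate)
        if prefer(outcome, current_outcome):
            current, current_outcome = candidate, outcome
    return current

# Toy stand-in for an experiment whose quality peaks at settings (1.0, 2.0).
def fake_experiment(s):
    return -((s[0] - 1.0) ** 2 + (s[1] - 2.0) ** 2)

best = optometrist([0.0, 0.0], fake_experiment, prefer=lambda a, b: a > b)
print([round(x, 2) for x in best])
```

The key design point is that `prefer` needs only a binary judgement, not a numeric objective, which is exactly what makes the method usable when no single metric captures plasma quality and equipment constraints.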
Global warming has increased the frequency of extreme climate events, yet responses of biological and human communities are poorly understood, particularly for aquatic ecosystems and fisheries. Retrospective analysis of known outcomes may provide insights into the nature of adaptations and the trajectory of subsequent conditions. We consider the 1815 eruption of the Indonesian volcano Tambora and its impact on Gulf of Maine (GoM) coastal and riparian fisheries in 1816. Applying complex adaptive systems theory with historical methods, we analyzed fish export data and contemporary climate records to disclose human and piscine responses to Tambora’s extreme weather at different spatial and temporal scales while also considering sociopolitical influences. Results identified a tipping point in GoM fisheries induced by concatenating social and biological responses to extreme weather. Abnormal daily temperatures selectively affected targeted fish species (alewives, shad, herring, and mackerel) according to their migration and spawning phenologies and temperature tolerances. First to arrive, alewives suffered the worst. Crop failure and incipient famine intensified fishing pressure, especially in heavily settled regions where dams already compromised watersheds. Insufficient alewife runs led fishers to target mackerel, the next species to appear in abundance along the coast; thus, 1816 became the “mackerel year.” Critically, the shift from riparian to marine fisheries persisted and expanded after temperatures moderated and alewives recovered. We conclude that contingent human adaptations to extraordinary weather permanently altered this complex system. Understanding how adaptive responses to extreme events can trigger unintended consequences may advance long-term planning for resilience in an uncertain future.
Insect societies are complex systems, displaying emergent properties much greater than the sum of their individual parts. As such, the concept of these societies as single ‘superorganisms’ is widely applied to describe their organisation and biology. Here, we test the applicability of this concept to the response of social insect colonies to predation during a vulnerable period of their life history. We used the model system of house-hunting behaviour in the ant Temnothorax albipennis. We show that removing individuals from directly within the nest causes an evacuation response, while removing ants at the periphery of scouting activity causes the colony to withdraw back into the nest. This suggests that colonies react differentially, but in a coordinated fashion, to these differing types of predation. Our findings lend support to the superorganism concept, as the whole society reacts much like a single organism would in response to attacks on different parts of its body. The implication of this is that a collective reaction to the location of worker loss within insect colonies is key to avoiding further harm, much in the same way that the nervous systems of individuals facilitate the avoidance of localised damage.
Resilience, a system’s ability to adjust its activity to retain its basic functionality when errors, failures and environmental changes occur, is a defining property of many complex systems. Despite widespread consequences for human health, the economy and the environment, events leading to loss of resilience (from cascading failures in technological systems to mass extinctions in ecological networks) are rarely predictable and are often irreversible. These limitations are rooted in a theoretical gap: the current analytical framework of resilience is designed to treat low-dimensional models with a few interacting components, and is unsuitable for multi-dimensional systems consisting of a large number of components that interact through a complex network. Here we bridge this theoretical gap by developing a set of analytical tools with which to identify the natural control and state parameters of a multi-dimensional complex system, helping us derive effective one-dimensional dynamics that accurately predict the system’s resilience. The proposed analytical framework allows us to systematically separate the roles of the system’s dynamics and topology, collapsing the behaviour of different networks onto a single universal resilience function. The analytical results unveil the network characteristics that can enhance or diminish resilience, offering ways to prevent the collapse of ecological, biological or economic systems, and guiding the design of technological systems resilient to both internal failures and environmental changes.
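The one-dimensional reduction described above collapses a networked system onto a single effective control parameter. As I read this framework, for a weighted adjacency matrix A that parameter takes the form beta_eff = ⟨s_out · s_in⟩ / ⟨s⟩, a degree-weighted average of node strengths; the sketch below is written under that assumption and is not the authors’ code.

```python
def beta_eff(A):
    """Effective control parameter of the assumed 1-D reduction:
    beta_eff = <s_out * s_in> / <s>, with s_in the row sums (incoming
    strength) and s_out the column sums (outgoing strength) of A."""
    n = len(A)
    s_in = [sum(row) for row in A]                              # row sums
    s_out = [sum(A[i][j] for i in range(n)) for j in range(n)]  # column sums
    return sum(s_out[i] * s_in[i] for i in range(n)) / sum(s_in)

# Sanity check: for a k-regular network the reduction collapses to beta = k,
# since every node has identical incoming and outgoing strength.
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]  # 2-regular ring of 4 nodes
print(beta_eff(ring))  # 2.0
```

Comparing beta_eff against the tipping point of the universal resilience function is, on this reading, how the framework predicts whether a given topology can sustain the system’s function.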
- Proceedings of the National Academy of Sciences of the United States of America
A quantitative description of a complex system is inherently limited by our ability to estimate the system’s internal state from experimentally accessible outputs. Although the simultaneous measurement of all internal variables, like all metabolite concentrations in a cell, offers a complete description of a system’s state, in practice experimental access is limited to only a subset of variables, or sensors. A system is called observable if we can reconstruct the system’s complete internal state from its outputs. Here, we adopt a graphical approach derived from the dynamical laws that govern a system to determine the sensors that are necessary to reconstruct the full internal state of a complex system. We apply this approach to biochemical reaction systems, finding that the identified sensors are not only necessary but also sufficient for observability. The developed approach can also identify the optimal sensors for target or partial observability, helping us reconstruct selected state variables from appropriately chosen outputs, a prerequisite for optimal biomarker design. Given the fundamental role observability plays in complex systems, these results offer avenues to systematically explore the dynamics of a wide range of natural, technological and socioeconomic systems.
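A sketch of one way to read the graphical approach (my simplification, not the paper’s exact procedure): build an inference graph whose edge a → b means that measuring a allows b to be reconstructed; every strongly connected component with no incoming edge (a “root” SCC) must then contribute at least one sensor, since no other measurement can reach it.

```python
def sccs(graph):
    """Kosaraju's algorithm: label each node with its SCC index."""
    visited, order = set(), []
    def dfs(v):
        visited.add(v)
        for w in graph.get(v, []):
            if w not in visited:
                dfs(w)
        order.append(v)
    for v in graph:
        if v not in visited:
            dfs(v)
    rev = {v: [] for v in graph}          # reversed edges
    for v, ws in graph.items():
        for w in ws:
            rev[w].append(v)
    comp, seen = {}, set()
    def rdfs(v, label):
        seen.add(v)
        comp[v] = label
        for w in rev[v]:
            if w not in seen:
                rdfs(w, label)
    label = 0
    for v in reversed(order):
        if v not in seen:
            rdfs(v, label)
            label += 1
    return comp

def sensor_candidates(graph):
    """Return the members of each SCC that has no incoming cross-edges;
    each such component needs at least one sensor."""
    comp = sccs(graph)
    has_incoming = set()
    for v, ws in graph.items():
        for w in ws:
            if comp[v] != comp[w]:
                has_incoming.add(comp[w])
    roots = {c for c in set(comp.values()) if c not in has_incoming}
    return [[v for v in comp if comp[v] == c] for c in sorted(roots)]

# Toy inference graph: x1, x2, x3 infer one another in a cycle; x4 feeds x3.
g = {"x1": ["x2"], "x2": ["x3"], "x3": ["x1"], "x4": ["x3"]}
print(sensor_candidates(g))  # measuring x4 suffices to reconstruct the rest
```

On this toy system, a single sensor on x4 covers all four variables, mirroring the abstract’s point that a small, well-chosen sensor set can make the whole internal state observable.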