Scientific Reports (Nov 2023)
The role of long-term power-law memory in controlling large-scale dynamical networks
Abstract
Controlling large-scale dynamical networks is crucial to understanding and, ultimately, crafting the evolution of complex behavior. While, broadly speaking, we understand how to control Markov dynamical networks, in which the current state depends only on the previous state, we lack a general understanding of how to control dynamical networks whose current state depends on states in the distant past (i.e. long-term memory). We therefore require a different way to analyze and control the more prevalent long-term memory dynamical networks. Herein, we propose a new approach to controlling dynamical networks that exhibit long-term power-law memory dependencies. Our method finds the minimum number of driven nodes (i.e. the state vertices in the network that are connected to one and only one input) and their placement needed to control a long-term power-law memory dynamical network within a given time horizon, which we define as the ‘time-to-control’. Remarkably, we provide evidence that long-term power-law memory dynamical networks require considerably fewer driven nodes to steer the network’s state to a desired goal for any given time-to-control than Markov dynamical networks do. Finally, our method can be used as a tool to determine whether long-term memory dynamics are present in a network.
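The abstract does not spell out the dynamics it assumes. A common way to encode long-term power-law memory in a linear network is the discrete-time Grünwald–Letnikov fractional difference, where each update weights the entire state history with coefficients that decay as a power law in the time lag, in contrast to a Markov update that uses only the previous state. The sketch below is a minimal illustration of that modeling idea under these assumptions; the function names, the per-node fractional orders `alphas`, and the specific update form are illustrative and not taken from the paper.

```python
import numpy as np

def gl_weights(alpha, K):
    """Grunwald-Letnikov coefficients psi_j = (-1)^j * C(alpha, j), j = 0..K.
    For large j, |psi_j| decays roughly as j**-(1 + alpha): power-law memory."""
    psi = np.empty(K + 1)
    psi[0] = 1.0
    for j in range(1, K + 1):
        psi[j] = psi[j - 1] * (1.0 - (1.0 + alpha) / j)
    return psi

def simulate_power_law_memory(A, B, alphas, u, x0):
    """Simulate a fractional-order linear network (illustrative form):
        x[k+1] = A x[k] + B u[k] - sum_{j=1}^{k+1} psi_j * x[k+1-j],
    where psi_j are per-node Grunwald-Letnikov weights. Every past state
    contributes, with power-law decaying influence, unlike a Markov system."""
    n = A.shape[0]
    K = u.shape[0]                       # number of simulation steps
    psi = np.stack([gl_weights(a, K) for a in alphas], axis=1)  # (K+1, n)
    X = np.zeros((K + 1, n))
    X[0] = x0
    for k in range(K):
        # power-law weighted contribution of the full history
        memory = sum(psi[j] * X[k + 1 - j] for j in range(1, k + 2))
        X[k + 1] = A @ X[k] + B @ u[k] - memory
    return X

# Tiny usage example: a 3-node chain driven at a single node (one driven node).
A = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])      # input attached to node 1 only
alphas = [0.6, 0.6, 0.6]                 # hypothetical fractional orders
u = np.ones((20, 1))
X = simulate_power_law_memory(A, B, alphas, u, x0=np.zeros(3))
print(X[-1])
```

The point of the sketch is only to show where the memory enters: the `memory` term couples the update at step `k+1` to all earlier states with slowly decaying weights, which is the structural feature the paper exploits when it argues that fewer driven nodes suffice for a given time-to-control.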