Emergence is a profound subject that straddles many scientific disciplines, underlying phenomena that range from the formation of galaxies to how consciousness arises from the collective activity of neurons.
Complex systems, from the human brain to the global economy, are made of multiple elements that interact in such ways that the behaviour of the `whole' often seems to be more than what is readily explainable in terms of the `sum of the parts.'
The apparent dichotomy between information-processing and dynamical approaches to complexity science forces researchers to choose between two diverging sets of tools and explanations, creating conflict and often hindering scientific progress.
Learning and compression are driven by the common aim of identifying and exploiting statistical regularities in data, which opens the door for fertile collaboration between these areas.
We introduce a novel framework to identify perception-action loops (PALOs) directly from data based on the principles of computational mechanics.
In a more complex Animal-AI environment, our agents (using the same neural architecture) are able to simulate future state transitions and actions (i.e., plan), evincing reward-directed navigation despite temporary suspension of visual input.
The broad concept of emergence is instrumental in several of the most challenging open scientific questions -- yet, few quantitative theories of what constitutes emergent phenomena have been proposed.
Most information dynamics and statistical causal analysis frameworks rely on the common intuition that causal interactions are intrinsically pairwise -- every 'cause' variable has an associated 'effect' variable, so that a 'causal arrow' can be drawn between them.
The behavioral dynamics of multi-agent systems have a rich and orderly structure, which can be leveraged to understand these systems, and to improve how artificial agents learn to operate in them.
Recent complexity measures such as integrated information have sought to operationalize this problem by taking a whole-versus-parts perspective, wherein one explicitly computes the amount of information generated by a network as a whole over and above that generated by the sum of its parts during state transitions.
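The whole-versus-parts comparison can be illustrated on a toy system. The sketch below (an illustrative simplification, not any particular published measure) computes the mutual information between past and future states of a two-node Boolean network as a whole, subtracts the per-node past-future informations, and reports the excess. The XOR dynamics are a hypothetical example chosen so that each node alone is unpredictable while the whole system is fully predictable.

```python
import numpy as np
from itertools import product

def mutual_information(joint):
    """Mutual information (bits) from a joint probability table p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy 2-node Boolean network: both nodes update to the XOR of the pair.
states = list(product([0, 1], repeat=2))  # uniform distribution over pasts

def step(a, b):
    return (a ^ b, a ^ b)

# Information generated by the network as a whole: I(past state; future state)
idx = {s: i for i, s in enumerate(states)}
joint_whole = np.zeros((4, 4))
for s in states:
    joint_whole[idx[s], idx[step(*s)]] += 1 / 4
info_whole = mutual_information(joint_whole)

# Information generated by the parts: sum over nodes of I(own past; own future)
info_parts = 0.0
for node in (0, 1):
    joint_part = np.zeros((2, 2))
    for s in states:
        joint_part[s[node], step(*s)[node]] += 1 / 4
    info_parts += mutual_information(joint_part)

excess = info_whole - info_parts  # whole over and above the sum of the parts
print(info_whole, info_parts, excess)  # → 1.0 0.0 1.0
```

Here each node's future is independent of its own past (0 bits per part), yet the joint past determines the joint future exactly (1 bit for the whole), so the excess is positive: the hallmark pattern such measures are designed to detect.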
We study a variant of the variational autoencoder model (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models.
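The generative side of such a model replaces the standard Gaussian prior with a mixture: first draw a cluster, then a latent code from that cluster's Gaussian, then decode. The sketch below is a minimal numpy illustration of this ancestral sampling, with a hypothetical linear "decoder" and arbitrary dimensions standing in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical configuration: K mixture components in a 2-D latent space.
K, latent_dim, obs_dim = 3, 2, 5
pi = np.full(K, 1.0 / K)                                  # mixture weights
mu = rng.normal(size=(K, latent_dim))                     # component means
log_sigma = rng.normal(scale=0.1, size=(K, latent_dim))   # component log-stds
W = rng.normal(size=(latent_dim, obs_dim))                # toy linear decoder

def sample(n):
    """Ancestral sampling: k ~ Cat(pi), z ~ N(mu_k, sigma_k^2), x = decode(z)."""
    k = rng.choice(K, size=n, p=pi)
    eps = rng.normal(size=(n, latent_dim))
    z = mu[k] + np.exp(log_sigma[k]) * eps                # reparameterization
    x = z @ W                                             # decoder mean
    return k, z, x

k, z, x = sample(1000)
```

Because each latent code is tied to a discrete cluster assignment, inferring the posterior over k for a data point yields an unsupervised clustering as a by-product of training the generative model.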