no code implementations • 18 Jul 2023 • Ettore Randazzo, Alexander Mordvintsev
Finally, we show how to perform interactive evolution, where the user interactively guides the evolution of a plant model and then deploys it in a larger environment.
2 code implementations • 19 Feb 2023 • Ettore Randazzo, Alexander Mordvintsev, Craig Fouts
Neural Cellular Automata (NCA) models have shown remarkable capacity for pattern formation and complex global behaviors stemming from local coordination.
no code implementations • 6 Feb 2023 • Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson
We present a differentiable formulation of abstract chemical reaction networks (CRNs) that can be trained to solve a variety of computational tasks.
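A differentiable CRN can be sketched with ordinary mass-action kinetics, where every operation is smooth in the rate constants and concentrations (the paper's actual formulation may differ; the reactions, rate values, and step size below are purely illustrative):

```python
import numpy as np

# Mass-action kinetics for a toy CRN, written entirely with
# differentiable array operations (any autodiff framework could
# differentiate the same expressions with respect to `rates`).
# Reactions (illustrative): A + B -> C,  C -> A.  Species: [A, B, C].
reactants = np.array([[1, 1, 0],    # A + B -> C consumes A and B
                      [0, 0, 1]])   # C -> A   consumes C
products  = np.array([[0, 0, 1],
                      [1, 0, 0]])
rates = np.array([0.5, 0.1])        # rate constants (trainable in principle)

def crn_step(c, dt=0.01):
    """One explicit-Euler step of dc/dt = S^T r(c), mass-action rates."""
    r = rates * np.prod(c ** reactants, axis=1)   # k_i * prod_j c_j^{s_ij}
    return c + dt * (products - reactants).T @ r

c = np.array([1.0, 1.0, 0.0])       # initial concentrations of A, B, C
for _ in range(1000):
    c = crn_step(c)
```

Because the dynamics are differentiable end-to-end, gradients of a task loss with respect to the rate constants can be obtained by backpropagating through the unrolled integration steps.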
1 code implementation • 15 Dec 2022 • Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, Max Vladymyrov
We start by providing a simple weight construction that shows the equivalence of data transformations induced by 1) a single linear self-attention layer and by 2) gradient-descent (GD) on a regression loss.
no code implementations • 3 May 2022 • Alexander Mordvintsev, Ettore Randazzo, Craig Fouts
Modeling the ability of multicellular organisms to build and maintain their bodies through local interactions between individual cells (morphogenesis) is a long-standing challenge of developmental biology.
no code implementations • 22 Jun 2021 • Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson
Reaction-Diffusion (RD) systems provide a computational framework that governs many pattern formation processes in nature.
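A minimal RD example is the classic Gray-Scott system; the parameter values below are standard illustrative choices, not taken from the paper:

```python
import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
          + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

n = 64
U = np.ones((n, n)); V = np.zeros((n, n))
U[28:36, 28:36] = 0.5; V[28:36, 28:36] = 0.25   # seed perturbation
Du, Dv, f, k = 0.16, 0.08, 0.035, 0.065          # diffusion / feed / kill

for _ in range(2000):
    uvv = U * V * V                               # reaction term
    U += Du * laplacian(U) - uvv + f * (1 - U)
    V += Dv * laplacian(V) + uvv - (f + k) * V
```

Iterating the two coupled update equations from a small perturbation is enough to produce spot- and stripe-like patterns reminiscent of those found in nature.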
3 code implementations • 15 May 2021 • Alexander Mordvintsev, Eyvind Niklasson, Ettore Randazzo
Neural Cellular Automata (NCA) have shown a remarkable ability to learn the required rules to "grow" images, classify morphologies, segment images, as well as to do general computation such as path-finding.
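The core NCA update can be sketched as follows. This is an untrained toy with random weights and a linear per-cell rule for brevity; the perception filters and stochastic "fire rate" follow the common growing-NCA recipe, but the channel count and grid size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 8, 16, 16                       # channels, grid size
state = np.zeros((C, H, W))
state[:, H // 2, W // 2] = 1.0            # single seed cell

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
filters = [identity, sobel_x, sobel_x.T]  # local perception kernels

def conv2d_same(Z, k):
    # naive 3x3 convolution with periodic boundaries
    out = np.zeros_like(Z)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * np.roll(Z, (1 - dy, 1 - dx), axis=(0, 1))
    return out

# tiny per-cell rule: 3C perception features -> C-channel residual update
W1 = rng.normal(scale=0.1, size=(C, 3 * C))

def nca_step(s, fire_rate=0.5):
    percep = np.concatenate([np.stack([conv2d_same(ch, k) for ch in s])
                             for k in filters])       # (3C, H, W)
    ds = np.tensordot(W1, percep, axes=1)             # (C, H, W)
    mask = rng.random((1, H, W)) < fire_rate          # stochastic updates
    return s + ds * mask

for _ in range(20):
    state = nca_step(state)
```

Training replaces the random `W1` with weights learned by backpropagating a task loss (image reconstruction, classification, etc.) through the unrolled steps.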
no code implementations • 11 Aug 2020 • Mark Sandler, Andrey Zhmoginov, Liangcheng Luo, Alexander Mordvintsev, Ettore Randazzo, Blaise Agüera y Arcas
The update rule is applied repeatedly, in parallel, to a large random subset of cells; after convergence, the resulting segmentation masks are used to compute a loss that is back-propagated to learn the optimal update rule with standard gradient descent.
2 code implementations • 2 Jul 2020 • Ettore Randazzo, Eyvind Niklasson, Alexander Mordvintsev
We present the Message Passing Learning Protocol (MPLP), a novel method for learning the weights of an artificial neural network.