1 code implementation • 28 Mar 2024 • Ole Hall, Anil Yaman
In this work, we first employ GANs trained to produce creative images using an architecture known as Creative Adversarial Networks (CANs); we then use an evolutionary approach to navigate the latent space of the trained models to discover images.
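A minimal sketch of the second step — evolving latent vectors of a trained generator — assuming a stand-in generator and a toy fitness function in place of the actual CAN and aesthetic score (all names here are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16

def generate(z):
    # Stand-in for the trained CAN generator G(z) -> image.
    return np.tanh(z)  # placeholder "image"

def fitness(z):
    # Stand-in for an image-quality / novelty score of G(z).
    return -np.sum((generate(z) - 0.5) ** 2)

def evolve(pop_size=20, generations=50, sigma=0.1):
    # Simple (mu + lambda)-style evolution over latent vectors:
    # keep the best half, mutate them with Gaussian noise.
    pop = rng.standard_normal((pop_size, LATENT_DIM))
    history = []
    for _ in range(generations):
        scores = np.array([fitness(z) for z in pop])
        history.append(scores.max())
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # elitist selection
        children = parents + sigma * rng.standard_normal(parents.shape)
        pop = np.vstack([parents, children])                 # mu + lambda
    scores = np.array([fitness(z) for z in pop])
    return pop[np.argmax(scores)], history

best, history = evolve()
```

Because the selected parents survive each generation, the best score is non-decreasing over the run; the evolved `best` can then be decoded by the generator.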
1 code implementation • 18 Sep 2023 • Corinna Triebold, Anil Yaman
Neuro-evolutionary methods have proven effective in addressing a wide range of tasks.
no code implementations • 10 Aug 2022 • Anil Yaman, Joel Z. Leibo, Giovanni Iacca, Sang Wan Lee
Here we show that by introducing a model of social norms, which we regard as emergent patterns of decentralized social sanctioning, it becomes possible for groups of self-interested individuals to learn a productive division of labor involving all critical roles.
no code implementations • 27 Apr 2022 • Anil Yaman, Tim Van der Lee, Giovanni Iacca
With the advent of cheap, miniaturized electronics, ubiquitous networking has reached an unprecedented level of complexity, scale and heterogeneity, becoming the core of several modern applications such as smart industry, smart buildings and smart cities.
1 code implementation • 18 Jun 2021 • Anil Yaman, Nicolas Bredeche, Onur Çaylak, Joel Z. Leibo, Sang Wan Lee
Based on these findings, we hypothesized that meta-control of individual and social learning strategies provides effective and sample-efficient learning in volatile and uncertain environments.
no code implementations • 31 Mar 2021 • Ahmed Hallawa, Anil Yaman, Giovanni Iacca, Gerd Ascheid
Notably, the KIEA framework is EA-agnostic (i.e., it works with any evolutionary algorithm), problem-independent (i.e., it is not dedicated to a specific type of problem), and expandable (i.e., its knowledge base can grow over time).
3 code implementations • 24 Jun 2020 • Shiwei Liu, Tim Van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu
However, comparing different sparse topologies and determining how sparse topologies evolve during training, especially when sparse structure optimization is involved, remain challenging open questions.
no code implementations • 28 Mar 2020 • Anil Yaman, Giovanni Iacca
In several network problems the optimal behavior of the agents (i.e., the nodes of the network) is not known before deployment.
no code implementations • 10 Feb 2020 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, George Fletcher, Mykola Pechenizkiy
A learning process with plasticity often requires reinforcement signals to guide it.
no code implementations • 2 Apr 2019 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, Matt Coler, George Fletcher, Mykola Pechenizkiy
Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons.
no code implementations • 22 Mar 2019 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, George Fletcher, Mykola Pechenizkiy
Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e., rules that update synapses based on neuron activations and reinforcement signals.
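A generic reward-modulated Hebbian update of this kind can be sketched as follows — a minimal illustration of a rule driven by pre- and postsynaptic activations and a reinforcement signal, not the specific formulation evolved in the paper:

```python
import numpy as np

def hebbian_update(w, pre, post, reward, eta=0.01):
    # Reward-modulated Hebbian rule:
    # delta_w[i, j] = eta * reward * pre[i] * post[j]
    # Synapses between co-active neurons are strengthened (or weakened,
    # if the reinforcement signal is negative).
    return w + eta * reward * np.outer(pre, post)

w = np.zeros((3, 2))                 # synapses: 3 pre -> 2 post neurons
pre = np.array([1.0, 0.0, 1.0])      # presynaptic activations
post = np.array([0.5, -0.5])         # postsynaptic activations
w = hebbian_update(w, pre, post, reward=1.0)
```

Only synapses whose presynaptic neuron was active change; the sign of the change follows the product of the activations and the reward.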
no code implementations • 19 Apr 2018 • Anil Yaman, Decebal Constantin Mocanu, Giovanni Iacca, George Fletcher, Mykola Pechenizkiy
Many real-world control and classification tasks involve a large number of features.