Addressing the need for explainable Machine Learning has emerged as one of the most important research directions in modern Artificial Intelligence (AI).
Here we show that by introducing a model of social norms, which we regard as patterns of decentralized social sanctioning, it becomes possible for groups of self-interested individuals to learn a productive division of labor involving all critical roles.
We demonstrate the applicability of this framework on two algorithms, Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Differential Evolution (DE), learning adaptation policies for the step-size in CMA-ES and for the scale factor and crossover rate in DE.
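For context, a minimal sketch of the DE generation step that these two parameters govern: F (the scale factor) scales the differential mutation and CR (the crossover rate) controls binomial crossover. This is an illustrative implementation under our own naming, not the adaptive variant described above.

```python
import numpy as np

def de_generation(pop, fitness_fn, F=0.8, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin (minimization).
    F: scale factor for the differential mutation.
    CR: crossover rate for binomial crossover."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    fit = np.array([fitness_fn(x) for x in pop])
    new_pop = pop.copy()
    for i in range(n):
        # pick three distinct individuals, all different from i
        a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)            # differential mutation
        cross = rng.random(d) < CR          # binomial crossover mask
        cross[rng.integers(d)] = True       # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        if fitness_fn(trial) <= fit[i]:     # greedy one-to-one selection
            new_pop[i] = trial
    return new_pop

# usage: minimize the sphere function in 4 dimensions
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 4))
sphere = lambda x: float(np.sum(x**2))
for _ in range(50):
    pop = de_generation(pop, sphere, F=0.7, CR=0.9, rng=rng)
best = min(sphere(x) for x in pop)
```

Learning a policy for F and CR, as the text proposes, amounts to replacing the fixed constants in this loop with values produced per generation by the learned controller.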
With the advent of cheap, miniaturized electronics, ubiquitous networking has reached an unprecedented level of complexity, scale and heterogeneity, becoming the core of several modern applications such as smart industry, smart buildings and smart cities.
Finding the most influential nodes in a network is a computationally hard problem with applications to many kinds of network-based problems.
For this reason, several works have applied machine learning techniques, often with the help of special-purpose simulators, to generate policies that were more effective than the ones obtained by governments.
However, they struggle when working with raw data, especially when the input dimensionality increases and the raw inputs alone do not give valuable insight into the decision-making process.
In many contexts, social networks are nowadays the main channel through which information is transmitted and opinions and actions are influenced.
Several methods able to generate adversarial samples make use of gradients, which usually are not available to an attacker in real-world scenarios.
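One gradient-free alternative can be sketched with a simple random-search attack: the attacker only queries the model's predictions, never its gradients, matching the black-box constraint described above. The function and parameter names here are illustrative assumptions, not an API from the cited work.

```python
import numpy as np

def random_search_attack(predict, x, true_label, eps=0.5, steps=200, rng=None):
    """Black-box attack sketch: try random perturbations within an
    L-infinity ball of radius eps, returning the first one that flips
    the model's prediction. Only predict() is queried; no gradients."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(steps):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != true_label:
            return x + delta
    return None  # no adversarial example found within the query budget

# toy usage: a linear classifier on 2-D points
w, b = np.array([1.0, -1.0]), 0.0
predict = lambda x: int(w @ x + b > 0)
x = np.array([0.2, 0.0])  # classified as 1, with a small margin
adv = random_search_attack(predict, x, true_label=1, eps=0.5,
                           rng=np.random.default_rng(0))
```

More sample-efficient black-box strategies (e.g. evolutionary search) follow the same query-only pattern, replacing the uniform sampling with a guided search.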
Designing optimal soft modular robots is difficult, due to non-trivial interactions between morphology and controller.
Notably, the KIEA framework is EA-agnostic (i.e., it works with any evolutionary algorithm), problem-independent (i.e., it is not dedicated to a specific type of problem), and expandable (i.e., its knowledge base can grow over time).
We apply this methodology, in silico, to six test cases of urban networks made of hundreds of nodes, and find that GI produces consistent gains in delivery probability in four cases.
We present a two-level optimization scheme that combines the advantages of evolutionary algorithms with the advantages of Q-learning.
Furthermore, we propose that this distinction is decided by the evolutionary process, thus allowing evo-RL to be adaptive to different environments.
Although very effective, evolutionary algorithms rely heavily on having a large population of individuals (i.e., network architectures) and are therefore memory-expensive.
Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons.
Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e., rules that update synapses based on the neuron activations and reinforcement signals.
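The core of such a rule can be sketched in a few lines: each synapse changes in proportion to the product of its pre- and post-synaptic activations, optionally scaled by a reinforcement signal. This is a generic reward-modulated Hebbian update under our own naming; the exact rule in any given work may differ.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.1, reward=1.0):
    """Reward-modulated Hebbian rule for a linear layer with weight
    matrix w (outputs x inputs): delta_w = eta * reward * post * pre^T.
    Co-active pre/post pairs strengthen their connecting synapse."""
    return w + eta * reward * np.outer(post, pre)

# toy usage: repeated co-activation strengthens only the active synapses
w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])   # inputs 0 and 2 are active
for _ in range(5):
    post = w @ pre + np.array([1.0, 0.0])  # output neuron 0 is driven
    w = hebbian_update(w, pre, post, eta=0.1)
```

After a few iterations, only the synapses linking the active inputs to the driven output grow; synapses touching inactive neurons stay at zero, which is the locality property the text refers to.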
We perform extensive tests of different DOWSN configurations on a benchmark made up of continuous optimization problems; we analyze the influence of the network parameters (number of nodes, inter-node communication period and probability of accepting incoming solutions) on the optimization performance.
Many real-world control and classification tasks involve a large number of features.