Machine learning algorithms based on parametrized quantum circuits are a prime candidate for near-term applications on noisy quantum computers.
With the advent of real-world quantum computing, the idea that parametrized quantum computations can be used as hypothesis families in a quantum-classical machine learning system is gaining increasing traction.
Social insect colonies routinely face large vertebrate predators, against which they need to mount a collective defense.
Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics.
To make progress in science, we often build abstract representations of physical systems that meaningfully encode information about the systems.
In the past decade, the field of quantum machine learning has drawn significant attention due to the prospect of bringing genuine computational advantages to now-widespread algorithmic methods.
Specifically, we prove that one version of the projective simulation model, understood as a reinforcement learning approach, converges to optimal behavior in a large class of Markov decision processes.
According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is both a necessary and a sufficient condition for the presence of genuine thought.
The last decade has seen an unprecedented growth in artificial intelligence and photonic technologies, both of which drive the limits of modern-day computing devices.
But can machine learning also be used to find novel protocols and algorithms for applications such as large-scale quantum communication?
Using efficient simulations with about 70 data qubits with arbitrary connectivity, we demonstrate that such a reinforcement learning agent can determine near-optimal solutions, in terms of the number of data qubits, for various error models of interest.
Projective simulation (PS) is a model for intelligent agents with a deliberation capacity that is based on episodic memory.
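To make the PS deliberation mechanism concrete, here is a minimal sketch of a two-layer PS agent, assuming the basic update rule in which transition weights (h-values) start at 1, are reinforced on the traversed percept-action edge, and are pulled back toward 1 by a damping parameter gamma. The class name `PSAgent` and the toy environment are illustrative, not from any specific paper.

```python
import random

class PSAgent:
    """Minimal projective simulation (PS) sketch: a two-layer episodic
    memory whose percept clips connect to action clips via h-values.
    Deliberation is a one-step random walk weighted by the h-values."""

    def __init__(self, percepts, actions, gamma=0.0):
        self.actions = list(actions)
        self.gamma = gamma  # damping ("forgetting") parameter
        # h-values initialised to 1, giving a uniform random walk
        self.h = {(p, a): 1.0 for p in percepts for a in self.actions}
        self.last = None  # edge traversed in the most recent deliberation

    def act(self, percept):
        # sample an action with probability proportional to its h-value
        weights = [self.h[(percept, a)] for a in self.actions]
        action = random.choices(self.actions, weights=weights)[0]
        self.last = (percept, action)
        return action

    def learn(self, reward):
        # damping pulls all h-values back toward their initial value 1
        for edge in self.h:
            self.h[edge] -= self.gamma * (self.h[edge] - 1.0)
        # reinforce the edge that produced the rewarded action
        if self.last is not None:
            self.h[self.last] += reward
```

As a usage example, in a toy task where the correct action simply matches the percept, repeated rounds of `act` and `learn` make the rewarded edges dominate, so the agent's success probability approaches 1.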
Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals.
We report a proof-of-principle experimental demonstration of the quantum speed-up for learning agents utilizing a small-scale quantum information processor based on radiofrequency-driven trapped ions.
We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence.
Within our approach, we tackle the problem of quantum enhancements in reinforcement learning as well, and propose a systematic scheme for providing improvements.
The extended model is examined on three different kinds of reinforcement learning tasks, in which the agent has different optimal values of the meta-parameters; it performs well, reaching near-optimal to optimal success rates in all of them, without any manual adjustment of the meta-parameters.
We consider a general class of models, where a reinforcement learning (RL) agent learns from cyclic interactions with an external environment via classical signals.
Specifically, we show that already in basic (but extreme) environments, learning without generalization may be impossible, and demonstrate how the presented generalization machinery enables the projective simulation agent to learn.
Markov chain methods are remarkably successful in computational physics, machine learning, and combinatorial optimization.
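As a concrete instance of such a Markov chain method, the following sketch shows a minimal Metropolis sampler: a random-walk proposal accepted with probability min(1, p(x')/p(x)). The function name and parameters are illustrative assumptions, not taken from any particular paper.

```python
import math
import random

def metropolis(log_p, x0, step=1.0, n=10000, seed=0):
    """Minimal Metropolis sampler (a sketch): symmetric random-walk
    proposals, accepted with probability min(1, p(x')/p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        xp = x + rng.uniform(-step, step)  # propose a nearby point
        # accept with probability min(1, p(xp)/p(x)), in log space
        if math.log(rng.random()) < log_p(xp) - log_p(x):
            x = xp
        samples.append(x)
    return samples

# example: sample from a standard normal (unnormalised log-density)
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0)
```

The resulting chain has the target distribution as its stationary distribution, which is why such methods apply across physics, machine learning, and optimization alike.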
We compare the performance of the PS agent model with those of existing models and show that the PS agent exhibits competitive performance in such scenarios as well.
We study the model of projective simulation (PS), a novel approach to artificial intelligence based on stochastic processing of episodic memory, which was recently introduced [H. J.