Search Results for author: Alexey A. Melnikov

Found 10 papers, 2 papers with code

Quantum walk processes in quantum devices

1 code implementation • 28 Dec 2020 • Anandu Kalleri Madhu, Alexey A. Melnikov, Leonid E. Fedichkin, Alexander Alodjants, Ray-Kuang Lee

First, we discuss ways to obtain graphs corresponding to a given quantum circuit.

Quantum Physics

Setting up experimental Bell test with reinforcement learning

no code implementations • 4 May 2020 • Alexey A. Melnikov, Pavel Sekatski, Nicolas Sangouard

Finding optical setups that produce measurement results with a targeted probability distribution is hard, as a priori the number of possible experimental implementations grows exponentially with the number of modes and the number of devices.

Reinforcement Learning +1

Machine learning for long-distance quantum communication

2 code implementations • 24 Apr 2019 • Julius Wallnöfer, Alexey A. Melnikov, Wolfgang Dür, Hans J. Briegel

But can it also be used to find novel protocols and algorithms for applications such as large-scale quantum communication?

BIG-bench Machine Learning, Decision Making +1

Predicting quantum advantage by quantum walk with convolutional neural networks

no code implementations • 30 Jan 2019 • Alexey A. Melnikov, Leonid E. Fedichkin, Alexander Alodjants

Quantum walks on graphs are fundamentally different from their classical random walk analogs; in particular, they spread faster than classical walks on certain graphs, enabling in those cases quantum algorithmic applications and quantum-enhanced energy transfer.
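The speed-up alluded to in the abstract is easy to observe numerically. The following sketch (not from the paper; the node count, evolution time, and the choice of the adjacency matrix as Hamiltonian are illustrative assumptions) compares the spread of a continuous-time quantum walk with that of a classical random walk on a cycle graph:

```python
import numpy as np

N = 101  # nodes on a cycle, large enough that the walk never wraps around

# Adjacency matrix of the cycle graph
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0

# Continuous-time quantum walk: psi(t) = exp(-i A t) psi(0),
# computed via the eigendecomposition of the symmetric matrix A
w, V = np.linalg.eigh(A)

def ctqw(t, start=N // 2):
    psi0 = np.zeros(N, dtype=complex)
    psi0[start] = 1.0
    psi_t = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))
    return np.abs(psi_t) ** 2  # position probability distribution

# Classical random walk on the same cycle (hop left/right with prob. 1/2)
T = A / 2.0

def crw(steps, start=N // 2):
    p = np.zeros(N)
    p[start] = 1.0
    for _ in range(steps):
        p = T @ p
    return p

def spread(p):
    """Standard deviation of the walker's position distribution."""
    x = np.arange(N)
    return np.sqrt(p @ (x - p @ x) ** 2)

t = 20
# Quantum spread grows ballistically (~t); classical grows diffusively (~sqrt(t))
print(spread(ctqw(t)), spread(crw(t)))
```

At t = 20 the quantum walker's standard deviation is several times larger than the classical one, the ballistic-versus-diffusive separation that underlies walk-based quantum advantage.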

Benchmarking projective simulation in navigation problems

no code implementations • 23 Apr 2018 • Alexey A. Melnikov, Adi Makmal, Hans J. Briegel

Projective simulation (PS) is a model for intelligent agents with a deliberation capacity that is based on episodic memory.

Benchmarking, Q-Learning +3
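The core mechanics of a basic two-layer projective simulation agent (percept clips connected to action clips by weighted edges, with a random hop deciding the action) fit in a short sketch. The toy task, the damping parameter γ, and all other values below are illustrative assumptions, not taken from the benchmarked setups:

```python
import numpy as np

rng = np.random.default_rng(0)

n_percepts, n_actions = 2, 2
h = np.ones((n_percepts, n_actions))  # edge strengths (h-values), start uniform
gamma = 0.001                         # damping: slowly forgets toward h = 1

def act(s):
    p = h[s] / h[s].sum()             # hop probability from percept clip to action clip
    return rng.choice(n_actions, p=p)

# Toy task: reward 1 when the chosen action index matches the percept index
for trial in range(5000):
    s = rng.integers(n_percepts)
    a = act(s)
    reward = 1.0 if a == s else 0.0
    h = h - gamma * (h - 1.0)         # damping step: h <- h - gamma * (h - 1)
    h[s, a] += reward                 # strengthen the rewarded edge

# After training the agent should almost always pick the matching action
success = np.mean([act(s) == s for s in rng.integers(n_percepts, size=1000)])
print(success)
```

Deliberation here is a single hop through the episodic memory; richer PS agents extend the same update rule with glow (edge-wise eligibility traces) and deeper clip networks.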

Active learning machine learns to create new quantum experiments

no code implementations • 2 Jun 2017 • Alexey A. Melnikov, Hendrik Poulsen Nautrup, Mario Krenn, Vedran Dunjko, Markus Tiersch, Anton Zeilinger, Hans J. Briegel

We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence.

Active Learning

Meta-learning within Projective Simulation

no code implementations • 25 Feb 2016 • Adi Makmal, Alexey A. Melnikov, Vedran Dunjko, Hans J. Briegel

The extended model is examined on three different kinds of reinforcement learning tasks, in which the agent has different optimal values of the meta-parameters, and is shown to perform well, reaching near-optimal to optimal success rates in all of them, without ever needing to manually adjust any meta-parameter.

Meta-Learning, Reinforcement Learning +2

Projective simulation with generalization

no code implementations • 9 Apr 2015 • Alexey A. Melnikov, Adi Makmal, Vedran Dunjko, Hans J. Briegel

Specifically, we show that already in basic (but extreme) environments, learning without generalization may be impossible, and demonstrate how the presented generalization machinery enables the projective simulation agent to learn.

Reinforcement Learning

Projective simulation applied to the grid-world and the mountain-car problem

no code implementations • 21 May 2014 • Alexey A. Melnikov, Adi Makmal, Hans J. Briegel

We compare the performance of the PS agent model with those of existing models and show that the PS agent exhibits competitive performance also in such scenarios.

Benchmarking, Reinforcement Learning +2
