
Taking gradients through experiments: LSTMs and memory proximal policy optimization for black-box quantum control

In this work we introduce black-box quantum control as an interesting reinforcement learning problem to the machine learning community. We analyze the structure of the reinforcement learning problems arising in quantum physics and argue that agents parameterized by long short-term memory (LSTM) networks trained via stochastic policy gradients provide a general method for solving them. Based on this analysis, we introduce a variant of the proximal policy optimization (PPO) algorithm called memory proximal policy optimization (MPPO). We then show how it can be applied to specific learning tasks and present numerical experiments demonstrating that our method achieves state-of-the-art results on several quantum control tasks with discrete and continuous control parameters.
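To make the basic ingredients concrete, the following is a minimal sketch of an LSTM-parameterized policy combined with the standard PPO clipped surrogate loss. It is not the authors' MPPO algorithm or their quantum-control environment; the network sizes, observation/action dimensions, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's MPPO): an LSTM policy head with a
# PPO-style clipped surrogate loss, assuming discrete control actions.
# All sizes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class LSTMPolicy(nn.Module):
    """Recurrent policy: observation sequences -> action logits and value estimates."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.pi = nn.Linear(hidden, n_actions)   # policy logits
        self.v = nn.Linear(hidden, 1)            # state-value head

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim)
        h, state = self.lstm(obs_seq, state)
        return self.pi(h), self.v(h).squeeze(-1), state


def ppo_clip_loss(new_logp, old_logp, adv, clip_eps: float = 0.2):
    """Standard PPO clipped surrogate objective (returned as a loss to minimize)."""
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()


# Toy usage with random data, just to show the shapes involved.
if __name__ == "__main__":
    policy = LSTMPolicy(obs_dim=4, n_actions=3)
    obs = torch.randn(8, 10, 4)              # 8 episodes of 10 control steps each
    logits, values, _ = policy(obs)
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()
    old_logp = dist.log_prob(actions).detach()
    adv = torch.randn(8, 10)                 # placeholder advantage estimates
    loss = ppo_clip_loss(dist.log_prob(actions), old_logp, adv)
    loss.backward()
    print(f"clipped surrogate loss: {loss.item():.4f}")
```

The recurrent state lets the agent condition on the full history of measurement outcomes rather than a single observation, which is the property the abstract argues matters for the partially observed control problems arising in quantum experiments.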
