We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in
the context of Reinforcement Learning (RL). SAC-X enables learning of complex
behaviors - from scratch - in the presence of multiple sparse reward signals.
To this end, the agent is equipped with a set of general auxiliary tasks that
it attempts to learn simultaneously via off-policy RL. The key idea behind our
method is that active (learned) scheduling and execution of auxiliary policies
allow the agent to explore its environment efficiently, enabling it to excel
at sparse-reward RL. Our experiments in several challenging robotic
manipulation settings demonstrate the power of our approach.
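
The control loop implied by this idea can be summarized compactly. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the names ToyEnv, Scheduler, and run_episode are invented for the example, the scheduler here samples intentions uniformly (the paper's contribution is to learn this choice), and the policies are stand-in callables rather than the off-policy-trained networks SAC-X uses.

```python
"""Illustrative sketch of a scheduled-auxiliary-control loop.

Assumptions (not from the paper): ToyEnv, Scheduler, run_episode, and the
random stand-in policies are all hypothetical stubs for this example.
"""

import random
from collections import deque


class ToyEnv:
    """Stand-in environment emitting one sparse reward per task
    (main task plus auxiliary signals, e.g. 'touched', 'moved')."""

    def __init__(self, num_tasks):
        self.num_tasks = num_tasks

    def reset(self):
        return 0.0

    def step(self, action):
        obs = random.random()
        # Sparse rewards: each signal fires rarely and independently.
        rewards = [float(random.random() < 0.05) for _ in range(self.num_tasks)]
        return obs, rewards, False


class Scheduler:
    """Picks which intention (task) to execute next. Uniform here for
    brevity; in SAC-X the key idea is to *learn* this choice so that
    executing auxiliary policies drives exploration for the main task."""

    def __init__(self, num_tasks):
        self.num_tasks = num_tasks

    def choose(self):
        return random.randrange(self.num_tasks)


def run_episode(env, scheduler, replay, policies, horizon=10, switches=5):
    """Schedule an intention, execute its policy for a fixed horizon,
    and store every transition once for off-policy reuse by all tasks."""
    obs = env.reset()
    for _ in range(switches):
        task = scheduler.choose()                 # active scheduling
        for _ in range(horizon):                  # execute that intention
            action = policies[task](obs)
            obs, rewards, done = env.step(action)
            # Each transition carries *all* task rewards, so one stream of
            # experience can update every policy off-policy.
            replay.append((obs, action, rewards, task))
            if done:
                return


num_tasks = 4
env = ToyEnv(num_tasks)
replay = deque(maxlen=10_000)
policies = [lambda obs: random.random() for _ in range(num_tasks)]
run_episode(env, Scheduler(num_tasks), replay, policies)
```

The design point the sketch highlights is the separation of concerns: the scheduler decides *which* behavior to pursue, while off-policy learning lets every task policy improve from the single shared stream of experience, regardless of which intention generated it.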