Reinforcement learning (RL) agents improve through trial and error, but when
reward is sparse and the agent cannot discover successful action sequences,
learning stagnates. This has been a notable problem in training deep RL agents
to perform web-based tasks, such as booking flights or replying to emails,
where a single mistake can ruin the entire sequence of actions. A common remedy
is to "warm-start" the agent by pre-training it to mimic expert demonstrations,
but this is prone to overfitting. Instead, we propose to constrain exploration
using demonstrations. From each demonstration, we induce high-level "workflows"
which constrain the allowable actions at each time step to be similar to those
in the demonstration (e.g., "Step 1: click on a textbox; Step 2: enter some
text"). Our exploration policy then learns to identify successful workflows and
samples actions that satisfy these workflows. Workflows prune out bad
exploration directions and accelerate the agent's ability to discover rewards.
We use our approach to train a novel neural policy designed to handle the
semi-structured nature of websites, and evaluate it on a suite of web tasks,
including the recent World of Bits benchmark. We achieve new state-of-the-art
results, and show that workflow-guided exploration improves sample efficiency
over behavioral cloning by more than 100x.
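
To make the mechanism concrete, here is a minimal, self-contained Python
sketch of workflow-guided exploration under toy assumptions, not the paper's
actual implementation: actions are reduced to (verb, element) pairs, the
exploration policy is reduced to an epsilon-greedy bandit over induced
workflows, and all names (WorkflowStep, ToyFormEnv, explore_episode, and so
on) are illustrative inventions.

```python
import random
from dataclasses import dataclass

# An action is a (verb, element) pair, e.g. ("click", "textbox#0"). A
# workflow step keeps only the coarse shape of a demonstrated action
# ("click on a textbox"), dropping the exact element it targeted.
@dataclass(frozen=True)
class WorkflowStep:
    verb: str          # "click", "type", ...
    element_kind: str  # "textbox", "button", ...

    def admits(self, action):
        verb, element = action
        return verb == self.verb and element.startswith(self.element_kind)

def induce_workflow(demo):
    """Lift one demonstration (a list of concrete actions) to a workflow."""
    return tuple(WorkflowStep(verb, element.split("#")[0])
                 for verb, element in demo)

def sample_constrained_action(step, candidates, rng):
    """Sample uniformly among candidate actions satisfying the workflow step;
    fall back to unconstrained sampling if nothing matches."""
    admissible = [a for a in candidates if step.admits(a)]
    return rng.choice(admissible or candidates)

def explore_episode(env, workflows, stats, rng, epsilon=0.2):
    """Pick a workflow epsilon-greedily by its average return so far, then
    roll it out, sampling one admissible action per workflow step."""
    if rng.random() < epsilon:
        i = rng.randrange(len(workflows))
    else:
        i = max(range(len(workflows)),
                key=lambda j: stats[j][0] / max(stats[j][1], 1))
    obs, total, done = env.reset(), 0.0, False
    for step in workflows[i]:
        if done:
            break
        action = sample_constrained_action(step, env.candidates(obs), rng)
        obs, reward, done = env.step(action)
        total += reward
    stats[i][0] += total   # cumulative return of workflow i
    stats[i][1] += 1       # episode count of workflow i
    return total

class ToyFormEnv:
    """Toy page with two textboxes; reward 1 iff the agent clicks
    textbox#0 and then types into it."""
    ELEMENTS = ["textbox#0", "textbox#1", "button#0", "button#1", "link#0"]

    def reset(self):
        self.history = []
        return self.history

    def candidates(self, obs):
        return [(v, e) for v in ("click", "type") for e in self.ELEMENTS]

    def step(self, action):
        self.history.append(action)
        done = len(self.history) >= 2
        success = self.history[:2] == [("click", "textbox#0"),
                                       ("type", "textbox#0")]
        return self.history, float(done and success), done

rng = random.Random(0)
demos = [[("click", "textbox#0"), ("type", "textbox#0")]]
workflows = [induce_workflow(d) for d in demos]
stats = [[0.0, 0] for _ in workflows]   # per-workflow [return sum, count]
env = ToyFormEnv()
wins = sum(explore_episode(env, workflows, stats, rng) for _ in range(100))
print(f"successes in 100 constrained episodes: {int(wins)}")
```

In this toy setup the constrained sampler only has to disambiguate the two
textboxes, so a successful episode occurs with probability 1/4 per rollout,
whereas unconstrained uniform sampling over the full verb-element product
space (10 actions per step) would succeed with probability 1/100; this is the
pruning effect described above, in miniature.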