Many applications in speech, robotics, finance, and biology deal with
sequential data, where ordering matters and recurrent structures are common. However, this recurrent structure is difficult to capture with standard kernel functions.
To model such structure, we propose expressive closed-form kernel functions for
Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the
inductive biases of long short-term memory (LSTM) recurrent networks, while
retaining the non-parametric probabilistic advantages of Gaussian processes. We
learn the properties of the proposed kernels by optimizing the Gaussian process
marginal likelihood using a new provably convergent semi-stochastic gradient
procedure, and we exploit the structure of these kernels for scalable training
and prediction. This approach provides a practical representation for Bayesian
LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and
thoroughly investigate a consequential autonomous driving application, where
the predictive uncertainties provided by GP-LSTM are uniquely valuable.
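
As a rough illustration of the kind of model described above, the sketch below (in PyTorch, which the paper does not mandate) builds a Gaussian process whose kernel acts on LSTM embeddings of input sequences and fits all parameters, recurrent weights and kernel hyperparameters alike, by maximizing the exact log marginal likelihood. It is a minimal sketch under several assumptions: the data are synthetic stand-ins, the base kernel is a generic RBF on the LSTM's final hidden state rather than the paper's closed-form recurrent kernels, and optimization uses a standard full-batch Adam loop instead of the proposed semi-stochastic gradient procedure or the structured scalability tricks.

```python
import math
import torch

torch.manual_seed(0)

# Toy sequential data: N sequences of length T with scalar targets (illustrative only).
N, T = 64, 20
X = torch.randn(N, T, 1)
y = torch.sin(X.sum(dim=(1, 2))) + 0.1 * torch.randn(N)

# LSTM encoder: maps each sequence to a fixed-length embedding on which the kernel acts.
lstm = torch.nn.LSTM(input_size=1, hidden_size=16, batch_first=True)

# Kernel hyperparameters, stored on a log scale so they stay positive.
log_lengthscale = torch.zeros(1, requires_grad=True)
log_outputscale = torch.zeros(1, requires_grad=True)
log_noise = torch.tensor([-2.0], requires_grad=True)

def embed(x):
    # Use the LSTM's final hidden state as the sequence embedding.
    _, (h, _) = lstm(x)
    return h[-1]                                   # shape: (N, hidden)

def rbf_kernel(z):
    # Squared-exponential kernel evaluated on the LSTM embeddings.
    d2 = (z.unsqueeze(1) - z.unsqueeze(0)).pow(2).sum(-1)
    return log_outputscale.exp() * torch.exp(-0.5 * d2 / log_lengthscale.exp() ** 2)

def neg_log_marginal_likelihood(x, y):
    # Exact GP negative log marginal likelihood with a Gaussian noise model.
    z = embed(x)
    K = rbf_kernel(z) + log_noise.exp() * torch.eye(len(y))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(-1), L)       # K^{-1} y
    return (0.5 * (y @ alpha).squeeze()
            + L.diagonal().log().sum()
            + 0.5 * len(y) * math.log(2 * math.pi))

# Jointly optimize recurrent weights and kernel hyperparameters.
params = list(lstm.parameters()) + [log_lengthscale, log_outputscale, log_noise]
opt = torch.optim.Adam(params, lr=0.01)
for step in range(200):
    opt.zero_grad()
    loss = neg_log_marginal_likelihood(X, y)
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step}: NLML = {loss.item():.3f}")
```

Because all parameters are trained through the marginal likelihood, the LSTM here plays the role of a learned feature map inside the kernel rather than a standalone predictor, which is what gives the resulting model GP-style predictive uncertainties.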