We study discrete time dynamical systems governed by the state equation
$h_{t+1}=\phi(Ah_t+Bu_t)$. Here $A,B$ are weight matrices, $\phi$ is an
activation function, and $u_t$ is the input data. This relation is the backbone
of recurrent neural networks (e.g. LSTMs) which have broad applications in
sequential learning tasks. We utilize stochastic gradient descent to learn the
weight matrices from a finite input/state trajectory $(u_t,h_t)_{t=0}^N$. We
prove that the SGD estimate converges linearly to the ground truth weights while
using near-optimal sample size. Our results apply to increasing activations
whose derivatives are bounded away from zero. The analysis is based on i) a
novel SGD convergence result with nonlinear activations and ii) careful
statistical characterization of the state vector. Numerical experiments verify
the fast convergence of SGD on ReLU and leaky ReLU, consistent with our
theory.
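Below is a minimal NumPy sketch of this setup: it generates a trajectory $(u_t,h_t)_{t=0}^N$ from ground truth weights under a leaky ReLU activation (an increasing activation whose derivative is bounded away from zero) and runs SGD on the squared one-step prediction loss. The dimensions, step size, and weight scaling are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): state dim n, input dim p,
# trajectory length N, and leaky-ReLU slope alpha > 0.
n, p, N = 8, 4, 2000
alpha = 0.2

def phi(x):
    """Leaky ReLU: increasing, with derivative phi'(x) >= alpha > 0."""
    return np.where(x > 0, x, alpha * x)

def dphi(x):
    return np.where(x > 0, 1.0, alpha)

# Ground truth weights; A_true is scaled down so the state stays stable.
A_true = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
B_true = rng.standard_normal((n, p)) / np.sqrt(p)

# Generate the input/state trajectory h_{t+1} = phi(A h_t + B u_t).
U = rng.standard_normal((N, p))
H = np.zeros((N + 1, n))
for t in range(N):
    H[t + 1] = phi(A_true @ H[t] + B_true @ U[t])

# SGD on the loss ||phi(A h_t + B u_t) - h_{t+1}||^2, sampling one
# transition (u_t, h_t, h_{t+1}) per iteration.
A_hat = np.zeros((n, n))
B_hat = np.zeros((n, p))
lr = 0.05
for step in range(20000):
    t = rng.integers(N)
    z = A_hat @ H[t] + B_hat @ U[t]
    r = (phi(z) - H[t + 1]) * dphi(z)   # residual passed through phi'
    A_hat -= lr * np.outer(r, H[t])
    B_hat -= lr * np.outer(r, U[t])

print("||A_hat - A||_F =", np.linalg.norm(A_hat - A_true))
print("||B_hat - B||_F =", np.linalg.norm(B_hat - B_true))
```

With a strictly increasing activation such as leaky ReLU, the recovery errors above shrink steadily with the number of SGD steps, mirroring the linear convergence behavior described in the abstract.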