KAM Theory Meets Statistical Learning Theory: Hamiltonian Neural Networks with Non-Zero Training Loss

22 Feb 2021  ·  Yuhan Chen, Takashi Matsubara, Takaharu Yaguchi

Many physical phenomena are described by Hamiltonian mechanics through an energy function (the Hamiltonian). Recently, the Hamiltonian neural network, which approximates the Hamiltonian by a neural network, and its extensions have attracted much attention. These methods are powerful, but theoretical analysis of them remains limited. In this study, by combining statistical learning theory with Kolmogorov-Arnold-Moser (KAM) theory, we provide a theoretical analysis of the behavior of Hamiltonian neural networks when the learning error is not exactly zero. A Hamiltonian neural network with non-zero error can be regarded as a perturbation of the true dynamics, and perturbation theory for Hamiltonian systems is precisely the subject of KAM theory. To apply KAM theory, we derive a generalization error bound for Hamiltonian neural networks by estimating the covering number of the gradient of the multi-layer perceptron, which is the key ingredient of the model. This error bound yields the $L^\infty$ bound on the Hamiltonian that is required to apply KAM theory.
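
The paper provides no implementation, but the idea of a Hamiltonian neural network is concrete enough to sketch. Below is a minimal sketch (not from the paper; the class name HamiltonianNN, the architecture, and all hyperparameters are illustrative assumptions) of a multi-layer perceptron that approximates the Hamiltonian H(q, p) and whose gradient, obtained by automatic differentiation, defines the learned vector field through Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq. Training minimizes the mismatch between this vector field and observed time derivatives; whatever residual remains after training is the non-zero learning error that the paper treats as a perturbation of the true dynamics.

import torch
import torch.nn as nn

class HamiltonianNN(nn.Module):
    """Multi-layer perceptron approximating the Hamiltonian H(q, p)."""

    def __init__(self, dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # x holds (q, p) concatenated along the last axis; output is a scalar H per sample.
        return self.net(x)

    def time_derivative(self, x):
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq,
        # with the partial derivatives obtained by automatic differentiation.
        x = x.clone().requires_grad_(True)
        H = self.forward(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dHdq, dHdp = dH.chunk(2, dim=-1)
        return torch.cat([dHdp, -dHdq], dim=-1)

# Synthetic training data from a harmonic oscillator, H = (p^2 + q^2) / 2,
# used here only so that the sketch runs end to end.
x_obs = torch.randn(256, 2)                 # sampled (q, p) states
q, p = x_obs.chunk(2, dim=-1)
dxdt_obs = torch.cat([p, -q], dim=-1)       # exact time derivatives

model = HamiltonianNN(dim=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    optimizer.zero_grad()
    loss = ((model.time_derivative(x_obs) - dxdt_obs) ** 2).mean()
    loss.backward()
    optimizer.step()
# The final value of `loss` is the non-zero training error whose effect on the
# learned dynamics is what the paper analyzes via KAM theory.

This is only a sketch of the standard Hamiltonian-neural-network setup; the paper's contribution is the generalization bound and the KAM-theoretic analysis, not a particular architecture or training recipe.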
