no code implementations • 4 Sep 2024 • Hartmut Maennel, Oliver T. Unke, Klaus-Robert Müller
When modeling physical properties of molecules with machine learning, it is desirable to incorporate $SO(3)$-covariance.
1 code implementation • 15 Jan 2024 • Oliver T. Unke, Hartmut Maennel
This work introduces E3x, a software package for building neural networks that are equivariant with respect to the Euclidean group $\mathrm{E}(3)$, consisting of translations, rotations, and reflections of three-dimensional space.
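To make the symmetry concrete, here is a minimal numpy sketch (it does not use the actual E3x API) of what equivariance with respect to rotations and translations means for a toy vector-valued function on a point cloud; the function and data are purely illustrative.

```python
# Illustrative check of rotation/translation equivariance for a toy
# vector-valued function on a 3D point cloud (NOT the E3x API).
import numpy as np

def toy_force_field(positions):
    """Per-point sum of displacement vectors to all other points.

    Translation-invariant (only differences enter) and rotation-equivariant
    (outputs rotate together with the inputs)."""
    diffs = positions[None, :, :] - positions[:, None, :]  # (N, N, 3)
    return diffs.sum(axis=1)                               # (N, 3)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))              # 5 points in 3D

# Random rotation (orthogonal, det = +1) and translation.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
rot = q * np.sign(np.linalg.det(q))
trans = rng.normal(size=3)

lhs = toy_force_field(x @ rot.T + trans)  # transform the inputs first
rhs = toy_force_field(x) @ rot.T          # transform the outputs instead
print(np.allclose(lhs, rhs))              # True: equivariant under rotations + translations
```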
no code implementations • 17 May 2022 • Oliver T. Unke, Martin Stöhr, Stefan Ganscha, Thomas Unterthiner, Hartmut Maennel, Sergii Kashubin, Daniel Ahlin, Michael Gastegger, Leonardo Medrano Sandonas, Alexandre Tkatchenko, Klaus-Robert Müller
Molecular dynamics (MD) simulations allow atomistic insights into chemical and biological processes.
no code implementations • 1 Sep 2021 • Ibrahim Alabdulmohsin, Hartmut Maennel, Daniel Keysers
Recent results suggest that reinitializing a subset of the parameters of a neural network during training can improve generalization, particularly for small training sets.
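As a rough illustration of the idea (not the paper's exact protocol), the following PyTorch sketch re-initializes only the final layer of a small MLP on a fixed schedule while training continues; the model, data, and schedule are placeholders.

```python
# Hedged sketch: periodically re-initialize a subset of the parameters
# (here, the last layer) during training. Schedule and model are toy choices.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 32)                 # toy data stands in for a small training set
y = torch.randint(0, 10, (256,))
reinit_every = 10                        # illustrative schedule

for epoch in range(50):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    if (epoch + 1) % reinit_every == 0:
        # Re-initialize only the final layer; earlier layers keep their weights.
        model[-1].reset_parameters()
```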
1 code implementation • NeurIPS 2021 • Robert J. N. Baldock, Hartmut Maennel, Behnam Neyshabur
Existing work on understanding deep learning often employs measures that compress all data-dependent information into a few numbers.
no code implementations • NeurIPS 2020 • Hartmut Maennel, Ibrahim Alabdulmohsin, Ilya Tolstikhin, Robert J. N. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers
We show how this alignment produces a positive transfer: networks pre-trained with random labels train faster downstream compared to training from scratch even after accounting for simple effects, such as weight scaling.
no code implementations • 31 Mar 2020 • Hartmut Maennel
We generalize this algorithm to the case of sparse probabilities $\beta(w|z)$, in which we only need to assume that the tree width of an "interaction graph" on the observations is limited.
no code implementations • 25 Sep 2019 • Hartmut Maennel, Alexandru Țifrea
A simple method for obtaining uncertainty estimates for neural network classifiers (e.g. for out-of-distribution detection) is to use an ensemble of independently trained networks and average the softmax outputs.
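A minimal numpy sketch of this baseline, with random logits standing in for the outputs of independently trained networks; the maximum averaged probability and the predictive entropy are two common ways to turn the averaged softmax into an uncertainty score.

```python
# Ensemble baseline: average softmax outputs and derive an uncertainty score.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logit_list):
    """logit_list: list of (batch, classes) arrays, one per ensemble member."""
    probs = np.mean([softmax(l) for l in logit_list], axis=0)
    confidence = probs.max(axis=-1)                       # high -> likely in-distribution
    entropy = -(probs * np.log(probs + 1e-12)).sum(-1)    # high -> uncertain
    return probs, confidence, entropy

# Toy usage: random logits stand in for 5 independently trained networks.
rng = np.random.default_rng(0)
logits = [rng.normal(size=(4, 10)) for _ in range(5)]
probs, conf, ent = ensemble_predict(logits)
```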
no code implementations • NeurIPS 2019 • Hugo Penedones, Carlos Riquelme, Damien Vincent, Hartmut Maennel, Timothy Mann, Andre Barreto, Sylvain Gelly, Gergely Neu
We consider the core reinforcement-learning problem of on-policy value function approximation from a batch of trajectory data, and focus on various issues of Temporal Difference (TD) learning and Monte Carlo (MC) policy evaluation.
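To fix ideas, the following toy sketch contrasts the two estimators on a synthetic batch with a linear value function: Monte Carlo regression fits the weights to full discounted returns, while semi-gradient TD(0) bootstraps its targets from the current estimate. The features, rewards, and learning rate are illustrative and not the paper's setup.

```python
# Toy contrast of Monte Carlo regression vs. semi-gradient TD(0) for
# on-policy evaluation from a batch of trajectories (linear value function).
import numpy as np

gamma = 0.9
rng = np.random.default_rng(0)

# A batch of trajectories: each is a list of (feature_vector, reward) pairs.
def random_trajectory(length=8, dim=4):
    return [(rng.normal(size=dim), rng.normal()) for _ in range(length)]

batch = [random_trajectory() for _ in range(50)]

# Monte Carlo regression: fit weights to the discounted returns G_t.
feats, returns = [], []
for traj in batch:
    g = 0.0
    for phi, r in reversed(traj):
        g = r + gamma * g
        feats.append(phi)
        returns.append(g)
feats, returns = np.array(feats), np.array(returns)
w_mc, *_ = np.linalg.lstsq(feats, returns, rcond=None)

# Semi-gradient TD(0): bootstrap targets from the current value estimate.
w_td = np.zeros(feats.shape[1])
alpha = 0.05
for _ in range(200):                       # sweep the batch repeatedly
    for traj in batch:
        for t, (phi, r) in enumerate(traj):
            v_next = traj[t + 1][0] @ w_td if t + 1 < len(traj) else 0.0
            td_error = r + gamma * v_next - phi @ w_td
            w_td += alpha * td_error * phi

print("MC weights:", w_mc)
print("TD weights:", w_td)  # with function approximation the two need not coincide
```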
no code implementations • 9 Jul 2018 • Hugo Penedones, Damien Vincent, Hartmut Maennel, Sylvain Gelly, Timothy Mann, Andre Barreto
Temporal-Difference learning (TD) [Sutton, 1988] with function approximation can converge to solutions that are worse than those obtained by Monte-Carlo regression, even in the simple case of on-policy evaluation.
no code implementations • 22 Mar 2018 • Hartmut Maennel, Olivier Bousquet, Sylvain Gelly
Deep neural networks are often trained in the over-parametrized regime (i.e. with far more parameters than training examples), and understanding why training converges to solutions that generalize remains an open problem.