Search Results for author: Hartmut Maennel

Found 10 papers, 2 papers with code

E3x: $\mathrm{E}(3)$-Equivariant Deep Learning Made Easy

1 code implementation • 15 Jan 2024 • Oliver T. Unke, Hartmut Maennel

This work introduces E3x, a software package for building neural networks that are equivariant with respect to the Euclidean group $\mathrm{E}(3)$, consisting of translations, rotations, and reflections of three-dimensional space.
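
As a minimal illustration of what $\mathrm{E}(3)$-equivariance (here, invariance) means, the following NumPy sketch checks that a descriptor built from pairwise distances is unchanged under a random rotation/reflection and translation. This is only an illustration of the property; it does not use the E3x API.

```python
# Minimal sketch (not the E3x API): numerically checking E(3) invariance
# of a feature built from pairwise distances, which is unchanged under
# translations, rotations, and reflections.
import numpy as np

def pairwise_distance_features(points):
    """Sorted pairwise distances: an E(3)-invariant descriptor."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return np.sort(dists[np.triu_indices(len(points), k=1)])

rng = np.random.default_rng(0)
points = rng.normal(size=(5, 3))

# Random orthogonal transform (rotation or reflection) via QR, plus a translation.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
transformed = points @ Q.T + rng.normal(size=3)

assert np.allclose(pairwise_distance_features(points),
                   pairwise_distance_features(transformed))
```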

The Impact of Reinitialization on Generalization in Convolutional Neural Networks

no code implementations • 1 Sep 2021 • Ibrahim Alabdulmohsin, Hartmut Maennel, Daniel Keysers

Recent results suggest that reinitializing a subset of the parameters of a neural network during training can improve generalization, particularly for small training sets.

Tasks: Generalization Bounds, Image Classification (+1 more)
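
The following sketch illustrates the general recipe of reinitializing a subset of parameters during training. The layer shapes, the schedule (`reinit_every`), and the choice of resetting the last layers are illustrative assumptions, not the paper's exact protocol, and the actual SGD step is elided.

```python
# Hedged sketch: periodically reinitialize the weights of the last k
# layers during training (schedule and layer choice are illustrative).
import numpy as np

def init_layer(shape, rng):
    return rng.normal(scale=1.0 / np.sqrt(shape[0]), size=shape)

rng = np.random.default_rng(0)
layer_shapes = [(32, 64), (64, 64), (64, 10)]
params = [init_layer(s, rng) for s in layer_shapes]

num_epochs, reinit_every, reinit_last_k = 30, 10, 1
for epoch in range(num_epochs):
    # ... one epoch of SGD on `params` would go here ...
    if (epoch + 1) % reinit_every == 0:
        # Re-draw fresh random weights for the last k layers only;
        # earlier layers keep what they have learned so far.
        for i in range(len(params) - reinit_last_k, len(params)):
            params[i] = init_layer(layer_shapes[i], rng)
```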

Deep Learning Through the Lens of Example Difficulty

1 code implementation • NeurIPS 2021 • Robert J. N. Baldock, Hartmut Maennel, Behnam Neyshabur

Existing work on understanding deep learning often employs measures that compress all data-dependent information into a few numbers.
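
The paper instead studies per-example measures; the sketch below gives a simplified reading of its prediction-depth idea: the earliest layer whose k-NN-probed representation already agrees with the model's final prediction. The leave-one-out probing and helper names are assumptions for illustration.

```python
# Hedged sketch of a per-example "prediction depth": the earliest layer
# whose representation, probed with k-NN, already yields the network's
# final prediction (a simplified reading of the paper's construction).
import numpy as np

def knn_predict(train_feats, train_labels, query, k=5):
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

def prediction_depth(layer_feats, labels, final_pred, idx, k=5):
    """layer_feats: list of (n_examples, dim) arrays, one per layer."""
    for depth, feats in enumerate(layer_feats):
        mask = np.arange(len(labels)) != idx  # leave the query example out
        if knn_predict(feats[mask], labels[mask], feats[idx], k) == final_pred:
            return depth
    return len(layer_feats)  # never resolved before the output layer
```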

What Do Neural Networks Learn When Trained With Random Labels?

no code implementations • NeurIPS 2020 • Hartmut Maennel, Ibrahim Alabdulmohsin, Ilya Tolstikhin, Robert J. N. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers

We show how this alignment (between the principal components of the first-layer network parameters and of the data) produces a positive transfer: networks pre-trained with random labels train faster downstream compared to training from scratch, even after accounting for simple effects such as weight scaling.

Tasks: Memorization
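
A toy check of this kind of alignment: measure how much of the first-layer weights' energy lies in the top principal subspace of the inputs. Both the random stand-in weights and the energy metric are illustrative assumptions.

```python
# Hedged sketch: compare the top eigenvectors of the input covariance
# with the subspace spanned by first-layer weights (a toy check, not
# the paper's full analysis).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32)) @ rng.normal(size=(32, 32))  # correlated inputs
W = rng.normal(size=(64, 32))  # stand-in for trained first-layer weights

# Top-5 eigenvectors of the data covariance.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, np.argsort(eigvals)[::-1][:5]]  # shape (32, 5)

# Fraction of weight "energy" lying in the top principal subspace;
# after training (even on random labels) this fraction tends to grow.
energy = np.linalg.norm(W @ top, "fro") ** 2 / np.linalg.norm(W, "fro") ** 2
print(f"weight energy in top-5 principal subspace: {energy:.3f}")
```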

Exact marginal inference in Latent Dirichlet Allocation

no code implementations • 31 Mar 2020 • Hartmut Maennel

We generalize this exact inference algorithm to the case of sparse probabilities $\beta(w|z)$, in which we only need to assume that the treewidth of an "interaction graph" on the observations is bounded.
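
A hedged reading of the construction in code: connect two observed words whenever some latent cause $z$ assigns both of them positive probability $\beta(w|z)$, and approximate the resulting graph's treewidth with networkx. The graph definition here is inferred from the abstract and may differ in detail from the paper's.

```python
# Hedged sketch of the "interaction graph" idea: edge between two
# observed words whenever some latent cause z gives both positive
# probability beta(w|z); then approximately bound the treewidth.
import numpy as np
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

rng = np.random.default_rng(0)
num_topics, vocab = 4, 12
# Sparse beta: most entries zeroed out.
beta = rng.random((num_topics, vocab)) * (rng.random((num_topics, vocab)) < 0.3)

observed = [0, 2, 5, 7, 9]  # indices of observed words
G = nx.Graph()
G.add_nodes_from(observed)
for i, w1 in enumerate(observed):
    for w2 in observed[i + 1:]:
        if np.any((beta[:, w1] > 0) & (beta[:, w2] > 0)):
            G.add_edge(w1, w2)

width, _ = treewidth_min_degree(G)
print(f"approximate treewidth of the interaction graph: {width}")
```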

Fourier networks for uncertainty estimates and out-of-distribution detection

no code implementations • 25 Sep 2019 • Hartmut Maennel, Alexandru Țifrea

A simple method for obtaining uncertainty estimates for neural network classifiers (e.g., for out-of-distribution detection) is to use an ensemble of independently trained networks and average the softmax outputs.

Tasks: Out-of-Distribution Detection
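
The baseline method described above is short to state in code. In the sketch, the ensemble logits are random stand-ins; confidence (max averaged softmax probability) or predictive entropy then serves as the uncertainty score.

```python
# Minimal sketch of the ensemble baseline: average the softmax outputs
# of independently trained networks and derive an uncertainty score.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# Stand-in logits from 5 independently trained networks on one input.
ensemble_logits = rng.normal(size=(5, 10))

mean_probs = softmax(ensemble_logits).mean(axis=0)
confidence = mean_probs.max()                               # high => in-distribution
entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))  # high => uncertain
print(f"confidence={confidence:.3f}, entropy={entropy:.3f}")
```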

Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates

no code implementations • NeurIPS 2019 • Hugo Penedones, Carlos Riquelme, Damien Vincent, Hartmut Maennel, Timothy Mann, Andre Barreto, Sylvain Gelly, Gergely Neu

We consider the core reinforcement-learning problem of on-policy value function approximation from a batch of trajectory data, and focus on various issues of Temporal Difference (TD) learning and Monte Carlo (MC) policy evaluation.
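
One hedged way to combine the two estimators: per visited state, use the TD(0) target unless a per-state uncertainty estimate for the bootstrapped value exceeds a threshold, in which case fall back to the Monte Carlo return. All inputs and the threshold below are illustrative, and this is a simplification of the paper's method.

```python
# Hedged sketch of the adaptive idea: switch from TD(0) targets to
# Monte Carlo returns wherever the bootstrap looks unreliable.
import numpy as np

def adaptive_targets(rewards, next_values, mc_returns, uncertainty,
                     gamma=0.99, threshold=0.5):
    """Pick a TD(0) or MC regression target per visited state."""
    td_targets = rewards + gamma * next_values
    use_mc = uncertainty > threshold  # don't trust the bootstrap here
    return np.where(use_mc, mc_returns, td_targets)

rewards = np.array([1.0, 0.0, 0.5])
next_values = np.array([0.8, 0.9, 0.2])   # current value estimates V(s')
mc_returns = np.array([1.5, 0.7, 0.6])    # empirical returns from the batch
uncertainty = np.array([0.1, 0.9, 0.3])   # per-state uncertainty estimates
print(adaptive_targets(rewards, next_values, mc_returns, uncertainty))
```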

Temporal Difference Learning with Neural Networks - Study of the Leakage Propagation Problem

no code implementations • 9 Jul 2018 • Hugo Penedones, Damien Vincent, Hartmut Maennel, Sylvain Gelly, Timothy Mann, Andre Barreto

Temporal-Difference learning (TD) [Sutton, 1988] with function approximation can converge to solutions that are worse than those obtained by Monte-Carlo regression, even in the simple case of on-policy evaluation.
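
A toy illustration of the leakage effect under assumed dynamics: state s0 is not aliased, but TD bootstraps through s1, which shares its feature with s2; the aliasing error at s1/s2 then leaks into V(s0), while Monte Carlo regression keeps V(s0) exact.

```python
# Hedged toy demo of "leakage" under linear function approximation:
# the aliasing error at s1/s2 propagates into V(s0) via bootstrapping.
import numpy as np

phi = {"s0": np.array([1.0, 0.0]),
       "s1": np.array([0.0, 1.0]),
       "s2": np.array([0.0, 1.0])}  # s1 and s2 are aliased

# Two episodes: s0 -r=0-> s1 -r=0-> end, and s2 -r=1-> end.
transitions = [("s0", 0.0, "s1"), ("s1", 0.0, None), ("s2", 1.0, None)]
returns = {"s0": 0.0, "s1": 0.0, "s2": 1.0}

# Monte Carlo: least-squares regression of returns on features.
X = np.stack([phi[s] for s in returns])
g = np.array(list(returns.values()))
w_mc = np.linalg.lstsq(X, g, rcond=None)[0]

# TD(0) with linear function approximation (gamma = 1).
w_td = np.zeros(2)
for _ in range(5000):
    for s, r, s_next in transitions:
        v_next = 0.0 if s_next is None else w_td @ phi[s_next]
        w_td += 0.05 * (r + v_next - w_td @ phi[s]) * phi[s]

print("V(s0): MC =", w_mc @ phi["s0"], " TD =", w_td @ phi["s0"])
# MC recovers V(s0) = 0 exactly; TD converges to ~0.5 via leakage.
```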

Gradient Descent Quantizes ReLU Network Features

no code implementations • 22 Mar 2018 • Hartmut Maennel, Olivier Bousquet, Sylvain Gelly

Deep neural networks are often trained in the over-parametrized regime (i.e., with far more parameters than training examples), and understanding why training converges to solutions that generalize remains an open problem.

Tasks: Quantization
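
A hedged toy demo of this quantization effect: train a one-hidden-layer ReLU network from a small initialization with plain gradient descent and inspect the surviving hidden-unit weight directions, which tend to cluster around a few angles. Data, architecture, and hyperparameters are illustrative assumptions.

```python
# Hedged toy demo: hidden-unit weight directions of a small-init ReLU
# network trained by gradient descent tend to concentrate on few angles.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.maximum(X[:, 0], 0) - np.maximum(-X[:, 1], 0)  # target uses 2 directions

H = 50
W = 0.01 * rng.normal(size=(H, 2))   # small initialization matters here
a = 0.01 * rng.normal(size=H)

lr = 0.05
for _ in range(20000):
    pre = X @ W.T                    # (n, H) pre-activations
    act = np.maximum(pre, 0)
    err = act @ a - y                # (n,) residuals
    grad_a = 2 * act.T @ err / len(X)
    grad_W = 2 * ((err[:, None] * (pre > 0)) * a).T @ X / len(X)
    a -= lr * grad_a
    W -= lr * grad_W

angles = np.arctan2(W[:, 1], W[:, 0])
active = np.abs(a) > 1e-3            # units that actually contribute
print(np.round(np.sort(angles[active]), 2))  # clusters around few angles
```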
