Search Results for author: Frank Schneider

Found 8 papers, 5 papers with code

Efficient Weight-Space Laplace-Gaussian Filtering and Smoothing for Sequential Deep Learning

no code implementations • 9 Oct 2024 • Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig

Efficiently learning a sequence of related tasks, such as in continual learning, poses a significant challenge for neural nets due to the delicate trade-off between catastrophic forgetting and loss of plasticity.

Bayesian Inference • Continual Learning
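As a toy illustration of the filtering idea in this abstract (not the paper's algorithm; every name and number below is made up for the sketch): treat the posterior over the weights after one task as the prior for the next. With a Gaussian likelihood over a scalar weight the recursion is exact; the paper's Laplace-Gaussian filter generalizes this to deep networks.

```python
import numpy as np

# Toy weight-space filtering: after each "task", approximate the posterior
# over a scalar weight w by a Gaussian and reuse it as the prior for the
# next task. Illustrative only -- not the paper's method.

def gaussian_update(prior_mean, prior_var, data, noise_var):
    """Posterior over w for the model y = w + noise, given observations y."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
mean, var = 0.0, 10.0             # broad initial prior over the weight
for true_w in [1.0, 1.5]:         # two sequential "tasks"
    data = true_w + 0.1 * rng.standard_normal(50)
    mean, var = gaussian_update(mean, var, data, noise_var=0.01)

print(mean, var)  # posterior mean sits between the two task optima
```

The posterior after task two compromises between both tasks, which is exactly the forgetting/plasticity trade-off the abstract refers to.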

Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures

no code implementations • NeurIPS 2023 • Runa Eschenhagen, Alexander Immer, Richard E. Turner, Frank Schneider, Philipp Hennig

In this work, we identify two different settings of linear weight-sharing layers which motivate two flavours of K-FAC: expand and reduce.

Graph Neural Network
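For orientation, a sketch of the core K-FAC approximation for a single linear layer without weight sharing: the Fisher block for the weight matrix is approximated by the Kronecker product of the input second moment and the output-gradient second moment. The expand/reduce flavours in the abstract concern how weight-sharing layers are mapped onto this per-layer form; the toy code below shows only the basic factorization, with made-up dimensions.

```python
import numpy as np

# K-FAC sketch for one linear layer: Fisher block F ≈ A ⊗ G, where
# A is the second moment of the layer inputs and G the second moment
# of the gradients w.r.t. the layer outputs.

rng = np.random.default_rng(0)
batch, d_in, d_out = 64, 3, 2
a = rng.standard_normal((batch, d_in))    # layer inputs
g = rng.standard_normal((batch, d_out))   # gradients w.r.t. layer outputs

A = a.T @ a / batch                       # (d_in, d_in) Kronecker factor
G = g.T @ g / batch                       # (d_out, d_out) Kronecker factor
F_kfac = np.kron(A, G)                    # approximate Fisher block

print(F_kfac.shape)  # (d_in * d_out, d_in * d_out)
```

Inverting the two small factors instead of the full block is what makes the approximation cheap: `np.kron(A, G)` has shape `(d_in*d_out, d_in*d_out)`, but its inverse is `np.kron(inv(A), inv(G))`.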

Accelerating Generalized Linear Models by Trading off Computation for Uncertainty

no code implementations • 31 Oct 2023 • Lukas Tatzel, Jonathan Wenger, Frank Schneider, Philipp Hennig

Bayesian Generalized Linear Models (GLMs) define a flexible probabilistic framework to model categorical, ordinal and continuous data, and are widely used in practice.
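A minimal textbook example of what a Bayesian GLM posterior looks like in practice (generic Laplace approximation for logistic regression, not the paper's iterative-solver method; data and hyperparameters are invented): find the MAP weights by Newton's method, then read a Gaussian posterior covariance off the inverse Hessian.

```python
import numpy as np

# Laplace approximation for a Bayesian logistic-regression GLM.
# Generic sketch only; the paper trades iterative-solver computation
# for uncertainty, which this dense Newton solve does not do.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
y = (sigmoid(X @ np.array([1.0, -2.0])) > rng.random(100)).astype(float)

prior_prec = 1.0                 # isotropic Gaussian prior N(0, I)
w = np.zeros(2)
for _ in range(20):              # Newton iterations to the MAP estimate
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + prior_prec * w
    H = X.T @ (X * (p * (1 - p))[:, None]) + prior_prec * np.eye(2)
    w = w - np.linalg.solve(H, grad)

cov = np.linalg.inv(H)           # Laplace posterior covariance
print(w, np.sqrt(np.diag(cov)))  # MAP weights and marginal stddevs
```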

Descending through a Crowded Valley — Benchmarking Deep Learning Optimizers

1 code implementation • 1 Jan 2021 • Robin Marc Schmidt, Frank Schneider, Philipp Hennig

While we cannot discern an optimization method that clearly dominates across all tested tasks, we identify a significantly reduced subset of specific algorithms and parameter choices that generally lead to competitive results in our experiments.

Benchmarking • Deep Learning

Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers

1 code implementation • 3 Jul 2020 • Robin M. Schmidt, Frank Schneider, Philipp Hennig

Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one.

Benchmarking • Deep Learning

DeepOBS: A Deep Learning Optimizer Benchmark Suite

1 code implementation • ICLR 2019 • Frank Schneider, Lukas Balles, Philipp Hennig

We suggest routines and benchmarks for stochastic optimization, with special focus on the unique aspects of deep learning, such as stochasticity, tunability and generalization.

Benchmarking • Deep Learning +2
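The benchmarking pattern DeepOBS standardizes, in miniature (the test problem and update rules below are toy stand-ins, not DeepOBS's actual suite or API): run several optimizers on the same stochastic problem with fixed seeds, then compare final losses.

```python
import numpy as np

# Miniature optimizer benchmark: SGD vs. heavy-ball momentum on a noisy
# quadratic. Everything here is a made-up stand-in for the kind of
# standardized test problem DeepOBS provides.

def noisy_quadratic_grad(w, rng):
    # gradient of 0.5 * ||w||^2, plus stochastic minibatch-style noise
    return w + 0.1 * rng.standard_normal(w.shape)

def run_sgd(lr, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    w = np.ones(10)
    for _ in range(steps):
        w -= lr * noisy_quadratic_grad(w, rng)
    return 0.5 * w @ w                 # final loss

def run_momentum(lr, beta=0.9, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    w, v = np.ones(10), np.zeros(10)
    for _ in range(steps):
        v = beta * v + noisy_quadratic_grad(w, rng)
        w -= lr * v
    return 0.5 * w @ w

results = {"sgd": run_sgd(lr=0.1), "momentum": run_momentum(lr=0.02)}
print(results)
```

Holding the problem, seed, and budget fixed across optimizers is the point: it isolates tunability and stochasticity, the aspects the abstract highlights.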
