Search Results for author: Benny Avelin

Found 7 papers, 2 papers with code

Sequential inductive prediction intervals

no code implementations • 8 Dec 2023 • Benny Avelin

In this paper we explore the concept of sequential inductive prediction intervals using theory from sequential testing.

Prediction Intervals
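
The paper's sequential-testing machinery is not reproduced here, but a minimal sketch of the classical inductive (split) prediction interval it builds on may help: hold out a calibration set and take an empirical quantile of the absolute residuals as the interval half-width. All names and parameters below are illustrative, not the paper's.

```python
import numpy as np

def inductive_half_width(residuals, alpha=0.1):
    """Classical inductive/split prediction interval half-width:
    the (1 - alpha) empirical quantile of held-out absolute residuals,
    with the usual (n + 1) finite-sample correction."""
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(np.abs(residuals))[min(k, n) - 1]

rng = np.random.default_rng(0)
# toy data y = 2x + noise: fit on one half, calibrate on the other
x = rng.uniform(-1, 1, 200)
y = 2 * x + rng.normal(0, 0.3, 200)
fit_x, fit_y, cal_x, cal_y = x[:100], y[:100], x[100:], y[100:]
slope = np.sum(fit_x * fit_y) / np.sum(fit_x ** 2)  # least squares through origin
q = inductive_half_width(cal_y - slope * cal_x, alpha=0.1)
x_new = 0.5
print(f"90% interval at x={x_new}: [{slope * x_new - q:.3f}, {slope * x_new + q:.3f}]")
```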

Exploring Singularities in point clouds with the graph Laplacian: An explicit approach

no code implementations • 31 Dec 2022 • Martin Andersson, Benny Avelin

We develop theory and methods that use the graph Laplacian to analyze the geometry of the underlying manifold of point clouds.
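
A minimal sketch of one standard graph-Laplacian construction on a point cloud: an epsilon-neighborhood graph with Gaussian edge weights, whose small eigenvalues reflect the geometry of the underlying manifold. The paper's exact normalization and analysis may differ; everything here is illustrative.

```python
import numpy as np

def graph_laplacian(points, eps=0.3):
    """Unnormalized graph Laplacian L = D - W, where W has Gaussian
    weights exp(-|x_i - x_j|^2 / eps), truncated at squared distance eps
    (one common construction among several)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / eps) * (d2 < eps)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(1)
# point cloud sampled from a circle, a smooth 1-d manifold
t = rng.uniform(0, 2 * np.pi, 300)
cloud = np.column_stack([np.cos(t), np.sin(t)])
L = graph_laplacian(cloud)
# the low end of the spectrum approximates the circle's Laplace-Beltrami spectrum
print(np.sort(np.linalg.eigvalsh(L))[:5])
```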

Concentration inequalities for leave-one-out cross validation

no code implementations • 4 Nov 2022 • Benny Avelin, Lauri Viitasaari

In this article we prove that estimator stability suffices to show that leave-one-out cross-validation is a sound procedure, by providing concentration bounds in a general framework.

Density Estimation, regression
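
The concentration bounds themselves are theoretical, but the procedure they certify is easy to state in code. A minimal sketch of leave-one-out cross-validation, applied to a deliberately stable (ridge-penalized) estimator of the kind such stability assumptions cover; names and constants are illustrative.

```python
import numpy as np

def loo_cv_risk(x, y, fit, predict):
    """Leave-one-out CV: refit with each point held out and average
    the squared error on the held-out point."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        model = fit(x[mask], y[mask])
        errs.append((predict(model, x[i]) - y[i]) ** 2)
    return np.mean(errs)

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 50)
y = x + rng.normal(0, 0.2, 50)
# a stable estimator: ridge-penalized slope through the origin
fit = lambda xs, ys: np.sum(xs * ys) / (np.sum(xs ** 2) + 1.0)
predict = lambda slope, xi: slope * xi
print(f"LOO-CV risk estimate: {loo_cv_risk(x, y, fit, predict):.4f}")
```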

Deep limits and cut-off phenomena for neural networks

no code implementations • 21 Apr 2021 • Benny Avelin, Anders Karlsson

We consider dynamical and geometrical aspects of deep learning.

Uncertainty-Aware Body Composition Analysis with Deep Regression Ensembles on UK Biobank MRI

1 code implementation • 18 Jan 2021 • Taro Langner, Fredrik K. Gustafsson, Benny Avelin, Robin Strand, Håkan Ahlström, Joel Kullberg

The results indicate that deep regression ensembles could ultimately provide automated, uncertainty-aware measurements of body composition for the more than 120,000 UK Biobank neck-to-knee body MRI scans to be acquired within the coming years.

regression, Uncertainty Quantification
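
A minimal sketch of the ensemble idea: fit several members on bootstrap resamples and read the across-member spread as an uncertainty estimate. Simple polynomial regressors stand in for the paper's deep networks, and all names here are illustrative (the repository linked from the entry has the actual implementation).

```python
import numpy as np

def ensemble_predict(x_train, y_train, x_new, members=10, rng=None):
    """Regression ensemble: each member is fit on a bootstrap resample;
    return the mean prediction and the across-member standard deviation
    as an uncertainty estimate."""
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(members):
        idx = rng.integers(0, len(x_train), len(x_train))  # bootstrap sample
        coeffs = np.polyfit(x_train[idx], y_train[idx], 3)  # stand-in for a deep net
        preds.append(np.polyval(coeffs, x_new))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)
x_new = np.array([0.0, 1.5, 3.0])  # 3.0 lies outside the training range
mean, std = ensemble_predict(x, y, x_new, rng=rng)
for xi, m, s in zip(x_new, mean, std):
    print(f"x={xi:4.1f}: prediction {m:6.3f} +/- {s:.3f}")  # spread grows off-distribution
```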

Approximation of BV functions by neural networks: A regularity theory approach

no code implementations • 15 Dec 2020 • Benny Avelin, Vesa Julin

We first study the convergence to equilibrium of the stochastic gradient flow associated with a quadratically penalized cost function.
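
The paper's results are analytic, but the object studied is concrete: a minimal sketch of an Euler-Maruyama simulation of the stochastic gradient flow for a quadratically penalized toy cost, with all parameters and the double-well cost chosen purely for illustration.

```python
import numpy as np

def stochastic_gradient_flow(grad_L, theta0, lam=0.1, sigma=1.0,
                             dt=1e-3, steps=20000, rng=None):
    """Euler-Maruyama discretization of the stochastic gradient flow
        d theta = -grad(L(theta) + lam * |theta|^2) dt + sigma dW,
    i.e. Langevin-type dynamics for the quadratically penalized cost."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        drift = -(grad_L(theta) + 2 * lam * theta)
        theta = theta + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=theta.shape)
    return theta

# toy double-well cost L(theta) = (theta^2 - 1)^2, gradient 4*theta*(theta^2 - 1)
grad_L = lambda th: 4 * th * (th ** 2 - 1)
finals = np.array([stochastic_gradient_flow(grad_L, [2.0],
                                            rng=np.random.default_rng(s))[0]
                   for s in range(20)])
# after a long run the samples scatter around the penalized wells near +/-1
print(f"terminal values: mean {finals.mean():.3f}, std {finals.std():.3f}")
```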

Neural ODEs as the Deep Limit of ResNets with constant weights

2 code implementations • arXiv 2019 • Benny Avelin, Kaj Nyström

In this paper we prove that, in the deep limit, stochastic gradient descent on a ResNet-type deep neural network, where each layer shares the same weight matrix, converges to stochastic gradient descent for a Neural ODE, and that the corresponding value/loss functions converge.
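
A minimal sketch of the forward-pass correspondence behind the result: a ResNet whose layers share one weight and scale the residual by 1/depth is exactly the forward Euler scheme for the Neural ODE dx/dt = f(x, theta), so its output converges as the depth grows. The residual block and all constants below are toy choices, not the paper's architecture.

```python
import numpy as np

def resnet_forward(x, theta, depth):
    """Shared-weight ResNet: x_{k+1} = x_k + (1/depth) * f(x_k, theta),
    which is forward Euler for dx/dt = f(x, theta) on [0, 1]."""
    f = lambda z: np.tanh(theta * z)  # toy residual block
    for _ in range(depth):
        x = x + f(x) / depth
    return x

x0, theta = 1.0, 0.8
for depth in (4, 16, 64, 256, 1024):
    print(f"depth {depth:5d}: output {resnet_forward(x0, theta, depth):.6f}")
# the outputs converge as depth grows, matching the Neural ODE limit
```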
