no code implementations • 4 Dec 2023 • Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
We consider estimation under scenarios where the signals of interest exhibit changes in their characteristics over time.
no code implementations • 1 Dec 2022 • Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
Recent successes of massively overparameterized models have inspired a new line of work investigating the underlying conditions that enable overparameterized models to generalize well.
no code implementations • 30 Nov 2022 • Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
We focus on the continual learning problem where the tasks arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks.
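As a hedged illustration of this sequential-task setting (a minimal sketch, not necessarily the authors' exact algorithm; the dimensions and task generator are assumptions), each incoming task can be fit with a minimum-norm update that interpolates the new data exactly while changing the current estimate as little as possible:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                           # overparameterized: d > samples per task
w_true = rng.standard_normal(d)

def fit_new_task(w_prev, X, y):
    # Minimum-norm update: the smallest change to w_prev that fits (X, y) exactly.
    r = y - X @ w_prev
    return w_prev + X.T @ np.linalg.solve(X @ X.T, r)

w = np.zeros(d)
tasks = []
for _ in range(3):
    X = rng.standard_normal((10, d))   # a new task arrives
    y = X @ w_true
    w = fit_new_task(w, X, y)
    tasks.append((X, y))

# The newest task is fit exactly; fits on earlier tasks may degrade,
# which is precisely the trade-off continual learning studies.
X_last, y_last = tasks[-1]
print(np.allclose(X_last @ w, y_last))
```

Because each update lies in the row space of the new task's data, it interpolates that task exactly; how much it perturbs earlier tasks depends on the overlap between tasks.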
no code implementations • 7 Mar 2022 • Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
Our results show that fake features can significantly improve the estimation performance, even though they are not correlated with the features in the underlying system.
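As a toy illustration of this phenomenon (the dimensions, noise level, and "fake" feature construction here are assumptions for the example, not the paper's setup), minimum-norm least squares near the interpolation threshold can benefit from appending random features that are uncorrelated with the true ones:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 30                          # n = p: the high-variance interpolation threshold
theta = rng.standard_normal(p) / np.sqrt(p)
X = rng.standard_normal((n, p))
y = X @ theta + 0.5 * rng.standard_normal(n)

# Plain minimum-norm least squares on the true features.
w_plain = np.linalg.pinv(X) @ y

# Append 100 "fake" features drawn independently of X (uncorrelated with the system).
F = rng.standard_normal((n, 100))
w_aug = np.linalg.pinv(np.hstack([X, F])) @ y

err_plain = np.sum((w_plain - theta) ** 2)
err_aug = np.sum((w_aug[:p] - theta) ** 2)   # error on the true-feature block only
print(err_plain, err_aug)                    # err_aug is typically much smaller here
```

The extra columns act as an implicit regularizer: the augmented minimum-norm solution still interpolates the data but spreads energy across the fake features, taming the near-singular behaviour at n = p.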
no code implementations • 25 May 2021 • Martin Hellkvist, Ayça Özçelikkale
By modelling the regressors of the underlying system as random variables, we analyze the average behaviour of the mean squared error (MSE).
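A minimal Monte Carlo sketch of this kind of average-case analysis (dimensions, noise level, and the i.i.d. Gaussian design are illustrative assumptions): drawing the regressor matrix at random and averaging the squared estimation error recovers the classical formula sigma^2 * p / (n - p - 1) for Gaussian regressors in the underparameterized regime:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, trials = 40, 20, 200      # underparameterized here: n > p
theta = rng.standard_normal(p)
sigma = 0.3

mse = 0.0
for _ in range(trials):
    X = rng.standard_normal((n, p))          # regressors modelled as random variables
    y = X @ theta + sigma * rng.standard_normal(n)
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    mse += np.sum((w - theta) ** 2) / trials

# Classical average-case result for i.i.d. Gaussian regressors:
#   E ||w - theta||^2 = sigma^2 * p / (n - p - 1)
theory = sigma**2 * p / (n - p - 1)
print(mse, theory)
```

The empirical average should land close to the closed-form value; the same averaging-over-designs viewpoint extends to the overparameterized regime with the minimum-norm solution.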
no code implementations • 22 Jan 2021 • Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
We provide high-probability bounds on the generalization error for both isotropic and correlated Gaussian data, as well as sub-Gaussian data.
no code implementations • 30 Apr 2020 • Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
We consider the setting where the unknowns are distributed over a network of nodes.
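As a hedged sketch of such a setting (the partition and the block-coordinate scheme are illustrative assumptions, not the paper's algorithm), the unknown vector can be partitioned across nodes, with each node repeatedly refitting only its own block of unknowns against the shared residual:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, n_nodes = 60, 30, 3
X = rng.standard_normal((n, d))
theta = rng.standard_normal(d)
y = X @ theta

blocks = np.array_split(np.arange(d), n_nodes)   # node k owns block k of the unknowns
w = np.zeros(d)
for _ in range(50):                # sweeps over the network
    for blk in blocks:             # each node updates only its own unknowns
        # Residual with this node's current contribution removed.
        r = y - X @ w + X[:, blk] @ w[blk]
        w[blk] = np.linalg.lstsq(X[:, blk], r, rcond=None)[0]

print(np.linalg.norm(X @ w - y))   # residual shrinks toward the least-squares optimum
```

This block Gauss-Seidel view captures the key feature of the setting: no node ever sees the full unknown vector, yet the iterates jointly drive down the global residual.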