no code implementations • 25 Mar 2024 • Borja Rodríguez-Gálvez, Omar Rivasplata, Ragnar Thobaben, Mikael Skoglund
Moreover, the paper derives a high-probability PAC-Bayes bound for losses with a bounded variance.
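For context, the general shape such results take can be sketched with the classical bounded-loss PAC-Bayes bound (a McAllester/Maurer-style statement, not the bounded-variance result of this paper): for a loss in $[0,1]$, a prior $P$, and a sample of size $n$, with probability at least $1-\delta$, simultaneously for all posteriors $Q$,

```latex
\mathbb{E}_{h \sim Q}\big[L(h)\big]
\;\le\;
\mathbb{E}_{h \sim Q}\big[\hat{L}(h)\big]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} ,
```

where $L$ and $\hat{L}$ denote the population and empirical risks. Bounded-variance losses require replacing the $\sqrt{\cdot/2n}$ complexity term with one driven by the variance proxy, which is the regime the paper addresses.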
no code implementations • 21 Jun 2023 • Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
Firstly, for losses with a bounded range, we recover a strengthened version of Catoni's bound that holds uniformly for all parameter values.
no code implementations • 27 Dec 2022 • Mahdi Haghifam, Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund, Daniel M. Roy, Gintare Karolina Dziugaite
To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization.
no code implementations • 3 Feb 2021 • Serkan Sarıtaş, Photios A. Stavrou, Ragnar Thobaben, Mikael Skoglund
Regarding the Nash equilibrium, we explicitly characterize affine equilibria for the single-stage setup and show that the optimal encoder (resp. decoder) …
Optimization and Control • Information Theory
no code implementations • NeurIPS 2021 • Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, Mikael Skoglund
This work presents several expected generalization error bounds based on the Wasserstein distance.
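A representative bound of this flavor (a hedged sketch, with assumed notation: $W$ the learned hypothesis, $Z_1,\dots,Z_n$ the training samples, $\mathbb{W}$ the Wasserstein distance, and the loss $L$-Lipschitz in the hypothesis) relates the expected generalization gap to how much each individual sample shifts the hypothesis distribution:

```latex
\overline{\mathrm{gen}}
\;\le\;
\frac{L}{n} \sum_{i=1}^{n}
\mathbb{E}\Big[\, \mathbb{W}\big(P_{W \mid Z_i},\, P_{W}\big) \Big] .
```

Because the Wasserstein distance is bounded above by (functions of) KL divergence via transport inequalities, bounds of this form can be tighter than their mutual-information counterparts.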
no code implementations • 21 Oct 2020 • Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, Mikael Skoglund
In this work, we unify several expected generalization error bounds based on random subsets using the framework developed by Hellström and Durisi [1].
1 code implementation • 17 Jun 2020 • Dong Liu, Ragnar Thobaben, Lars K. Rasmussen
We term our model Region-based Energy Neural Network (RENN).
2 code implementations • 11 Jun 2020 • Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
In this article, we propose a new variational approach to learn private and/or fair representations.
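A minimal sketch of the kind of objective such variational methods optimize. This is a generic rate–distortion-style loss with hypothetical names, not the paper's exact formulation: a Gaussian encoder's KL "rate" term limits what the representation can leak about a sensitive attribute, while a reconstruction "distortion" term preserves utility.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # KL( N(mu, exp(log_var)) || N(0, I) ), summed over latent dims,
    # averaged over the batch -- the standard variational rate term.
    return np.mean(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))

def variational_objective(x, x_hat, mu, log_var, beta=1.0):
    # Distortion: squared reconstruction error of the useful content.
    distortion = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # Rate: compressing the representation bounds the information it can
    # carry about a private attribute (privacy) or protected group (fairness).
    rate = gaussian_kl(mu, log_var)
    return distortion + beta * rate

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))           # toy batch of inputs
mu = rng.normal(scale=0.1, size=(4, 2))   # encoder means
log_var = np.zeros((4, 2))            # encoder log-variances
# With perfect reconstruction, only the (small) rate term remains.
loss = variational_objective(x, x, mu, log_var, beta=0.5)
```

The trade-off parameter `beta` plays the same role as a Lagrange multiplier: sweeping it traces out a privacy/fairness–utility curve.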
no code implementations • 14 May 2020 • Henrik Forssell, Ragnar Thobaben
However, with PLA-aware attack strategies, an attacker can maximize the probability of successfully impersonating the legitimate devices.
2 code implementations • 25 Nov 2019 • Borja Rodríguez Gálvez, Ragnar Thobaben, Mikael Skoglund
In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate $r$ for known IB curve shapes; and (iii) show we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes.
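Roughly, the construction can be sketched as follows (notation assumed: $X$ the input, $Y$ the target, $T$ the representation). The classical IB Lagrangian maximizes

```latex
\mathcal{L}_{\mathrm{IB}}(T;\beta) \;=\; I(T;Y) - \beta\, I(X;T),
```

and the family considered here replaces the compression term with a strictly increasing, strictly convex function $u$,

```latex
\mathcal{L}_{u}(T;\beta) \;=\; I(T;Y) - \beta\, u\!\big(I(X;T)\big),
```

which restores a one-to-one correspondence between the multiplier $\beta$ and the attained compression level $r = I(X;T)$ even in regimes (e.g. deterministic targets) where the IB curve is piecewise linear and the classical Lagrangian cannot explore it.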