Search Results for author: Borja Rodríguez-Gálvez

Found 12 papers, 2 papers with code

Upper Bounds on the Generalization Error of Private Algorithms for Discrete Data

no code implementations12 May 2020 Borja Rodríguez-Gálvez, Germán Bassi, Mikael Skoglund

In this work, we study the generalization capability of algorithms from an information-theoretic perspective.

A Variational Approach to Privacy and Fairness

2 code implementations11 Jun 2020 Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund

In this article, we propose a new variational approach to learn private and/or fair representations.

Fairness Representation Learning

On Random Subset Generalization Error Bounds and the Stochastic Gradient Langevin Dynamics Algorithm

no code implementations21 Oct 2020 Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, Mikael Skoglund

In this work, we unify several expected generalization error bounds based on random subsets using the framework developed by Hellström and Durisi [1].
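For readers unfamiliar with the Stochastic Gradient Langevin Dynamics algorithm named in the title, here is a minimal toy sketch of the SGLD update rule (a 1-D sampler of my own construction for illustration; it is not the paper's analysis, and the step size and temperature are arbitrary):

```python
import math
import random

def sgld_step(theta, grad, lr, temperature=1.0, rng=random):
    """One SGLD update: theta <- theta - lr * grad + sqrt(2 * lr * T) * N(0, 1).

    The Gaussian noise term is what distinguishes SGLD from plain
    (stochastic) gradient descent and makes the iterates a sampler."""
    noise = rng.gauss(0.0, 1.0)
    return theta - lr * grad + math.sqrt(2.0 * lr * temperature) * noise

# Toy target: a posterior proportional to exp(-theta^2 / 2), whose
# negative log-density has gradient equal to theta itself.
rng = random.Random(0)
theta = 5.0
samples = []
for _ in range(20000):
    theta = sgld_step(theta, grad=theta, lr=0.01, rng=rng)
    samples.append(theta)

# After burn-in, the chain should hover around the target's mean (0).
post_burnin = samples[2000:]
mean = sum(post_burnin) / len(post_burnin)
```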

Enforcing fairness in private federated learning via the modified method of differential multipliers

no code implementations17 Sep 2021 Borja Rodríguez-Gálvez, Filip Granqvist, Rogier Van Dalen, Matt Seigel

This paper introduces an algorithm to enforce group fairness in private federated learning, where users' data does not leave their devices.
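As background on the constrained-optimization technique named in the title, here is a toy sketch of the modified method of differential multipliers (gradient descent on the parameters, gradient ascent on the multiplier, plus a damping term). The objective and constraint are invented for illustration; the paper's fairness constraints and federated setting are not modeled here:

```python
def mmdm(lr=0.05, lr_lambda=0.05, damping=1.0, steps=2000):
    """Toy modified method of differential multipliers:
    minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
    The damping term damping * g * grad(g) stabilizes convergence
    onto the constraint surface."""
    x = y = lam = 0.0
    for _ in range(steps):
        g = x + y - 1.0                      # constraint violation
        # Gradients of the augmented objective f + lam*g + (damping/2)*g^2.
        gx = 2.0 * x + lam + damping * g
        gy = 2.0 * y + lam + damping * g
        x -= lr * gx                         # descend on the parameters
        y -= lr * gy
        lam += lr_lambda * g                 # ascend on the multiplier
    return x, y, lam

x, y, lam = mmdm()
```

At the optimum the constraint is met exactly at x = y = 0.5; in the paper the analogous constraint would encode a bound on the group-fairness violation.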

BIG-bench Machine Learning Fairness

An Information-Theoretic Analysis of Bayesian Reinforcement Learning

no code implementations18 Jul 2022 Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund

Building on the framework introduced by Xu and Raginsky [1] for supervised learning problems, we study the best achievable performance for model-based Bayesian reinforcement learning problems.

Reinforcement Learning (RL)

Limitations of Information-Theoretic Generalization Bounds for Gradient Descent Methods in Stochastic Convex Optimization

no code implementations27 Dec 2022 Mahdi Haghifam, Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund, Daniel M. Roy, Gintare Karolina Dziugaite

To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization.

Generalization Bounds

Thompson Sampling Regret Bounds for Contextual Bandits with sub-Gaussian rewards

no code implementations26 Apr 2023 Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund

In this work, we study the performance of the Thompson Sampling algorithm for Contextual Bandit problems, building on the framework introduced by Neu et al. and their concept of the lifted information ratio.
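For orientation, here is a minimal Beta-Bernoulli Thompson Sampling sketch. It is deliberately non-contextual with Bernoulli rewards, so it illustrates only the posterior-sampling idea, not the contextual, sub-Gaussian setting the paper analyzes; the arm means and horizon are invented:

```python
import random

def thompson_bernoulli(true_means, horizon, rng):
    """Beta-Bernoulli Thompson Sampling: each arm keeps a
    Beta(successes + 1, failures + 1) posterior; every round we draw
    one sample per posterior and pull the arm with the largest draw."""
    k = len(true_means)
    succ = [0] * k
    fail = [0] * k
    pulls = [0] * k
    for _ in range(horizon):
        draws = [rng.betavariate(succ[a] + 1, fail[a] + 1) for a in range(k)]
        arm = max(range(k), key=lambda a: draws[a])
        reward = 1 if rng.random() < true_means[arm] else 0
        succ[arm] += reward
        fail[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

rng = random.Random(0)
pulls = thompson_bernoulli([0.3, 0.7], horizon=2000, rng=rng)
```

Over the horizon the posterior for the better arm concentrates and it absorbs most of the pulls, which is the mechanism the regret bounds quantify.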

Multi-Armed Bandits Thompson Sampling

More PAC-Bayes bounds: From bounded losses, to losses with general tail behaviors, to anytime-validity

no code implementations21 Jun 2023 Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund

Firstly, for losses with a bounded range, we recover a strengthened version of Catoni's bound that holds uniformly for all parameter values.

The Role of Entropy and Reconstruction in Multi-View Self-Supervised Learning

1 code implementation20 Jul 2023 Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella

We consider a different lower bound on the MI consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens.

Self-Supervised Learning
