no code implementations • 22 Feb 2024 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
no code implementations • 20 Jan 2022 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
no code implementations • 5 Oct 2021 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
no code implementations • 1 Jan 2021 • Natalia Martinez, Martin Bertran, Afroditi Papadaki, Miguel R. D. Rodrigues, Guillermo Sapiro
With the wide adoption of machine learning algorithms across various application domains, there is a growing interest in the fairness properties of such algorithms.
no code implementations • NeurIPS 2020 • Martin Bertran, Natalia Martinez, Mariano Phielipp, Guillermo Sapiro
Agents trained via deep reinforcement learning (RL) routinely fail to generalize to unseen environments, even when these share the same underlying dynamics as the training levels.
1 code implementation • ICML 2020 • Natalia Martinez, Martin Bertran, Guillermo Sapiro
In this work, we formulate and formally characterize group fairness as a multi-objective optimization problem, in which each sensitive group's risk is a separate objective.
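The multi-objective view above can be illustrated with a minimax toy problem: minimize the worst group's risk rather than the population average. The sketch below is only a hedged illustration under assumed data (1-D mean estimation with two unequally sized synthetic groups), not the paper's full Pareto-front algorithm; the subgradient-descent loop and all variable names are this sketch's own.

```python
import numpy as np

# Two synthetic sensitive groups of very different sizes; standard ERM on the
# pooled data is dominated by the majority group.
rng = np.random.default_rng(0)
x_a = rng.normal(0.0, 1.0, 900)   # majority group, mean 0
x_b = rng.normal(2.0, 1.0, 100)   # minority group, mean 2

def group_risks(theta):
    """Per-group mean squared error: one objective per sensitive group."""
    return np.array([np.mean((x_a - theta) ** 2),
                     np.mean((x_b - theta) ** 2)])

data = [x_a, x_b]
theta = 0.0
for _ in range(2000):
    worst = int(np.argmax(group_risks(theta)))   # active (worst-off) objective
    grad = -2.0 * np.mean(data[worst] - theta)   # d/dtheta of that group's MSE
    theta -= 0.01 * grad                         # subgradient step on max-risk

erm_theta = float(np.mean(np.concatenate(data)))  # pooled average-risk solution
```

After training, `theta` sits near the point where both group risks are equal (about 1.0 here), while the pooled ERM solution `erm_theta` stays near the majority mean and leaves the minority group with a much larger risk.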
1 code implementation • 2 Nov 2020 • Martin Bertran, Natalia Martinez, Mariano Phielipp, Guillermo Sapiro
Agents trained via deep reinforcement learning (RL) routinely fail to generalize to unseen environments, even when these share the same underlying dynamics as the training levels.
no code implementations • 16 Nov 2019 • Natalia Martinez, Martin Bertran, Guillermo Sapiro
Common fairness definitions in machine learning focus on balancing notions of disparity and utility.
no code implementations • 25 Sep 2019 • Natalia Martinez, Martin Bertran, Guillermo Sapiro
Common fairness definitions in machine learning focus on balancing various notions of disparity and utility.
no code implementations • ICLR 2019 • Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro
We study space-preserving transformations, in which the utility provider can run the same algorithm on both original and sanitized data; this critical and novel attribute helps service providers accommodate varying privacy requirements with a single set of utility algorithms.
1 code implementation • 14 Feb 2019 • Natalia Martinez, Martin Bertran, Guillermo Sapiro, Hau-Tieng Wu
One way to avoid these constraints is to use infrared cameras, which allow monitoring of the instantaneous heart rate (iHR) under low-light conditions.
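Once a pulse signal has been recovered from infrared face video, the iHR follows from the intervals between successive peaks. The sketch below is a hedged illustration on a synthetic signal, not the paper's pipeline: a real system would first extract and bandpass-filter the photoplethysmographic trace from the video, steps omitted here.

```python
import numpy as np

fs = 30.0                          # assumed camera frame rate (Hz)
t = np.arange(0, 10, 1 / fs)       # 10 seconds of samples
ppg = np.sin(2 * np.pi * 1.2 * t)  # synthetic pulse wave at 1.2 Hz (72 bpm)

# Simple local-maximum peak detection (real pipelines would use a more
# robust detector on a filtered signal).
interior = ppg[1:-1]
is_peak = (interior > ppg[:-2]) & (interior > ppg[2:]) & (interior > 0.5)
peaks = np.where(is_peak)[0] + 1

intervals = np.diff(peaks) / fs    # seconds between successive beats
ihr = 60.0 / intervals             # instantaneous heart rate, in bpm
```

For this noiseless 1.2 Hz signal every beat-to-beat estimate comes out at 72 bpm; on real infrared video the interval sequence varies beat to beat, which is exactly the instantaneous information that a windowed frequency estimate would smooth away.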
no code implementations • 18 May 2018 • Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro
As such, users and utility providers should collaborate on data privacy, a paradigm that the privacy research community has not yet developed.