Search Results for author: Natalia Martinez

Found 12 papers, 3 papers with code

Learning to Collaborate for User-Controlled Privacy

no code implementations • 18 May 2018 • Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro

As such, users and utility providers should collaborate in data privacy, a paradigm that has not yet been developed in the privacy research community.

Non-contact photoplethysmogram and instantaneous heart rate estimation from infrared face video

1 code implementation • 14 Feb 2019 • Natalia Martinez, Martin Bertran, Guillermo Sapiro, Hau-Tieng Wu

One way to avoid these constraints is using infrared cameras, allowing the monitoring of iHR under low light conditions.

Heart rate estimation
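
Not the paper's pipeline, only a minimal sketch of the idea in the snippet above, assuming a face region of interest has already been cropped from each infrared frame: spatially average the frames into a raw photoplethysmogram-like signal, band-pass it to a plausible heart-rate band, and convert peak-to-peak intervals into instantaneous heart rate.

# Minimal sketch (not the paper's method): estimate instantaneous heart rate
# from a pre-cropped sequence of infrared face frames. Assumes `frames` is a
# (T, H, W) array over the face ROI, sampled at `fs` frames per second.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def instantaneous_heart_rate(frames, fs=30.0):
    # Spatially average each frame to get a raw PPG-like signal.
    ppg = frames.reshape(frames.shape[0], -1).mean(axis=1)
    ppg = ppg - ppg.mean()
    # Band-pass to the plausible heart-rate band (0.7-3.5 Hz ~ 42-210 bpm).
    b, a = butter(3, [0.7, 3.5], btype="band", fs=fs)
    ppg = filtfilt(b, a, ppg)
    # Peak-to-peak intervals give the instantaneous heart rate in bpm.
    peaks, _ = find_peaks(ppg, distance=int(fs / 3.5))
    ibi = np.diff(peaks) / fs      # inter-beat intervals (seconds)
    ihr = 60.0 / ibi               # instantaneous heart rate (bpm)
    return ppg, peaks, ihr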

Learning data-derived privacy preserving representations from information metrics

no code implementations • ICLR 2019 • Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro

We study space-preserving transformations where the utility provider can use the same algorithm on original and sanitized data, a critical and novel attribute to help service providers accommodate varying privacy requirements with a single set of utility algorithms.

Attribute • Face Recognition • +1
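
The following is an illustrative sketch of the "space-preserving" property mentioned in the snippet above, not the paper's architecture: a sanitizer that maps an image to an image of the same shape, so the utility provider's unchanged model accepts raw and sanitized inputs interchangeably. The modules, sizes, and data below are placeholder assumptions.

# Illustrative sketch only: a "space-preserving" sanitizer maps an image to an
# image of the same shape, so the provider's fixed utility model can be applied
# unchanged to either raw or sanitized data. Architecture and names are assumptions.
import torch
import torch.nn as nn

class Sanitizer(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):          # x: (B, C, H, W) -> same shape
        return self.net(x)

utility_model = nn.Sequential(nn.Flatten(), nn.LazyLinear(10))  # provider's fixed model
sanitizer = Sanitizer()

x = torch.rand(8, 3, 32, 32)
logits_raw = utility_model(x)                   # same algorithm on original data ...
logits_sanitized = utility_model(sanitizer(x))  # ... and on sanitized data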

Pareto Optimality in No-Harm Fairness

no code implementations • 25 Sep 2019 • Natalia Martinez, Martin Bertran, Guillermo Sapiro

Common fairness definitions in machine learning focus on balancing various notions of disparity and utility.

Fairness
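
As a minimal, hypothetical illustration of the two quantities such definitions balance, the helper below computes overall accuracy (utility) together with per-group error rates and their gap (one simple notion of disparity); it is not the paper's formulation.

# Minimal illustration (not the paper's formulation): the two quantities that
# common group-fairness definitions trade off -- overall utility (accuracy) and
# disparity (the gap between per-group error rates).
import numpy as np

def utility_and_disparity(y_true, y_pred, group):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accuracy = (y_true == y_pred).mean()
    group_errors = {g: (y_true[group == g] != y_pred[group == g]).mean()
                    for g in np.unique(group)}
    disparity = max(group_errors.values()) - min(group_errors.values())
    return accuracy, group_errors, disparity

acc, errs, gap = utility_and_disparity([1, 0, 1, 1], [1, 0, 0, 1], ["a", "a", "b", "b"])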

Fairness With Minimal Harm: A Pareto-Optimal Approach For Healthcare

no code implementations • 16 Nov 2019 • Natalia Martinez, Martin Bertran, Guillermo Sapiro

Common fairness definitions in machine learning focus on balancing notions of disparity and utility.

Fairness

Instance-based Generalization in Reinforcement Learning

1 code implementation • 2 Nov 2020 • Martin Bertran, Natalia Martinez, Mariano Phielipp, Guillermo Sapiro

Agents trained via deep reinforcement learning (RL) routinely fail to generalize to unseen environments, even when these share the same underlying dynamics as the training levels.

Generalization Bounds • reinforcement-learning • +1
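
The toy script below is not the paper's setup; it only illustrates the instance-based train/test split the snippet above refers to, with hypothetical one-step "instances" that share dynamics but differ in a hidden parameter. A policy that memorizes its training instances scores well on them and fails on held-out ones.

# Toy, self-contained illustration (not the paper's environments or agent):
# each "instance" fixes a hidden parameter of otherwise identical dynamics;
# the agent is trained on some instances and evaluated on held-out ones.
import numpy as np

rng = np.random.default_rng(0)

def make_instance(seed):
    # Hypothetical 1-step environment: reward 1 if the chosen arm matches the
    # instance's hidden best arm; all instances share the same "dynamics".
    best_arm = np.random.default_rng(seed).integers(0, 4)
    return lambda action: float(action == best_arm)

train_seeds = rng.choice(100, size=20, replace=False)
test_seeds = np.setdiff1d(np.arange(100), train_seeds)

# A memorizing "policy": look up the best arm per training instance, guess arm 0 otherwise.
policy = {int(s): int(np.random.default_rng(s).integers(0, 4)) for s in train_seeds}

def mean_return(seeds):
    return float(np.mean([make_instance(int(s))(policy.get(int(s), 0)) for s in seeds]))

print("train return:", mean_return(train_seeds))  # 1.0 on memorized instances
print("test return:", mean_return(test_seeds))    # ~0.25 on unseen instances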

Minimax Pareto Fairness: A Multi Objective Perspective

1 code implementation • ICML 2020 • Natalia Martinez, Martin Bertran, Guillermo Sapiro

In this work we formulate and formally characterize group fairness as a multi-objective optimization problem, where each sensitive group risk is a separate objective.

Classification • Fairness • +1
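
A hedged sketch in the spirit of the formulation above, not the authors' exact algorithm: treat each sensitive group's risk as a separate objective and take a gradient step on the currently worst group risk (a minimax update). The model, shapes, and data below are toy assumptions.

# Hedged sketch of a minimax group-risk update (not the paper's algorithm):
# each sensitive group's risk is a separate objective, and each step
# minimizes the risk of the currently worst-off group.
import torch
import torch.nn as nn

def minimax_group_step(model, optimizer, x, y, group, criterion=nn.CrossEntropyLoss()):
    optimizer.zero_grad()
    logits = model(x)
    group_risks = torch.stack([criterion(logits[group == g], y[group == g])
                               for g in torch.unique(group)])  # assumes every group is present
    worst_risk = group_risks.max()       # minimax: focus on the worst group risk
    worst_risk.backward()
    optimizer.step()
    return group_risks.detach()

# Toy usage with assumed shapes: 2-class problem, binary group attribute.
model = nn.Linear(5, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(64, 5)
y = torch.randint(0, 2, (64,))
group = torch.randint(0, 2, (64,))
risks = minimax_group_step(model, opt, x, y, group)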

Instance-based Generalization in Reinforcement Learning

no code implementations • NeurIPS 2020 • Martin Bertran, Natalia Martinez, Mariano Phielipp, Guillermo Sapiro

Agents trained via deep reinforcement learning (RL) routinely fail to generalize to unseen environments, even when these share the same underlying dynamics as the training levels.

Generalization Bounds • reinforcement-learning • +1

Blind Pareto Fairness and Subgroup Robustness

no code implementations • 1 Jan 2021 • Natalia Martinez, Martin Bertran, Afroditi Papadaki, Miguel R. D. Rodrigues, Guillermo Sapiro

With the wide adoption of machine learning algorithms across various application domains, there is a growing interest in the fairness properties of such algorithms.

Fairness

Federating for Learning Group Fair Models

no code implementations • 5 Oct 2021 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues

Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.

Fairness • Federated Learning
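
For readers unfamiliar with the paradigm described in the snippet above, here is a minimal FedAvg-style sketch of clients collaboratively training a shared model on a toy least-squares problem; it shows plain averaging only and does not reproduce the paper's group-fairness weighting. All names and data are illustrative.

# Minimal FedAvg-style sketch of the federated paradigm (plain averaging;
# the paper's fairness-aware aggregation is not reproduced here).
import numpy as np

def local_update(weights, x, y, lr=0.1, steps=10):
    # Each client runs a few steps of least-squares gradient descent locally.
    w = weights.copy()
    for _ in range(steps):
        grad = x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(5)]
global_w = np.zeros(3)
for _ in range(20):
    local_ws = [local_update(global_w, x, y) for x, y in clients]
    global_w = np.mean(local_ws, axis=0)   # server aggregates by simple averaging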

Minimax Demographic Group Fairness in Federated Learning

no code implementations • 20 Jan 2022 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues

Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.

Fairness • Federated Learning

Federated Fairness without Access to Sensitive Groups

no code implementations • 22 Feb 2024 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues

Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.

Fairness • Federated Learning
