Search Results for author: Natalie Dullerud

Found 8 papers, 4 papers with code

CaPC Learning: Confidential and Private Collaborative Learning

1 code implementation • ICLR 2021 • Christopher A. Choquette-Choo, Natalie Dullerud, Adam Dziedzic, Yunxiang Zhang, Somesh Jha, Nicolas Papernot, Xiao Wang

There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data.

Fairness • Federated Learning

Proof-of-Learning: Definitions and Practice

2 code implementations • 9 Mar 2021 • Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot

In particular, our analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform *at least* as much work as is needed for gradient descent itself.
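
For intuition, here is a minimal sketch of the checkpoint-and-replay idea behind a proof-of-learning. The least-squares model, checkpoint interval, tolerance, and the helper names `train_with_proof` / `verify_segment` are illustrative assumptions, not the paper's actual protocol: the prover logs intermediate weights and batch indices during gradient descent, and a verifier spot-checks a segment by replaying its updates.

```python
# Minimal, illustrative proof-of-learning sketch (NumPy only).  The model,
# checkpoint interval, and tolerance are assumptions, not the PoL protocol.
import numpy as np

def train_with_proof(X, y, lr=0.1, steps=100, ckpt_every=10, seed=0):
    """Run SGD on a least-squares objective while logging a proof:
    periodic weight checkpoints plus the batch indices of every step."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    checkpoints, batches = {0: w.copy()}, {}
    for t in range(1, steps + 1):
        idx = rng.choice(len(X), size=8, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w = w - lr * grad
        batches[t] = idx
        if t % ckpt_every == 0:
            checkpoints[t] = w.copy()
    return w, {"checkpoints": checkpoints, "batches": batches}

def verify_segment(X, y, proof, start, end, lr=0.1, tol=1e-8):
    """Replay the recorded updates between two checkpoints and check that
    they reproduce the claimed endpoint."""
    w = proof["checkpoints"][start].copy()
    for t in range(start + 1, end + 1):
        idx = proof["batches"][t]
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w = w - lr * grad
    return np.linalg.norm(w - proof["checkpoints"][end]) < tol

# The verifier spot-checks a few segments rather than re-running everything.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 5))
y = X @ np.arange(5.0) + 0.01 * rng.normal(size=256)
final_w, proof = train_with_proof(X, y)
print(verify_segment(X, y, proof, start=0, end=10))   # expected: True
print(verify_segment(X, y, proof, start=40, end=50))  # expected: True
```

In this toy setup, verifying a few segments is far cheaper than training, while forging a consistent trajectory without the data amounts to redoing the optimization, which is the asymmetry the paper's claim rests on.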

An Empirical Framework for Domain Generalization in Clinical Settings

1 code implementation • 20 Mar 2021 • Haoran Zhang, Natalie Dullerud, Laleh Seyyed-Kalantari, Quaid Morris, Shalmali Joshi, Marzyeh Ghassemi

In this work, we benchmark the performance of eight domain generalization methods on multi-site clinical time series and medical imaging data.
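
As a rough illustration of the multi-site evaluation pattern such a benchmark relies on, the sketch below trains on all but one clinical site and tests on the held-out site, cycling through sites. The leave-one-site-out split, logistic-regression model, AUROC metric, and synthetic site labels are assumptions made for the sketch, not the paper's configuration.

```python
# Illustrative leave-one-site-out evaluation for multi-site data.  The
# model, metric, and synthetic "sites" are assumptions for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def leave_one_site_out(X, y, sites):
    """Train on all sites except one and evaluate on the held-out site,
    cycling through every site as the out-of-distribution test set."""
    scores = {}
    for held_out in np.unique(sites):
        train = sites != held_out
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        probs = model.predict_proba(X[~train])[:, 1]
        scores[held_out] = roc_auc_score(y[~train], probs)
    return scores

# Synthetic stand-in for multi-site clinical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
sites = np.repeat(["site_a", "site_b", "site_c"], 200)
y = (X[:, 0] + 0.5 * rng.normal(size=600) > 0).astype(int)
print(leave_one_site_out(X, y, sites))  # per-site held-out AUROC
```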

Domain Generalization • Time Series • +1

Reading Race: AI Recognises Patient's Racial Identity In Medical Images

no code implementations • 21 Jul 2021 • Imon Banerjee, Ananth Reddy Bhimireddy, John L. Burns, Leo Anthony Celi, Li-Ching Chen, Ramon Correa, Natalie Dullerud, Marzyeh Ghassemi, Shih-Cheng Huang, Po-Chih Kuo, Matthew P Lungren, Lyle Palmer, Brandon J Price, Saptarshi Purkayastha, Ayis Pyrros, Luke Oakden-Rayner, Chima Okechukwu, Laleh Seyyed-Kalantari, Hari Trivedi, Ryan Wang, Zachary Zaiman, Haoran Zhang, Judy W Gichoya

Methods: Using private and public datasets we evaluate: A) performance quantification of deep learning models to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities, B) assessment of possible confounding anatomic and phenotype population features, such as disease distribution and body habitus as predictors of race, and C) investigation into the underlying mechanism by which AI models can recognize race.

Improving the Fairness of Chest X-ray Classifiers

1 code implementation • 23 Mar 2022 • Haoran Zhang, Natalie Dullerud, Karsten Roth, Lauren Oakden-Rayner, Stephen Robert Pfohl, Marzyeh Ghassemi

We also find that methods which achieve group fairness do so by worsening performance for all groups.
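
To make that trade-off concrete, here is a hedged sketch of one common group-fairness criterion, an equalized true-positive-rate gap; the group labels, decision threshold, and toy scores are illustrative assumptions, not the paper's metrics. The gap can be closed either by improving the worse-off group or, as the finding above warns, by degrading every group.

```python
# Illustrative group true-positive-rate gap.  Group labels, threshold, and
# the toy scores are assumptions for this sketch, not the paper's metrics.
import numpy as np

def group_tpr_gap(y_true, y_score, groups, threshold=0.5):
    """Per-group true-positive rates and the largest pairwise gap."""
    y_pred = (y_score >= threshold).astype(int)
    tprs = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        tprs[g] = y_pred[positives].mean()
    return tprs, max(tprs.values()) - min(tprs.values())

y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
groups = np.array(["a"] * 6 + ["b"] * 6)
# Unconstrained classifier: group "a" TPR 1.0, group "b" TPR 0.5, gap 0.5.
before = np.array([.9, .8, .7, .6, .2, .1, .9, .6, .3, .2, .2, .1])
# "Fair" classifier that equalizes TPR at 0.25 by hurting both groups.
after = np.array([.6, .3, .2, .1, .2, .1, .6, .3, .2, .1, .2, .1])
print(group_tpr_gap(y_true, before, groups))
print(group_tpr_gap(y_true, after, groups))
```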

Fairness

Proof-of-Learning is Currently More Broken Than You Think

no code implementations • 6 Aug 2022 • Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot

They argued empirically for the benefit of this approach by showing that spoofing, i.e. computing a proof for a stolen model, is as expensive as obtaining the proof honestly by training the model.

Learning Theory
