Search Results for author: Gregory Wornell

Found 8 papers, 0 papers with code

Tighter Expected Generalization Error Bounds via Convexity of Information Measures

no code implementations • 24 Feb 2022 • Gholamali Aminian, Yuheng Bu, Gregory Wornell, Miguel Rodrigues

Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature.
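The convexity being leveraged here is a standard property of these distances: total variation, like other f-divergences, is jointly convex in its two arguments. A minimal numeric sketch (not from the paper; distributions and the `tv` helper are illustrative assumptions) that checks joint convexity of total variation on random discrete distributions:

```python
import numpy as np

def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

rng = np.random.default_rng(0)

def rand_dist(k):
    # Random point on the probability simplex (illustrative, not the paper's setup).
    v = rng.random(k)
    return v / v.sum()

# Joint convexity:
# TV(lam*p1 + (1-lam)*p2, lam*q1 + (1-lam)*q2)
#     <= lam*TV(p1, q1) + (1-lam)*TV(p2, q2)
for _ in range(1000):
    p1, p2, q1, q2 = (rand_dist(5) for _ in range(4))
    lam = rng.random()
    lhs = tv(lam * p1 + (1 - lam) * p2, lam * q1 + (1 - lam) * q2)
    rhs = lam * tv(p1, q1) + (1 - lam) * tv(p2, q2)
    assert lhs <= rhs + 1e-12
```

Averaging before measuring distance can only shrink the distance, which is the mechanism by which bounds stated on the full sample can beat individual-sample counterparts.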

On the Benefits of Selectivity in Pseudo-Labeling for Unsupervised Multi-Source-Free Domain Adaptation

no code implementations • 1 Feb 2022 • Maohao Shen, Yuheng Bu, Gregory Wornell

Due to privacy, storage, and other constraints, there is a growing need for unsupervised domain adaptation techniques in machine learning that do not require access to the data used to train a collection of source models.

Unsupervised Domain Adaptation

An Exact Characterization of the Generalization Error for the Gibbs Algorithm

no code implementations • NeurIPS 2021 • Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel Rodrigues, Gregory Wornell

Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm.

Characterizing and Understanding the Generalization Error of Transfer Learning with Gibbs Algorithm

no code implementations • 2 Nov 2021 • Yuheng Bu, Gholamali Aminian, Laura Toni, Miguel Rodrigues, Gregory Wornell

We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM.
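For context, $\alpha$-weighted-ERM trains a single model on a convex combination of the target and source empirical risks; the notation below ($\hat{R}_T$, $\hat{R}_S$ for the target and source empirical risks over parameters $\theta$) is an assumed sketch, not taken from the snippet:

$$\hat{\theta} = \arg\min_{\theta}\; \alpha\,\hat{R}_T(\theta) + (1-\alpha)\,\hat{R}_S(\theta), \qquad \alpha \in [0,1],$$

whereas two-stage-ERM first fits (part of) the model on the source data and then adapts the remainder on the target data.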

Transfer Learning

A Maximal Correlation Approach to Imposing Fairness in Machine Learning

no code implementations • 30 Dec 2020 • Joshua Lee, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory Wornell, Leonid Karlinsky, Rogerio Feris

As machine learning algorithms grow in popularity and diversify to many industries, ethical and legal concerns regarding their fairness have become increasingly relevant.

BIG-bench Machine Learning • Fairness

Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks

no code implementations • NeurIPS 2019 • Joshua Lee, Prasanna Sattigeri, Gregory Wornell

However, for practical, privacy, or other reasons, in a variety of applications we may have no control over the individual source task training, nor access to the source training samples.

Transfer Learning

Co-regularized Alignment for Unsupervised Domain Adaptation

no code implementations • NeurIPS 2018 • Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, William T. Freeman, Gregory Wornell

Deep neural networks, trained with large amounts of labeled data, can fail to generalize well when tested on examples from a \emph{target domain} whose distribution differs from that of the training data, referred to as the \emph{source domain}.

Unsupervised Domain Adaptation
