Search Results for author: Gregory Wornell

Found 12 papers, 3 papers with code

Reliable Gradient-free and Likelihood-free Prompt Tuning

1 code implementation · 30 Apr 2023 · Maohao Shen, Soumya Ghosh, Prasanna Sattigeri, Subhro Das, Yuheng Bu, Gregory Wornell

Due to privacy or commercial constraints, large pre-trained language models (PLMs) are often offered as black-box APIs.

On the Generalization Error of Meta Learning for the Gibbs Algorithm

no code implementations · 27 Apr 2023 · Yuheng Bu, Harsha Vardhan Tetali, Gholamali Aminian, Miguel Rodrigues, Gregory Wornell

We analyze the generalization ability of joint-training meta learning algorithms via the Gibbs algorithm.

Meta-Learning

Post-hoc Uncertainty Learning using a Dirichlet Meta-Model

1 code implementation · 14 Dec 2022 · Maohao Shen, Yuheng Bu, Prasanna Sattigeri, Soumya Ghosh, Subhro Das, Gregory Wornell

Neural networks are known to be over-confident when the output label distribution is used directly to generate uncertainty measures.

Image Classification · Transfer Learning +1

Tighter Expected Generalization Error Bounds via Convexity of Information Measures

no code implementations · 24 Feb 2022 · Gholamali Aminian, Yuheng Bu, Gregory Wornell, Miguel Rodrigues

Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature.

On Balancing Bias and Variance in Unsupervised Multi-Source-Free Domain Adaptation

1 code implementation · 1 Feb 2022 · Maohao Shen, Yuheng Bu, Gregory Wornell

Due to privacy, storage, and other constraints, there is a growing need for unsupervised domain adaptation techniques in machine learning that do not require access to the data used to train a collection of source models.

Source-Free Domain Adaptation · Unsupervised Domain Adaptation

An Exact Characterization of the Generalization Error for the Gibbs Algorithm

no code implementations · NeurIPS 2021 · Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel Rodrigues, Gregory Wornell

Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm.

Characterizing and Understanding the Generalization Error of Transfer Learning with Gibbs Algorithm

no code implementations · 2 Nov 2021 · Yuheng Bu, Gholamali Aminian, Laura Toni, Miguel Rodrigues, Gregory Wornell

We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM.

Transfer Learning

A Maximal Correlation Approach to Imposing Fairness in Machine Learning

no code implementations · 30 Dec 2020 · Joshua Lee, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory Wornell, Leonid Karlinsky, Rogerio Feris

As machine learning algorithms grow in popularity and diversify to many industries, ethical and legal concerns regarding their fairness have become increasingly relevant.

BIG-bench Machine Learning · Fairness

Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks

no code implementations · NeurIPS 2019 · Joshua Lee, Prasanna Sattigeri, Gregory Wornell

However, for practical, privacy, or other reasons, in a variety of applications we may have no control over the individual source task training, nor access to source training samples.

Transfer Learning

Co-regularized Alignment for Unsupervised Domain Adaptation

no code implementations · NeurIPS 2018 · Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, William T. Freeman, Gregory Wornell

Deep neural networks, trained with a large amount of labeled data, can fail to generalize well when tested on examples from a \emph{target domain} whose distribution differs from that of the training data, referred to as the \emph{source domain}.

Unsupervised Domain Adaptation
