Search Results for author: Jorge Mendez

Found 3 papers, 2 papers with code

Improving Black-box Robustness with In-Context Rewriting

1 code implementation • 13 Feb 2024 • Kyle O'Brien, Nathan Ng, Isha Puri, Jorge Mendez, Hamid Palangi, Yoon Kim, Marzyeh Ghassemi, Thomas Hartvigsen

Most techniques for improving OOD robustness are not applicable to settings where the model is effectively a black box, such as when the weights are frozen, retraining is costly, or the model is leveraged via an API.
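
The title points to a test-time-augmentation style approach: an LLM rewrites each incoming input, and the frozen black-box model's predictions over the rewrites are aggregated. A minimal sketch of that general idea (not the authors' implementation), with hypothetical rewrite_with_llm and blackbox_classify helpers standing in for the LLM call and the frozen model:

    from collections import Counter

    def rewrite_with_llm(text, n_rewrites=3):
        """Hypothetical stand-in for the LLM rewriting step. In practice this would
        prompt an LLM (via any API) to paraphrase `text` toward the training
        distribution; here it returns trivial variants so the sketch runs."""
        return [text.lower(), text.strip(), " ".join(text.split())][:n_rewrites]

    def blackbox_classify(text):
        """Hypothetical frozen / API-only classifier (toy keyword rule)."""
        return "World" if "election" in text.lower() else "Sci/Tech"

    def robust_predict(text):
        """Classify the original input plus its rewrites and majority-vote the labels."""
        candidates = [text] + rewrite_with_llm(text)
        labels = [blackbox_classify(c) for c in candidates]
        return Counter(labels).most_common(1)[0][0]

    print(robust_predict("Chip makers unveil a new 2nm process node."))

Aggregating predictions over rewrites leaves the underlying model untouched, which is what makes this style of approach usable when only API access to the model is available.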

News Classification

Gap Minimization for Knowledge Sharing and Transfer

no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Gezheng Xu, Christian Gagné, Eric Eaton

Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
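
For context, measures like the $\mathcal{H}$-divergence typically appear in bounds of the classical domain-adaptation form (Ben-David et al.), where the target risk is controlled by the source risk plus a fixed distribution-level distance:

$$\epsilon_T(h) \;\le\; \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda,$$

with $\lambda$ the error of the best joint hypothesis. The snippet's contrast is that the performance gap instead enters the analysis as a data- and algorithm-dependent regularization term rather than such a fixed divergence.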

Representation Learning • Transfer Learning

Transfer Learning via Minimizing the Performance Gap Between Domains

1 code implementation • NeurIPS 2019 • Boyu Wang, Jorge Mendez, Mingbo Cai, Eric Eaton

We propose a new principle for transfer learning, based on a straightforward intuition: if two domains are similar to each other, the model trained on one domain should also perform well on the other domain, and vice versa.
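
A minimal sketch of that intuition (an illustration, not the paper's algorithm): train a simple model on each domain, evaluate it on the other, and read the symmetric drop in accuracy as an empirical performance gap between domains. The data and model below are placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    def performance_gap(Xa, ya, Xb, yb):
        """Symmetric cross-domain performance gap between two labeled domains.

        Trains a simple classifier on each domain and measures how much accuracy
        drops when it is evaluated on the other domain; a small gap indicates
        similar domains, a large gap indicates dissimilar ones."""
        gaps = []
        for (Xs, ys), (Xt, yt) in [((Xa, ya), (Xb, yb)), ((Xb, yb), (Xa, ya))]:
            clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
            in_domain = accuracy_score(ys, clf.predict(Xs))
            cross_domain = accuracy_score(yt, clf.predict(Xt))
            gaps.append(in_domain - cross_domain)
        return float(np.mean(gaps))

    # Toy example with two synthetic "domains" (placeholder data).
    rng = np.random.default_rng(0)
    Xa = rng.normal(0.0, 1.0, size=(200, 5)); ya = (Xa[:, 0] > 0).astype(int)
    Xb = rng.normal(0.5, 1.0, size=(200, 5)); yb = (Xb[:, 0] > 0.5).astype(int)
    print(f"Estimated performance gap: {performance_gap(Xa, ya, Xb, yb):.3f}")

A small gap suggests the two domains are nearly interchangeable for the model, which is exactly the situation in which sharing or transferring knowledge between them should help.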

Generalization Bounds • Transfer Learning