1 code implementation • 13 Feb 2024 • Kyle O'Brien, Nathan Ng, Isha Puri, Jorge Mendez, Hamid Palangi, Yoon Kim, Marzyeh Ghassemi, Thomas Hartvigsen
Most techniques for improving out-of-distribution (OOD) robustness are not applicable to settings where the model is effectively a black box, such as when the weights are frozen, retraining is costly, or the model is leveraged via an API.
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Gezheng Xu, Christian Gagné, Eric Eaton
Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
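To make the regularizer view concrete, here is a hedged sketch of the general shape such a gap-based bound can take; the symbols $\hat{\mathcal{R}}_S$, $\mathcal{R}_T$, $\Delta_{S\to T}$, and $\mathcal{A}$ are illustrative placeholders, not the paper's exact notation or theorem:

```latex
% Illustrative shape of a gap-based generalization bound (a sketch,
% not the paper's exact statement). \mathcal{R}_T(h) is the expected
% target risk, \hat{\mathcal{R}}_S(h) the empirical source risk of the
% learned hypothesis h, and \Delta_{S \to T}(h, \mathcal{A}) the
% performance gap, which depends on both the data and the learning
% algorithm \mathcal{A} (m source samples, confidence 1 - \delta).
\[
  \mathcal{R}_T(h) \;\le\; \hat{\mathcal{R}}_S(h)
    \;+\; \underbrace{\Delta_{S \to T}(h, \mathcal{A})}_{\text{performance gap}}
    \;+\; O\!\Big(\sqrt{\tfrac{\log(1/\delta)}{m}}\Big).
\]
```

The contrast with $\mathcal{H}$-divergence-style terms is that $\Delta_{S\to T}$ is evaluated at the hypothesis the algorithm actually returns, so driving it down constrains the learned model directly rather than bounding a worst case over the entire hypothesis class.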
1 code implementation • NeurIPS 2019 • Boyu Wang, Jorge Mendez, Mingbo Cai, Eric Eaton
We propose a new principle for transfer learning, based on a straightforward intuition: if two domains are similar to each other, the model trained on one domain should also perform well on the other domain, and vice versa.
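A minimal sketch of this intuition in code, assuming scikit-learn-style estimators; `domain_similarity` and its inputs are hypothetical names for illustration, not the paper's API or estimator:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def domain_similarity(X_a, y_a, X_b, y_b, model=None):
    """Score how similar two labeled domains are, following the stated
    intuition: train on one domain, test on the other, in both
    directions, and average the cross-domain accuracies.
    (Illustrative sketch, not the paper's exact measure.)"""
    model = model or LogisticRegression(max_iter=1000)
    # Direction A -> B: fit on domain A, evaluate on domain B.
    acc_ab = clone(model).fit(X_a, y_a).score(X_b, y_b)
    # Direction B -> A: fit on domain B, evaluate on domain A.
    acc_ba = clone(model).fit(X_b, y_b).score(X_a, y_a)
    # Similar domains should yield high accuracy in *both* directions.
    return 0.5 * (acc_ab + acc_ba)

# Usage: two synthetic domains with a small mean shift score near 1.0;
# dissimilar domains would score closer to chance.
rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 1.0, (200, 5)); y_a = (X_a[:, 0] > 0).astype(int)
X_b = rng.normal(0.2, 1.0, (200, 5)); y_b = (X_b[:, 0] > 0).astype(int)
print(domain_similarity(X_a, y_a, X_b, y_b))
```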