no code implementations • 31 Dec 2022 • Lang Liu, Zaid Harchaoui
This paper revisits a fundamental problem in statistical inference from a non-asymptotic theoretical viewpoint: the construction of confidence sets.
1 code implementation • 30 Dec 2022 • Krishna Pillutla, Lang Liu, John Thickstun, Sean Welleck, Swabha Swayamdipta, Rowan Zellers, Sewoong Oh, Yejin Choi, Zaid Harchaoui
We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images.
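MAUVE summarizes a divergence frontier between the two distributions: for each mixture weight, one forms a mixture of the model and data distributions and records the two KL divergences to it. A minimal sketch of that frontier for discrete distributions (function names here are illustrative, not the MAUVE package's API):

```python
import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions (q > 0 wherever p > 0)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def divergence_frontier(p, q, lambdas):
    """Trace the divergence frontier between p and q: for each mixture
    weight lam, form r = lam*p + (1-lam)*q and record the pair
    (KL(q || r), KL(p || r)). MAUVE-style scores summarize this curve."""
    pts = []
    for lam in lambdas:
        r = lam * p + (1 - lam) * q
        pts.append((kl(q, r), kl(p, r)))
    return pts
```

When p and q coincide, every point on the frontier is (0, 0); the more the distributions differ, the further the curve moves from the origin.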
1 code implementation • 10 Dec 2022 • Ronak Mehta, Vincent Roulet, Krishna Pillutla, Lang Liu, Zaid Harchaoui
Spectral risk objectives – also called $L$-risks – allow learning systems to interpolate between optimizing average-case performance (as in empirical risk minimization) and worst-case performance on a task.
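A spectral risk is a weighted average of the sorted losses, with nondecreasing weights summing to one; uniform weights recover the empirical risk, while putting all weight on the largest loss recovers the worst case. A minimal sketch (the function name is illustrative, not from the paper's code):

```python
import numpy as np

def spectral_risk(losses, sigma):
    """Spectral risk: dot product of losses sorted in increasing order
    with a nondecreasing, nonnegative weight vector sigma summing to 1."""
    losses = np.sort(np.asarray(losses, dtype=float))
    return float(np.dot(np.asarray(sigma, dtype=float), losses))

losses = [1.0, 3.0, 2.0, 10.0]
avg = spectral_risk(losses, [0.25, 0.25, 0.25, 0.25])  # empirical risk: 4.0
worst = spectral_risk(losses, [0.0, 0.0, 0.0, 1.0])    # worst case: 10.0
cvar = spectral_risk(losses, [0.0, 0.0, 0.5, 0.5])     # CVaR at level 1/2: 6.5
```

Intermediate weight choices such as the CVaR (superquantile) weights above interpolate between the two extremes.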
1 code implementation • 8 Dec 2022 • Jillian Fisher, Lang Liu, Krishna Pillutla, Yejin Choi, Zaid Harchaoui
Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and in AI applications.
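The classical influence function measures how the fitted parameters move when one training point is infinitesimally upweighted: IF_i = -H^{-1} g_i, with H the loss Hessian and g_i the point's gradient. A sketch for least squares (not the paper's implementation; the function name is illustrative):

```python
import numpy as np

def influence_on_params(X, y, theta):
    """First-order influence of each training point on the least-squares
    parameters: IF_i = -H^{-1} g_i, where H is the Hessian of the mean
    squared-error loss and g_i the gradient contributed by point i."""
    n = X.shape[0]
    resid = X @ theta - y
    H = X.T @ X / n               # Hessian of (1/2n) * ||X theta - y||^2
    grads = X * resid[:, None]    # per-point gradients (1/2)(x^T theta - y)^2
    return -np.linalg.solve(H, grads.T).T   # one row per training point
```

At the minimizer the per-point gradients sum to zero (the normal equations), so the influence vectors sum to zero as well.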
no code implementations • 30 Apr 2022 • Lang Liu, Carlos Cinelli, Zaid Harchaoui
Orthogonal statistical learning and double machine learning have emerged as general frameworks for two-stage statistical prediction in the presence of a nuisance component.
no code implementations • 4 Feb 2022 • Lang Liu, Mahdi Milani Fard, Sen Zhao
We propose Distribution Embedding Networks (DEN) for classification with small data.
1 code implementation • 31 Dec 2021 • Lang Liu, Soumik Pal, Zaid Harchaoui
We introduce an independence criterion based on entropy regularized optimal transport.
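The main computational ingredient of such a criterion is the entropy-regularized optimal transport cost between empirical measures, computable by Sinkhorn iterations. A minimal log-domain Sinkhorn solver for uniform marginals (a sketch; function names are illustrative, and the paper's actual statistic compares transport quantities between the joint distribution and the product of its marginals):

```python
import numpy as np

def logsumexp(a, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.exp(a - m).sum(axis=axis))

def sinkhorn_cost(C, eps=0.1, n_iter=300):
    """Entropy-regularized OT cost between two uniform empirical measures
    with cost matrix C, via log-domain Sinkhorn updates of the potentials."""
    n, m = C.shape
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iter):
        f = -eps * logsumexp((g[None, :] - C) / eps - np.log(m), axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps - np.log(n), axis=0)
    P = np.exp((f[:, None] + g[None, :] - C) / eps) / (n * m)  # transport plan
    return float((P * C).sum())
```

As the regularization eps shrinks, the plan concentrates on the cheapest matching, so the cost of a perfectly matchable pair of point clouds approaches zero.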
1 code implementation • 27 Jun 2021 • Lang Liu, Joseph Salmon, Zaid Harchaoui
The widespread use of machine learning algorithms calls for automatic change detection algorithms to monitor their behavior over time.
1 code implementation • NeurIPS 2021 • Lang Liu, Krishna Pillutla, Sean Welleck, Sewoong Oh, Yejin Choi, Zaid Harchaoui
The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance.
no code implementations • 1 Jan 2021 • Lang Liu, Mahdi Milani Fard, Sen Zhao
We propose Distribution Embedding Network (DEN) for meta-learning, which is designed for applications where both the distribution and the number of features could vary across tasks.
no code implementations • 17 Nov 2020 • Zaid Harchaoui, Lang Liu, Soumik Pal
In this paper we instead consider the problem in which each matching is endowed with a Gibbs probability weight proportional to the exponential of its negative total cost.
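For tiny instances the Gibbs distribution over matchings can be enumerated directly: each permutation gets weight exp(-total cost), normalized over all n! matchings. A brute-force sketch (illustrative only; the paper studies this distribution, not this enumeration):

```python
import numpy as np
from itertools import permutations

def gibbs_matching_distribution(C):
    """Gibbs distribution over perfect matchings of a square cost matrix C:
    each permutation pi gets weight exp(-sum_i C[i, pi(i)]), normalized
    over all n! matchings. Brute force, so only feasible for tiny n."""
    n = C.shape[0]
    perms = list(permutations(range(n)))
    weights = np.array([np.exp(-sum(C[i, p[i]] for i in range(n)))
                        for p in perms])
    return perms, weights / weights.sum()
```

Cheap matchings dominate: with costs 0 on the diagonal and 10 off it, the identity matching receives essentially all the probability mass.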