no code implementations • 16 Oct 2022 • Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder
This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration.
1 code implementation • 28 Feb 2022 • Parikshit Gopalan, Nina Narodytska, Omer Reingold, Vatsal Sharan, Udi Wieder
Estimating the Kullback-Leibler (KL) divergence between two distributions given samples from them is well-studied in machine learning and information theory.
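The entry above names the quantity being estimated; as a hedged illustration (not the paper's estimator, which is not described here), a simple plug-in estimate of $KL(P \| Q) = \sum_x P(x) \log(P(x)/Q(x))$ from discrete samples looks like this:

```python
# Hypothetical illustration: a naive plug-in KL estimate for discrete samples.
# This only shows the quantity being estimated, not the paper's method.
from collections import Counter
from math import log

def plugin_kl(samples_p, samples_q, smoothing=1e-9):
    """Estimate KL(P || Q) from empirical frequencies of discrete samples."""
    p_counts = Counter(samples_p)
    q_counts = Counter(samples_q)
    n_p, n_q = len(samples_p), len(samples_q)
    kl = 0.0
    for x, c in p_counts.items():
        p_hat = c / n_p
        # Smooth the Q estimate to avoid log(0) on unseen symbols.
        q_hat = q_counts.get(x, 0) / n_q + smoothing
        kl += p_hat * log(p_hat / q_hat)
    return kl
```

Identical sample sets give an estimate near zero, and the estimate grows as the empirical frequencies diverge; plug-in estimators like this are known to behave poorly in high dimensions, which is part of what motivates more careful estimators.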
no code implementations • 11 Sep 2021 • Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, Udi Wieder
We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action.
no code implementations • 10 Mar 2021 • Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder
We significantly strengthen previous work that uses the MaxEntropy approach, which defines the importance weights based on a distribution $Q$ closest to $P$ that looks the same as $R$ on every set $C \in \mathcal{C}$, where $\mathcal{C}$ may be a huge collection of sets.
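The reweighting idea above can be sketched with iterative proportional fitting (raking), a standard way to compute max-entropy-style weights that make a sample from $P$ agree with target masses on each set $C$; this is an illustrative sketch under that assumption, not the paper's algorithm:

```python
# Hedged sketch: find per-sample weights so the reweighted P-sample matches
# given target masses on each set in a collection, via iterative proportional
# fitting (raking). Function name and interface are hypothetical.
def rake_weights(p_samples, r_mass, sets, iters=50):
    """p_samples: list of items drawn from P.
    r_mass: {set_name: target probability mass under R}.
    sets: {set_name: membership predicate}.
    Returns a weight per sample, summing to 1."""
    n = len(p_samples)
    w = [1.0 / n] * n
    for _ in range(iters):
        for name, member in sets.items():
            cur = sum(wi for wi, x in zip(w, p_samples) if member(x))
            if 0 < cur < 1:
                # Rescale inside and outside the set to hit the target mass.
                scale_in = r_mass[name] / cur
                scale_out = (1 - r_mass[name]) / (1 - cur)
                w = [wi * (scale_in if member(x) else scale_out)
                     for wi, x in zip(w, p_samples)]
    return w
```

For example, reweighting the integers 0..9 so that even numbers carry 0.7 of the mass takes a single pass; with many overlapping sets the loop cycles until the constraints are jointly satisfied.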
1 code implementation • NeurIPS 2019 • Parikshit Gopalan, Vatsal Sharan, Udi Wieder
We consider the problem of detecting anomalies in a large dataset.
no code implementations • NeurIPS 2018 • Vatsal Sharan, Parikshit Gopalan, Udi Wieder
We consider the problem of finding anomalies in high-dimensional data using popular PCA-based anomaly scores.
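A common PCA-based anomaly score of the flavor these entries study is the squared reconstruction error outside the top principal subspace; the sketch below is an illustration of that standard score, not the papers' exact method:

```python
# Hedged sketch: score each point by its squared residual after projecting
# onto the top-k principal components of the mean-centered data.
import numpy as np

def pca_anomaly_scores(X, k):
    """Return one score per row of X; larger means more anomalous."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    top = Vt[:k]                     # (k, d) top-k principal directions
    proj = Xc @ top.T @ top          # projection onto the top-k subspace
    resid = Xc - proj                # component PCA fails to explain
    return (resid ** 2).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ np.diag([5, 1, 1, 0.1, 0.1])
X[0] += 10 * rng.normal(size=5)      # plant one anomaly
scores = pca_anomaly_scores(X, k=2)
```

The planted point scores far above typical points because most of its offset falls outside the top-2 subspace; scores of this type can miss anomalies that lie inside the principal subspace, which is one of the subtleties such analyses address.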