2 code implementations • 11 Apr 2024 • Rishabh Ranjan, Saurabh Garg, Mrigank Raman, Carlos Guestrin, Zachary Chase Lipton
This phenomenon is especially prominent in high-noise settings.
no code implementations • NeurIPS 2023 • Saurabh Garg, Amrith Setlur, Zachary Chase Lipton, Sivaraman Balakrishnan, Virginia Smith, Aditi Raghunathan
Self-training and contrastive learning have emerged as leading techniques for incorporating unlabeled data, both under distribution shift (unsupervised domain adaptation) and when it is absent (semi-supervised learning).
no code implementations • 21 Sep 2022 • Audrey Huang, Liu Leqi, Zachary Chase Lipton, Kamyar Azizzadenesheli
To mitigate these problems, we incorporate model-based estimation to develop the first doubly robust (DR) estimator for the CDF of returns in MDPs.
1 code implementation • NeurIPS 2021 • Saurabh Garg, Yifan Wu, Alex Smola, Sivaraman Balakrishnan, Zachary Chase Lipton
Formally, this task is broken down into two subtasks: (i) Mixture Proportion Estimation (MPE), determining the fraction of positive examples in the unlabeled data; and (ii) PU-learning, learning the desired positive-versus-negative classifier given such an estimate.
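The two-subtask pipeline can be illustrated with a toy sketch. This is not the paper's estimator: it uses a synthetic 1-D Gaussian mixture, and, purely for illustration, treats the positive and negative class-conditional means as known so that the mixture proportion can be recovered by moment matching.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D PU setup (illustrative only; not the paper's method).
alpha_true = 0.3                      # fraction of positives in unlabeled data
mu_pos, mu_neg = 2.0, -2.0            # class-conditional means, assumed known here

pos = rng.normal(mu_pos, 1.0, 2000)   # labeled positive sample
n_u = 5000
is_pos = rng.random(n_u) < alpha_true
unl = np.where(is_pos,
               rng.normal(mu_pos, 1.0, n_u),
               rng.normal(mu_neg, 1.0, n_u))

# (i) Mixture Proportion Estimation via moment matching:
#     E[U] = alpha * mu_pos + (1 - alpha) * mu_neg
alpha_hat = (unl.mean() - mu_neg) / (pos.mean() - mu_neg)

# (ii) PU-learning: flag the top alpha_hat fraction of unlabeled points
#      (scored here simply by their value) as positive.
threshold = np.quantile(unl, 1 - alpha_hat)
pred_pos = unl > threshold

accuracy = np.mean(pred_pos == is_pos)
print(f"alpha_hat = {alpha_hat:.3f}, accuracy = {accuracy:.3f}")
```

The point of the sketch is the dependency between the subtasks: the classification step in (ii) needs the proportion estimate from (i) to decide how many unlabeled points to label positive.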
no code implementations • 27 Apr 2019 • Mohammad Taha Bahadori, Zachary Chase Lipton
We postulate that fine temporal detail, e.g., whether a series of blood tests is completed at once or in rapid succession, should not alter predictions based on this data.
1 code implementation • 8 Feb 2014 • Zachary Chase Lipton, Charles Elkan, Balakrishnan Narayanaswamy
As another special case, if the classifier is completely uninformative, then the optimal behavior is to classify all examples as positive.
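This special case is easy to verify numerically. In the sketch below (my own illustration, not code from the paper), scores are drawn independently of the labels, so the classifier is uninformative: at any threshold that predicts a fraction q of examples positive, precision stays at the base rate p while recall is q, giving F1 = 2pq/(p+q), which is maximized at q = 1, i.e., predicting all examples positive for an F1 of 2p/(1+p).

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.2                                 # base rate of positives
n = 100_000
y = (rng.random(n) < p).astype(int)
scores = rng.random(n)                  # uninformative: independent of y

def f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn)

# Sweep thresholds; threshold 0.0 predicts every example positive.
thresholds = np.linspace(0.0, 0.9, 10)
f1s = [f1(y, (scores >= t).astype(int)) for t in thresholds]

best = thresholds[int(np.argmax(f1s))]
print(f"best threshold = {best}, all-positive F1 = {f1s[0]:.3f}")
```

With p = 0.2, the all-positive strategy attains F1 ≈ 2(0.2)/1.2 ≈ 0.333, and every stricter threshold does worse.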