1 code implementation • NeurIPS 2021 • Saurabh Garg, Yifan Wu, Alex Smola, Sivaraman Balakrishnan, Zachary Chase Lipton
Formally, this task is broken down into two subtasks: (i) Mixture Proportion Estimation (MPE)---determining the fraction of positive examples in the unlabeled data; and (ii) PU-learning---given such an estimate, learning the desired positive-versus-negative classifier.
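Below is a minimal sketch of that two-subtask decomposition in the style of Elkan & Noto (2008); it is not the estimator proposed in this paper (BBE/CVIR). The arrays `X_pos` (labeled-positive features) and `X_unl` (unlabeled features) are hypothetical inputs.

```python
# Minimal PU-learning sketch: (i) MPE, then (ii) train the positive-vs-negative classifier.
# Elkan & Noto-style plug-in estimators, NOT the method introduced in this paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pu_pipeline(X_pos, X_unl):
    # Train a "non-traditional" classifier: labeled-positive (s=1) vs unlabeled (s=0).
    X = np.vstack([X_pos, X_unl])
    s = np.r_[np.ones(len(X_pos)), np.zeros(len(X_unl))]
    g = LogisticRegression(max_iter=1000).fit(X, s)

    # (i) MPE: c = P(s=1 | y=1) is estimated by the mean score on the labeled positives;
    # the fraction of positives in the unlabeled data then follows via Bayes' rule.
    c = g.predict_proba(X_pos)[:, 1].mean()
    u_scores = np.clip(g.predict_proba(X_unl)[:, 1], 1e-6, 1 - 1e-6)
    alpha_hat = np.mean((1 - c) / c * u_scores / (1 - u_scores))

    # (ii) PU-learning: p(y=1 | x) ≈ p(s=1 | x) / c, thresholded at 1/2.
    def classify(X_new):
        return (g.predict_proba(X_new)[:, 1] / c) >= 0.5

    return alpha_hat, classify
```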
no code implementations • 27 Apr 2019 • Mohammad Taha Bahadori, Zachary Chase Lipton
We postulate that fine temporal detail, e.g., whether a series of blood tests is completed at once or in rapid succession, should not alter predictions based on this data.
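A toy illustration of that invariance is sketched below, assuming hypothetical `(timestamp_in_minutes, test_name, value)` event tuples; this simple time-bucket canonicalization is only a stand-in, not the scheme developed in the paper.

```python
# Toy canonicalization: events falling in the same coarse time bucket are merged,
# so a panel of blood tests drawn at once or in rapid succession maps to the same input.
from collections import defaultdict

def coarsen(events, window=60):
    buckets = defaultdict(dict)
    for t, name, value in events:
        buckets[t // window][name] = value  # later values in a bucket overwrite earlier ones
    return [buckets[k] for k in sorted(buckets)]

# Both orderings yield the same coarsened sequence, so a model trained on
# coarsened inputs cannot distinguish them.
a = [(0, "wbc", 7.1), (3, "hgb", 13.0), (5, "plt", 250)]
b = [(0, "wbc", 7.1), (0, "hgb", 13.0), (0, "plt", 250)]
assert coarsen(a) == coarsen(b)
```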
1 code implementation • 8 Feb 2014 • Zachary Chase Lipton, Charles Elkan, Balakrishnan Narayanaswamy
As another special case, if the classifier is completely uninformative, then the optimal behavior is to classify all examples as positive.
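A small numerical illustration of this special case (not from the paper, and assuming scikit-learn; the base rate 0.3 and threshold grid are arbitrary) is sketched below: when scores carry no information about the labels, sweeping the decision threshold shows F1 is highest at threshold 0, i.e., when every example is labeled positive.

```python
# With uninformative scores, predicting all positive maximizes F1: precision equals the
# base rate b and recall is 1, giving F1 = 2b / (1 + b), while any stricter threshold
# lowers recall without improving precision in expectation.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y = rng.random(100_000) < 0.3       # true labels with base rate b = 0.3
scores = rng.random(100_000)        # uninformative scores, independent of y

for thresh in [0.0, 0.25, 0.5, 0.75]:
    pred = scores >= thresh
    print(f"threshold {thresh:.2f}: F1 = {f1_score(y, pred):.3f}")
# Predicting all positive gives F1 = 2b / (1 + b) ≈ 0.462 for b = 0.3.
```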