Search Results for author: Zachary C. Lipton

Found 82 papers, 32 papers with code

Mixture Proportion Estimation and PU Learning: A Modern Approach

1 code implementation NeurIPS 2021 Saurabh Garg, Yifan Wu, Alex Smola, Sivaraman Balakrishnan, Zachary C. Lipton

Formally, this task is broken down into two subtasks: (i) Mixture Proportion Estimation (MPE) -- determining the fraction of positive examples in the unlabeled data; and (ii) PU-learning -- given such an estimate, learning the desired positive-versus-negative classifier.
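Under the common irreducibility assumption, the mixture proportion in subtask (i) is identified as the infimum of the density ratio between the unlabeled and positive distributions. A toy numpy sketch of that classical min-ratio idea on synthetic Gaussians (this is not the estimator proposed in the paper; the data and all names here are illustrative):

```python
import numpy as np

# Synthetic PU setup: positives from N(2, 1); unlabeled data is a mixture
# with true proportion alpha of positives and the rest from N(-2, 1).
rng = np.random.default_rng(0)
alpha_true, n = 0.3, 50_000

pos = rng.normal(2.0, 1.0, n)
unl = np.where(rng.random(n) < alpha_true,
               rng.normal(2.0, 1.0, n),
               rng.normal(-2.0, 1.0, n))

# Histogram estimate of the ratio p_unlabeled / p_positive, restricted to
# bins where the positive density is well estimated; its minimum over x
# estimates the mixture proportion alpha.
bins = np.linspace(-6, 6, 49)
p_counts, _ = np.histogram(pos, bins)
u_counts, _ = np.histogram(unl, bins)
mask = p_counts >= 100
alpha_hat = (u_counts[mask] / p_counts[mask]).min()
print(alpha_hat)  # roughly alpha_true = 0.3
```

With finite samples the minimum is noisy and biased slightly downward, which is part of what makes practical MPE hard.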

Practical Benefits of Feature Feedback Under Distribution Shift

no code implementations 14 Oct 2021 Anurag Katakkar, Weiqin Wang, Clay H. Yoo, Zachary C. Lipton, Divyansh Kaushik

In attempts to develop sample-efficient algorithms, researchers have explored myriad mechanisms for collecting and exploiting feature feedback, auxiliary annotations provided for training (but not test) instances that highlight salient evidence.

Natural Language Inference Sentiment Analysis

Efficient Online Estimation of Causal Effects by Deciding What to Observe

1 code implementation NeurIPS 2021 Shantanu Gupta, Zachary C. Lipton, David Childers

Researchers often face data fusion problems, where multiple data sources are available, each capturing a distinct subset of variables.

Dive into Deep Learning

1 code implementation 21 Jun 2021 Aston Zhang, Zachary C. Lipton, Mu Li, Alexander J. Smola

This open-source book represents our attempt to make deep learning approachable, teaching readers the concepts, the context, and the code.

Correcting Exposure Bias for Link Recommendation

1 code implementation 13 Jun 2021 Shantanu Gupta, Hao Wang, Zachary C. Lipton, Yuyang Wang

Link prediction methods are frequently applied in recommender systems, e.g., to suggest citations for academic papers or friends in social networks.

Link Prediction Recommendation Systems

On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study

1 code implementation ACL 2021 Divyansh Kaushik, Douwe Kiela, Zachary C. Lipton, Wen-tau Yih

In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.

Question Answering

RATT: Leveraging Unlabeled Data to Guarantee Generalization

1 code implementation 1 May 2021 Saurabh Garg, Sivaraman Balakrishnan, J. Zico Kolter, Zachary C. Lipton

To assess generalization, machine learning scientists typically either (i) bound the generalization gap and then (after training) plug in the empirical risk to obtain a bound on the true risk; or (ii) validate empirically on holdout data.

Generalization Bounds

Off-Policy Risk Assessment in Contextual Bandits

no code implementations NeurIPS 2021 Audrey Huang, Liu Leqi, Zachary C. Lipton, Kamyar Azizzadenesheli

Even when unable to run experiments, practitioners can evaluate prospective policies, using previously logged data.

Multi-Armed Bandits

On the Convergence and Optimality of Policy Gradient for Markov Coherent Risk

no code implementations 4 Mar 2021 Audrey Huang, Liu Leqi, Zachary C. Lipton, Kamyar Azizzadenesheli

Because optimizing the coherent risk is difficult in Markov decision processes, recent work tends to focus on the Markov coherent risk (MCR), a time-consistent surrogate.

Parametric Complexity Bounds for Approximating PDEs with Neural Networks

no code implementations NeurIPS 2021 Tanya Marwah, Zachary C. Lipton, Andrej Risteski

Recent experiments have shown that deep networks can approximate solutions to high-dimensional PDEs, seemingly escaping the curse of dimensionality.

On Proximal Policy Optimization's Heavy-tailed Gradients

no code implementations 20 Feb 2021 Saurabh Garg, Joshua Zhanson, Emilio Parisotto, Adarsh Prasad, J. Zico Kolter, Zachary C. Lipton, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Pradeep Ravikumar

In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function.

Continuous Control

Evaluating Explanations: How much do explanations from the teacher aid students?

no code implementations 1 Dec 2020 Danish Pruthi, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen

While many methods purport to explain predictions by highlighting salient features, what precise aims these explanations serve and how to evaluate their utility are often unstated.

Decoding and Diversity in Machine Translation

no code implementations 26 Nov 2020 Nicholas Roberts, Davis Liang, Graham Neubig, Zachary C. Lipton

This makes human-level BLEU a misleading benchmark in that modern MT systems cannot approach human-level BLEU while simultaneously maintaining human-level translation diversity.

Machine Translation Translation

Rebounding Bandits for Modeling Satiation Effects

no code implementations NeurIPS 2021 Liu Leqi, Fatma Kilinc-Karzan, Zachary C. Lipton, Alan L. Montgomery

Psychological research shows that enjoyment of many goods is subject to satiation, with short-term satisfaction declining after repeated exposures to the same item.

Recommendation Systems

Fair Machine Learning Under Partial Compliance

no code implementations 7 Nov 2020 Jessica Dai, Sina Fazelpour, Zachary C. Lipton

If k% of employers were to voluntarily adopt a fairness-promoting intervention, should we expect k% progress (in aggregate) towards the benefits of universal adoption, or will the dynamics of partial compliance wash out the hoped-for benefits?


Weakly- and Semi-supervised Evidence Extraction

1 code implementation Findings of the Association for Computational Linguistics 2020 Danish Pruthi, Bhuwan Dhingra, Graham Neubig, Zachary C. Lipton

For many prediction tasks, stakeholders desire not only predictions but also supporting evidence that a human can use to verify a prediction's correctness.


Unsupervised Data Augmentation with Naive Augmentation and without Unlabeled Data

no code implementations EMNLP 2021 David Lowell, Brian E. Howard, Zachary C. Lipton, Byron C. Wallace

Unsupervised Data Augmentation (UDA) is a semi-supervised technique that applies a consistency loss to penalize differences between a model's predictions on (a) observed (unlabeled) examples; and (b) corresponding 'noised' examples produced via data augmentation.
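The consistency loss described above can be sketched in a few lines. This is a hedged stand-in for UDA's objective (here, KL divergence between predictions on an example and on its noised copy), with made-up logits in place of a real model's outputs:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_orig, logits_aug, eps=1e-12):
    """KL(p_orig || p_aug), averaged over the batch; penalizes the model
    for predicting differently on an example and its augmented copy."""
    p = softmax(logits_orig)
    q = softmax(logits_aug)
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

logits = np.array([[2.0, 0.5, -1.0]])
print(consistency_loss(logits, logits))          # ~0: identical predictions
print(consistency_loss(logits, logits + 0.5))    # ~0: uniform logit shift is a no-op
print(consistency_loss(logits, logits[:, ::-1])) # > 0: predictions disagree
```

In UDA this term is computed on unlabeled data and added to the ordinary supervised loss on the labeled examples.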

Data Augmentation Text Classification +1

On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment

1 code implementation EMNLP 2020 Zirui Wang, Zachary C. Lipton, Yulia Tsvetkov

Modern multilingual models are trained on concatenated text from multiple languages in hopes of conferring benefits to each (positive transfer), with the most pronounced benefits accruing to low-resource languages.


Explaining The Efficacy of Counterfactually Augmented Data

no code implementations ICLR 2021 Divyansh Kaushik, Amrith Setlur, Eduard Hovy, Zachary C. Lipton

In attempts to produce ML models less reliant on spurious patterns in NLP datasets, researchers have recently proposed curating counterfactually augmented data (CAD) via a human-in-the-loop process in which, given some documents and their (initial) labels, humans must revise the text to make a counterfactual label applicable.

Domain Generalization

Extracting Structured Data from Physician-Patient Conversations By Predicting Noteworthy Utterances

no code implementations 14 Jul 2020 Kundan Krishna, Amy Pavel, Benjamin Schloss, Jeffrey P. Bigham, Zachary C. Lipton

In this exploratory study, we describe a new dataset consisting of conversation transcripts, post-visit summaries, corresponding supporting evidence (in the transcript), and structured labels.

Uncertainty-Aware Lookahead Factor Models for Quantitative Investing

no code implementations 7 Jul 2020 Lakshay Chauhan, John Alberg, Zachary C. Lipton

On a periodic basis, publicly traded companies report fundamentals: financial data including revenue, earnings, and debt, among other items.

Predicting Mortality Risk in Viral and Unspecified Pneumonia to Assist Clinicians with COVID-19 ECMO Planning

1 code implementation 2 Jun 2020 Helen Zhou, Cheng Cheng, Zachary C. Lipton, George H. Chen, Jeremy C. Weiss

Finally, the PEER score is provided in the form of a nomogram for direct calculation of patient risk, and can be used to highlight at-risk patients among critical care patients eligible for ECMO.

Estimating Treatment Effects with Observed Confounders and Mediators

no code implementations 26 Mar 2020 Shantanu Gupta, Zachary C. Lipton, David Childers

We show that it strictly outperforms the backdoor and frontdoor estimators and that this improvement can be unbounded.

A Unified View of Label Shift Estimation

no code implementations NeurIPS 2020 Saurabh Garg, Yifan Wu, Sivaraman Balakrishnan, Zachary C. Lipton

Our contributions include (i) consistency conditions for MLLS, which include calibration of the classifier and a confusion matrix invertibility condition that BBSE also requires; (ii) a unified framework, casting BBSE as roughly equivalent to MLLS for a particular choice of calibration method; and (iii) a decomposition of MLLS's finite-sample error into terms reflecting miscalibration and estimation error.

Causal Inference With Selectively Deconfounded Data

no code implementations 25 Feb 2020 Kyra Gan, Andrew A. Li, Zachary C. Lipton, Sridhar Tayur

In this paper, we consider the benefit of incorporating a large confounded observational dataset (confounder unobserved) alongside a small deconfounded observational dataset (confounder revealed) when estimating the ATE.

Causal Inference

Algorithmic Fairness from a Non-ideal Perspective

no code implementations 8 Jan 2020 Sina Fazelpour, Zachary C. Lipton

Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions.


Game Design for Eliciting Distinguishable Behavior

no code implementations NeurIPS 2019 Fan Yang, Liu Leqi, Yifan Wu, Zachary C. Lipton, Pradeep Ravikumar, William W. Cohen, Tom Mitchell

The ability to infer latent psychological traits from human behavior is key to developing personalized human-interacting machine learning systems.

Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?

no code implementations 18 Oct 2019 Simran Kaur, Jeremy Cohen, Zachary C. Lipton

For a standard convolutional neural network, optimizing over the input pixels to maximize the score of some target class will generally produce a grainy-looking version of the original image.

Adversarial Robustness

Accelerating Deep Learning by Focusing on the Biggest Losers

1 code implementation 2 Oct 2019 Angela H. Jiang, Daniel L.-K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, Padmanabhan Pillai

This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration.
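A hedged toy of the prioritization idea: spend backward passes on examples whose loss ranks high among recently seen losses. This is a stand-in for the paper's CDF-based selection policy, not its exact implementation; the function name, history buffer, and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_for_backprop(batch_losses, history, power=2):
    """Return indices chosen for the backward pass: each example is kept
    with probability equal to its loss's percentile (raised to `power`)
    among recently observed per-example losses."""
    pct = np.searchsorted(np.sort(history), batch_losses) / len(history)
    keep = rng.random(len(batch_losses)) < pct ** power
    return np.flatnonzero(keep)

history = rng.exponential(1.0, 1000)     # recent per-example losses
batch = np.array([0.01, 0.5, 2.0, 6.0])  # easy ... hard examples
print(select_for_backprop(batch, history))
```

Easy (low-loss) examples are almost always skipped, so the expensive backward pass concentrates on the "biggest losers" of the title.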

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data

2 code implementations ICLR 2020 Divyansh Kaushik, Eduard Hovy, Zachary C. Lipton

While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are less sensitive to this signal.

Data Augmentation Natural Language Inference +1

Learning to Deceive with Attention-Based Explanations

3 code implementations ACL 2020 Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, Zachary C. Lipton

Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing.


Entity Projection via Machine Translation for Cross-Lingual NER

1 code implementation IJCNLP 2019 Alankar Jain, Bhargavi Paranjape, Zachary C. Lipton

Although over 100 languages are supported by strong off-the-shelf machine translation systems, only a subset of them possess large annotated corpora for named entity recognition.

Cross-Lingual NER Machine Translation +3

AmazonQA: A Review-Based Question Answering Task

1 code implementation 12 Aug 2019 Mansi Gupta, Nitish Kulkarni, Raghuveer Chanda, Anirudha Rayasam, Zachary C. Lipton

Observing that many questions can be answered based upon the available product reviews, we propose the task of review-based QA.

Information Retrieval Question Answering +1

Estimating brain age based on a healthy population with deep learning and structural MRI

no code implementations 1 Jul 2019 Xinyang Feng, Zachary C. Lipton, Jie Yang, Scott A. Small, Frank A. Provenzano

Numerous studies have established that estimated brain age, as derived from statistical models trained on healthy populations, constitutes a valuable biomarker that is predictive of cognitive decline and various neurological diseases.

Age Estimation

Learning Causal State Representations of Partially Observable Environments

no code implementations 25 Jun 2019 Amy Zhang, Zachary C. Lipton, Luis Pineda, Kamyar Azizzadenesheli, Anima Anandkumar, Laurent Itti, Joelle Pineau, Tommaso Furlanello

In this paper, we propose an algorithm to approximate causal states, which are the coarsest partition of the joint history of actions and observations in partially-observable Markov decision processes (POMDPs).

Causal Inference

Combating Adversarial Misspellings with Robust Word Recognition

3 code implementations ACL 2019 Danish Pruthi, Bhuwan Dhingra, Zachary C. Lipton

To combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier.
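As a hedged toy of this pipeline: the paper uses a learned word-recognition model (a semi-character RNN), but the shape of the defense can be shown with a dictionary corrector that snaps out-of-vocabulary tokens to the nearest vocabulary word by edit distance (the vocabulary, sentence, and function names here are illustrative):

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(sentence, vocab):
    """Snap each out-of-vocabulary token to its nearest vocabulary word,
    then pass the cleaned text to the downstream classifier."""
    out = []
    for tok in sentence.split():
        if tok in vocab:
            out.append(tok)
        else:
            out.append(min(vocab, key=lambda w: edit_distance(tok, w)))
    return " ".join(out)

vocab = {"this", "movie", "was", "great", "terrible"}
print(correct("this mvoie was graet", vocab))  # this movie was great
```

The downstream sentiment classifier then only ever sees in-vocabulary text, which blunts adversarial misspellings.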

Sentiment Analysis

Efficient candidate screening under multiple tests and implications for fairness

no code implementations 27 May 2019 Lee Cohen, Zachary C. Lipton, Yishay Mansour

We analyze the optimal employer policy both when the employer sets a fixed number of tests per candidate and when the employer can set a dynamic policy, assigning further tests adaptively based on results from the previous tests.


Embryo staging with weakly-supervised region selection and dynamically-decoded predictions

no code implementations 9 Apr 2019 Tingfung Lau, Nathan Ng, Julian Gingold, Nina Desai, Julian McAuley, Zachary C. Lipton

First, noting that in each image the embryo occupies a small subregion, we jointly train a region proposal network with the downstream classifier to isolate the embryo.

Region Proposal

Learning Robust Representations by Projecting Superficial Statistics Out

no code implementations ICLR 2019 Haohan Wang, Zexue He, Zachary C. Lipton, Eric P. Xing

We test our method on the battery of standard domain generalization data sets and, interestingly, achieve performance comparable to or better than other domain generalization methods that explicitly require samples from the target distribution for training.

Domain Generalization

What is the Effect of Importance Weighting in Deep Learning?

1 code implementation 8 Dec 2018 Jonathon Byrd, Zachary C. Lipton

Importance-weighted risk minimization is a key ingredient in many machine learning algorithms for causal inference, domain adaptation, class imbalance, and off-policy reinforcement learning.

Causal Inference Domain Adaptation +1

Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study

no code implementations EMNLP 2018 Aditya Siddhant, Zachary C. Lipton

This paper provides a large scale empirical study of deep active learning, addressing multiple tasks and, for each, multiple datasets, multiple models, and a full suite of acquisition functions.

Active Learning

Learning Noise-Invariant Representations for Robust Speech Recognition

no code implementations 17 Jul 2018 Davis Liang, Zhiheng Huang, Zachary C. Lipton

Despite rapid advances in speech recognition, current models remain brittle to superficial perturbations to their inputs.

Data Augmentation Representation Learning +1

Practical Obstacles to Deploying Active Learning

no code implementations IJCNLP 2019 David Lowell, Zachary C. Lipton, Byron C. Wallace

Active learning (AL) is a widely-used training strategy for maximizing predictive performance subject to a fixed annotation budget.

Active Learning Text Classification

Troubling Trends in Machine Learning Scholarship

no code implementations 9 Jul 2018 Zachary C. Lipton, Jacob Steinhardt

Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms.

Surprising Negative Results for Generative Adversarial Tree Search

3 code implementations ICLR 2019 Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Zachary C. Lipton, Animashree Anandkumar

We deploy this model and propose generative adversarial tree search (GATS), a deep RL algorithm that learns the environment model and implements Monte Carlo tree search (MCTS) on the learned model for planning.

Atari Games

Born Again Neural Networks

1 code implementation ICML 2018 Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar

Knowledge distillation (KD) consists of transferring knowledge from one machine learning model (the teacher) to another (the student).
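The distillation objective at the heart of born-again training can be sketched as matching the teacher's temperature-softened output distribution. A hedged numpy sketch (variable names and the made-up logits are illustrative, not the paper's code):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher_T || student_T), scaled by T^2 as is conventional,
    so gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.mean(np.sum(p * np.log(p / q), axis=-1)))

teacher = np.array([[4.0, 1.0, -2.0]])
print(distill_loss(teacher, teacher))           # 0: student matches teacher
print(distill_loss(np.zeros((1, 3)), teacher))  # > 0: uniform student penalized
```

In the born-again setting the student has the same architecture as the teacher, and the finding is that it can nonetheless surpass it.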

Knowledge Distillation

Correction by Projection: Denoising Images with Generative Adversarial Networks

no code implementations 12 Mar 2018 Subarna Tripathi, Zachary C. Lipton, Truong Q. Nguyen

In this paper, we propose to denoise corrupted images by finding the nearest point on the GAN manifold, recovering latent vectors by minimizing distances in image space.


Active Learning with Partial Feedback

1 code implementation ICLR 2019 Peiyun Hu, Zachary C. Lipton, Anima Anandkumar, Deva Ramanan

While many active learning papers assume that the learner can simply ask for a label and receive it, real annotation often presents a mismatch between the form of a label (say, one among many classes), and the form of an annotation (typically yes/no binary feedback).

Active Learning

Detecting and Correcting for Label Shift with Black Box Predictors

1 code implementation ICML 2018 Zachary C. Lipton, Yu-Xiang Wang, Alex Smola

Faced with distribution shift between training and test set, we wish to detect and quantify the shift, and to correct our classifiers without test set labels.
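The paper's black-box correction (BBSE) reduces to a linear solve: under label shift, the importance weights w(y) = q(y)/p(y) satisfy C w = mu, where C is the classifier's joint confusion matrix on source data and mu is its predicted-label marginal on the unlabeled target data. A numeric sketch with made-up numbers:

```python
import numpy as np

p_y = np.array([0.5, 0.5])          # source label marginal
M = np.array([[0.9, 0.2],           # M[i, j] = p(y_hat = i | y = j)
              [0.1, 0.8]])
C = M * p_y                         # joint confusion matrix C[i, j] = p(y_hat = i, y = j)

q_y = np.array([0.8, 0.2])          # (unknown) target label marginal
mu_target = M @ q_y                 # observable: predicted-label marginal on target

w = np.linalg.solve(C, mu_target)   # estimated importance weights q(y) / p(y)
print(w)                            # [1.6 0.4]
print(w * p_y)                      # recovers q_y: [0.8 0.2]
```

With the weights in hand, the classifier can be corrected by importance-weighted retraining, all without target labels; the solve requires C to be invertible, which is the condition the paper emphasizes.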

Medical Diagnosis

Learning From Noisy Singly-labeled Data

1 code implementation ICLR 2018 Ashish Khetan, Zachary C. Lipton, Anima Anandkumar

We propose a new algorithm for jointly modeling labels and worker quality from noisy crowd-sourced data.

The Doctor Just Won't Accept That!

no code implementations 20 Nov 2017 Zachary C. Lipton

For the field of interpretable machine learning to advance, we must ask the following questions: What precisely won't various stakeholders accept?

Interpretable Machine Learning

Does mitigating ML's impact disparity require treatment disparity?

no code implementations NeurIPS 2018 Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley

Following related work in law and policy, two notions of disparity have come to shape the study of fairness in algorithmic decision-making.

Decision Making Fairness

Improving Factor-Based Quantitative Investing by Forecasting Company Fundamentals

no code implementations 13 Nov 2017 John Alberg, Zachary C. Lipton

Academic research has identified some factors, i.e., computed features of the reported data, that are known through retrospective analysis to outperform the market average.

Estimating Reactions and Recommending Products with Generative Models of Reviews

no code implementations IJCNLP 2017 Jianmo Ni, Zachary C. Lipton, Sharad Vikram, Julian McAuley

Natural language approaches that model information like product reviews have proved to be incredibly useful in improving the performance of such methods, as reviews provide valuable auxiliary information that can be used to better estimate latent user preferences and item properties.

Collaborative Filtering Language Modelling +2

Tensor Regression Networks

no code implementations 26 Jul 2017 Jean Kossaifi, Zachary C. Lipton, Arinbjorn Kolbeinsson, Aran Khanna, Tommaso Furlanello, Anima Anandkumar

First, we introduce Tensor Contraction Layers (TCLs) that reduce the dimensionality of their input while preserving their multilinear structure using tensor contraction.

Deep Active Learning for Named Entity Recognition

1 code implementation WS 2017 Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, Animashree Anandkumar

In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning.

Active Learning Named Entity Recognition +1

Tensor Contraction Layers for Parsimonious Deep Nets

no code implementations 1 Jun 2017 Jean Kossaifi, Aran Khanna, Zachary C. Lipton, Tommaso Furlanello, Anima Anandkumar

Specifically, we propose the Tensor Contraction Layer (TCL), the first attempt to incorporate tensor contractions as end-to-end trainable neural network layers.

Model Compression

Dance Dance Convolution

1 code implementation ICML 2017 Chris Donahue, Zachary C. Lipton, Julian McAuley

For the step placement task, we combine recurrent and convolutional neural networks to ingest spectrograms of low-level audio features to predict steps, conditioned on chart difficulty.

Predicting Surgery Duration with Neural Heteroscedastic Regression

no code implementations 17 Feb 2017 Nathan Ng, Rodney A Gabriel, Julian McAuley, Charles Elkan, Zachary C. Lipton

Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking.

Precise Recovery of Latent Vectors from Generative Adversarial Networks

1 code implementation 15 Feb 2017 Zachary C. Lipton, Subarna Tripathi

Generative adversarial networks (GANs) transform latent vectors into visually plausible images.
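Recovering the latent vector behind an image amounts to gradient descent on z to minimize the reconstruction error ||G(z) - x||^2. A hedged toy where the "generator" is a fixed linear map so convergence is easy to see (a real GAN generator is a deep network; the setup and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(8, 2))       # stand-in generator: G(z) = A z
z_true = rng.normal(size=2)
x = A @ z_true                    # observed "image"

z = np.zeros(2)
lr = 0.02
for _ in range(5000):
    grad = 2 * A.T @ (A @ z - x)  # gradient of ||A z - x||^2 w.r.t. z
    z = z - lr * grad

print(z, z_true)                  # recovered z matches z_true
```

The same descent-on-z recipe drives the denoising-by-projection work above: find the nearest point on the generator's manifold to a corrupted image.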

A User Simulator for Task-Completion Dialogues

10 code implementations 17 Dec 2016 Xiujun Li, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, Yun-Nung Chen

Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator.

Task-Oriented Dialogue Systems

Context Matters: Refining Object Detection in Video with Recurrent Neural Networks

no code implementations 15 Jul 2016 Subarna Tripathi, Zachary C. Lipton, Serge Belongie, Truong Nguyen

Then we train a recurrent neural network that takes as input sequences of pseudo-labeled frames and optimizes an objective that encourages both accuracy on the target frame and consistency across consecutive frames.

Object Detection

The Mythos of Model Interpretability

1 code implementation 10 Jun 2016 Zachary C. Lipton

First, we examine the motivations underlying interest in interpretability, finding them to be diverse and occasionally discordant.

Stuck in a What? Adventures in Weight Space

no code implementations 23 Feb 2016 Zachary C. Lipton

As neural networks are typically over-complete, it's easy to show the existence of vast continuous regions of weight space with equal loss.
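One such continuous region is easy to exhibit directly: for a ReLU network, scaling a hidden unit's incoming weights by any c > 0 and its outgoing weights by 1/c leaves the function, and hence the loss, unchanged. A small numpy check (the toy network and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(2, 4))   # hidden -> output weights
x = rng.normal(size=3)

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

c = 3.7                        # any c > 0 traces out an equal-loss path
W1s, W2s = W1.copy(), W2.copy()
W1s[0] *= c                    # scale hidden unit 0's incoming weights
W2s[:, 0] /= c                 # undo the scaling on its outgoing weights

print(forward(W1, W2, x))
print(forward(W1s, W2s, x))    # identical outputs
```

Because this holds for every c > 0 and every hidden unit, the symmetry alone generates continuous manifolds of weights with identical loss.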

Learning to Diagnose with LSTM Recurrent Neural Networks

no code implementations 11 Nov 2015 Zachary C. Lipton, David C. Kale, Charles Elkan, Randall Wetzel

We present the first study to empirically evaluate the ability of LSTMs to recognize patterns in multivariate time series of clinical measurements.

Time Series

Generative Concatenative Nets Jointly Learn to Write and Classify Reviews

1 code implementation 11 Nov 2015 Zachary C. Lipton, Sharad Vikram, Julian McAuley

A recommender system's basic task is to estimate how users will respond to unseen items.

Phenotyping of Clinical Time Series with LSTM Recurrent Neural Networks

no code implementations 26 Oct 2015 Zachary C. Lipton, David C. Kale, Randall C. Wetzel

We present a novel application of LSTM recurrent neural networks to multilabel classification of diagnoses given variable-length time series of clinical measurements.

Classification General Classification +1

A Critical Review of Recurrent Neural Networks for Sequence Learning

2 code implementations 29 May 2015 Zachary C. Lipton, John Berkowitz, Charles Elkan

Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes.

Handwriting Recognition Image Captioning +5

Efficient Elastic Net Regularization for Sparse Linear Models

no code implementations 24 May 2015 Zachary C. Lipton, Charles Elkan

This paper provides closed-form updates for the popular squared norm $\ell^2_2$ and elastic net regularizers.
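As a hedged sketch of what a closed-form elastic-net update looks like: the proximal step for a penalty of lr * (l1 * |w| + (l2 / 2) * w^2) is soft-thresholding followed by multiplicative shrinkage. This illustrates the style of update the paper derives; its exact lazy-update bookkeeping for sparse data is more involved, and the function name is mine:

```python
import numpy as np

def elastic_net_prox(v, lr, l1, l2):
    """Closed-form proximal step for the elastic-net penalty:
    soft-threshold by lr * l1, then shrink by 1 / (1 + lr * l2)."""
    return np.sign(v) * np.maximum(np.abs(v) - lr * l1, 0.0) / (1.0 + lr * l2)

w = np.array([0.5, -0.05, 1.2, 0.0])
print(elastic_net_prox(w, lr=1.0, l1=0.2, l2=0.5))
```

Weights whose magnitude falls below the threshold snap exactly to zero, which is what makes elastic-net models sparse; the l2 term shrinks the survivors.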

Differential Privacy and Machine Learning: a Survey and Review

no code implementations 24 Dec 2014 Zhanglong Ji, Zachary C. Lipton, Charles Elkan

The objective of machine learning is to extract useful information from data, while privacy is preserved by concealing information.
