Search Results for author: Lawrence Carin

Found 224 papers, 60 papers with code

What Makes Good In-Context Examples for GPT-3?

no code implementations DeeLIO (ACL) 2022 Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3’s in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically-similar to a test query sample to formulate its corresponding prompt.

Natural Language Understanding Open-Domain Question Answering +2
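Below is a minimal, illustrative sketch of the retrieval idea in the snippet above: embed candidate training examples, retrieve the nearest neighbors of the test query, and assemble them into a prompt. TF-IDF stands in for the neural sentence encoder used in the paper, and all data and names below are hypothetical.

```python
# Retrieval-based in-context example selection (illustrative sketch only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

train_examples = [
    ("the movie was wonderful", "positive"),
    ("a dull, lifeless film", "negative"),
    ("I loved every minute", "positive"),
    ("terrible acting and worse writing", "negative"),
]
test_query = "an absolutely delightful picture"

# TF-IDF is a stand-in for a neural sentence encoder.
vectorizer = TfidfVectorizer().fit([t for t, _ in train_examples] + [test_query])
train_vecs = vectorizer.transform([t for t, _ in train_examples])

# Retrieve the k training examples most similar to the test query.
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(train_vecs)
_, idx = knn.kneighbors(vectorizer.transform([test_query]))

# Assemble the prompt from the retrieved examples plus the query itself.
prompt = "\n".join(f"Review: {train_examples[i][0]}\nSentiment: {train_examples[i][1]}"
                   for i in idx[0])
prompt += f"\nReview: {test_query}\nSentiment:"
print(prompt)
```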

An Embedding Model for Estimating Legislative Preferences from the Frequency and Sentiment of Tweets

no code implementations EMNLP 2020 Gregory Spell, Brian Guay, Sunshine Hillygus, Lawrence Carin

Legislator preferences are typically represented as measures of general ideology estimated from roll call votes on legislation, potentially masking important nuances in legislators' political attitudes.

Open World Classification with Adaptive Negative Samples

no code implementations 9 Mar 2023 Ke Bai, Guoyin Wang, Jiwei Li, Sunghyun Park, Sungjin Lee, Puyang Xu, Ricardo Henao, Lawrence Carin

Open world classification is a task in natural language processing with key practical relevance and impact.

Classification

Pushing the Efficiency Limit Using Structured Sparse Convolutions

no code implementations 23 Oct 2022 Vinay Kumar Verma, Nikhil Mehta, Shijing Si, Ricardo Henao, Lawrence Carin

Weight pruning is among the most popular approaches for compressing deep convolutional neural networks.

Pseudo-OOD training for robust language models

no code implementations 17 Oct 2022 Dhanasekar Sundararaman, Nikhil Mehta, Lawrence Carin

The model is fine-tuned by introducing a new regularization loss that separates the embeddings of IND and OOD data, which leads to significant gains on the OOD prediction task during testing.

Out of Distribution (OOD) Detection
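One plausible form of the embedding-separation regularizer described above is sketched below; the exact loss used in the paper is not reproduced here, and all tensors are toy placeholders.

```python
# Hedged sketch: pull in-distribution (IND) embeddings toward their centroid and
# push pseudo-OOD embeddings at least `margin` away from it.
import torch
import torch.nn.functional as F

def ind_ood_separation_loss(ind_emb: torch.Tensor, ood_emb: torch.Tensor, margin: float = 1.0):
    centroid = ind_emb.mean(dim=0, keepdim=True)                       # [1, d] IND centroid
    pull = (ind_emb - centroid).norm(dim=1).mean()                     # keep IND compact
    push = F.relu(margin - (ood_emb - centroid).norm(dim=1)).mean()    # repel OOD past the margin
    return pull + push

# Toy usage with random embeddings.
ind = torch.randn(16, 64, requires_grad=True)
ood = torch.randn(16, 64, requires_grad=True)
loss = ind_ood_separation_loss(ind, ood)
loss.backward()
```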

Collaborative Anomaly Detection

no code implementations 20 Sep 2022 Ke Bai, Aonan Zhang, Zhizhong Li, Ricardo Henao, Chong Wang, Lawrence Carin

In recommendation systems, items are likely to be exposed to various users and we would like to learn about the familiarity of a new user with an existing item.

Anomaly Detection Density Estimation +1

Improving Downstream Task Performance by Treating Numbers as Entities

no code implementations 7 May 2022 Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Liyan Xu, Lawrence Carin

Numbers are essential components of text, like any other word tokens, from which natural language processing (NLP) models are built and deployed.

Classification Question Answering

elBERto: Self-supervised Commonsense Learning for Question Answering

no code implementations 17 Mar 2022 Xunlin Zhan, Yuan Li, Xiao Dong, Xiaodan Liang, Zhiting Hu, Lawrence Carin

Commonsense question answering requires reasoning about everyday situations and causes and effects implicit in context.

Question Answering Representation Learning +1

Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations

1 code implementation 25 Feb 2022 Paidamoyo Chapfuwa, Sherri Rose, Lawrence Carin, Edward Meeds, Ricardo Henao

Understanding the effects of these system inputs on system outputs is crucial to have any meaningful model of a dynamical system.

Time Series Analysis

Explainable multiple abnormality classification of chest CT volumes

no code implementations 24 Nov 2021 Rachel Lea Draelos, Lawrence Carin

We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images, in which a model must indicate the regions used to predict each abnormality.

Classification Multiple Instance Learning +1

Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping

no code implementations 4 Nov 2021 Junya Chen, Sijia Wang, Lawrence Carin, Chenyang Tao

Distributed learning has become an integral tool for scaling up machine learning and addressing the growing need for data privacy.

Distributed Optimization

Variational Inference with Hölder Bounds

no code implementations 4 Nov 2021 Junya Chen, Danni Lu, Zidi Xiu, Ke Bai, Lawrence Carin, Chenyang Tao

In this work, we present a careful analysis of the thermodynamic variational objective (TVO), bridging the gap between existing variational objectives and shedding new insights to advance the field.

Variational Inference

Hölder Bounds for Sensitivity Analysis in Causal Reasoning

no code implementations 9 Jul 2021 Serge Assaad, Shuxi Zeng, Henry Pfister, Fan Li, Lawrence Carin

We examine interval estimation of the effect of a treatment T on an outcome Y given the existence of an unobserved confounder U.

Gradient Importance Learning for Incomplete Observations

1 code implementation ICLR 2022 Qitong Gao, Dong Wang, Joshua D. Amason, Siyang Yuan, Chenyang Tao, Ricardo Henao, Majda Hadziahmetovic, Lawrence Carin, Miroslav Pajic

Though recent works have developed methods that can generate estimates (or imputations) of the missing entries in a dataset to facilitate downstream analysis, most depend on assumptions that may not align with real-world applications and could suffer from poor performance in subsequent tasks such as classification.

Imputation Time Series Analysis

Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization

1 code implementation 2 Jul 2021 Qing Guo, Junya Chen, Dong Wang, Yuewei Yang, Xinwei Deng, Lawrence Carin, Fan Li, Jing Huang, Chenyang Tao

Successful applications of InfoNCE and its variants have popularized the use of contrastive variational mutual information (MI) estimators in machine learning.

Mutual Information Estimation
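For reference, a standard InfoNCE lower bound on mutual information (Oord et al., 2018), the starting point for the contrastive estimators discussed above:

```latex
% InfoNCE lower bound with critic f and N paired samples (x_i, y_i).
I(X;Y) \;\ge\; \mathbb{E}\!\left[\frac{1}{N}\sum_{i=1}^{N}
  \log \frac{e^{f(x_i, y_i)}}{\frac{1}{N}\sum_{j=1}^{N} e^{f(x_i, y_j)}}\right]
```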

SpanPredict: Extraction of Predictive Document Spans with Neural Attention

no code implementations NAACL 2021 Vivek Subramanian, Matthew Engelhard, Sam Berchuck, Liqun Chen, Ricardo Henao, Lawrence Carin

In many natural language processing applications, identifying predictive text can be as important as the predictions themselves.

Towards Fair Federated Learning with Zero-Shot Data Augmentation

no code implementations 27 Apr 2021 Weituo Hao, Mostafa El-Khamy, Jungwon Lee, Jianyi Zhang, Kevin J Liang, Changyou Chen, Lawrence Carin

Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.

Data Augmentation Fairness +1

Malignancy Prediction and Lesion Identification from Clinical Dermatological Images

no code implementations 2 Apr 2021 Meng Xia, Meenal K. Kheterpal, Samantha C. Wong, Christine Park, William Ratliff, Lawrence Carin, Ricardo Henao

We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images, which can be indistinctly acquired via smartphone or dermoscopy capture.

Efficient Feature Transformations for Discriminative and Generative Continual Learning

1 code implementation CVPR 2021 Vinay Kumar Verma, Kevin J Liang, Nikhil Mehta, Piyush Rai, Lawrence Carin

However, the growth in the number of additional parameters of many of these types of methods can be computationally expensive at larger scales, at times prohibitively so.

Continual Learning

Improving Zero-shot Voice Style Transfer via Disentangled Representation Learning

1 code implementation ICLR 2021 Siyang Yuan, Pengyu Cheng, Ruiyi Zhang, Weituo Hao, Zhe Gan, Lawrence Carin

Voice style transfer, also called voice conversion, seeks to modify one speaker's voice to generate speech as if it came from another (target) speaker.

Representation Learning Style Transfer +1

FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders

no code implementations ICLR 2021 Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, Lawrence Carin

Pretrained text encoders, such as BERT, have been applied increasingly in various natural language processing (NLP) tasks, and have recently demonstrated significant performance gains.

Contrastive Learning Fairness

Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot Learning

no code implementations 23 Feb 2021 Vinay Kumar Verma, Kevin Liang, Nikhil Mehta, Lawrence Carin

Zero-shot learning (ZSL) has been shown to be a promising approach to generalizing a model to categories unseen during training by leveraging class attributes, but challenges still remain.

Generalized Zero-Shot Learning Meta-Learning

FLOP: Federated Learning on Medical Datasets using Partial Networks

no code implementations 10 Feb 2021 Qian Yang, Jianyi Zhang, Weituo Hao, Gregory Spell, Lawrence Carin

While different data-driven deep learning models have been developed to mitigate the diagnosis of COVID-19, the data itself is still scarce due to patient privacy concerns.

Federated Learning

What Makes Good In-Context Examples for GPT-3?

3 code implementations 17 Jan 2021 Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically-similar to a test sample to formulate its corresponding prompt.

Few-Shot Learning Natural Language Understanding +3

Reinforcement Learning for Flexibility Design Problems

no code implementations 2 Jan 2021 Yehua Wei, Lei Zhang, Ruiyi Zhang, Shijing Si, Hao Zhang, Lawrence Carin

Flexibility design problems are a class of problems that appear in strategic decision-making across industries, where the objective is to design a (e.g., manufacturing) network that affords flexibility and adaptivity.

Decision Making reinforcement-learning +1

Towards Robust and Efficient Contrastive Textual Representation Learning

no code implementations 1 Jan 2021 Liqun Chen, Yizhe Zhang, Dianqi Li, Chenyang Tao, Dong Wang, Lawrence Carin

There has been growing interest in representation learning for text data, based on theoretical arguments and empirical evidence.

Contrastive Learning Representation Learning

Wasserstein Contrastive Representation Distillation

no code implementations CVPR 2021 Liqun Chen, Dong Wang, Zhe Gan, Jingjing Liu, Ricardo Henao, Lawrence Carin

The primary goal of knowledge distillation (KD) is to encapsulate the information of a model learned from a teacher network into a student network, with the latter being more compact than the former.

Contrastive Learning Knowledge Distillation +2

Learning Graphons via Structured Gromov-Wasserstein Barycenters

1 code implementation 10 Dec 2020 Hongteng Xu, Dixin Luo, Lawrence Carin, Hongyuan Zha

Accordingly, given a set of graphs generated by an underlying graphon, we learn the corresponding step function as the Gromov-Wasserstein barycenter of the given graphs.

Reconsidering Generative Objectives For Counterfactual Reasoning

1 code implementation NeurIPS 2020 Danni Lu, Chenyang Tao, Junya Chen, Fan Li, Feng Guo, Lawrence Carin

As a step towards more flexible, scalable and accurate ITE estimation, we present a novel generative Bayesian estimation framework that integrates representation learning, adversarial matching and causal estimation.

Causal Inference Representation Learning

Calibrating CNNs for Lifelong Learning

no code implementations NeurIPS 2020 Pravendra Singh, Vinay Kumar Verma, Pratik Mazumder, Lawrence Carin, Piyush Rai

Further, our approach does not require storing data samples from the old tasks, which is done by many replay based methods.

Continual Learning

Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer

1 code implementation NeurIPS 2021 Zidi Xiu, Junya Chen, Ricardo Henao, Benjamin Goldstein, Lawrence Carin, Chenyang Tao

Dealing with severe class imbalance poses a major challenge for real-world applications, especially when the accurate classification and generalization of minority classes is of primary interest.

Inductive Bias Transfer Learning

Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks

1 code implementation 17 Nov 2020 Rachel Lea Draelos, Lawrence Carin

Explanation methods facilitate the development of models that learn meaningful concepts and avoid exploiting spurious correlations.

General Classification Image Classification

Semantic Matching for Sequence-to-Sequence Learning

no code implementations Findings of the Association for Computational Linguistics 2020 Ruiyi Zhang, Changyou Chen, Xinyuan Zhang, Ke Bai, Lawrence Carin

In sequence-to-sequence models, classical optimal transport (OT) can be applied to semantically match generated sentences with target sentences.
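A generic Sinkhorn sketch of entropic-regularized optimal transport between two sets of word embeddings, illustrating the kind of soft word-level matching referred to above; this is not the paper's exact objective, and the embeddings are toy vectors.

```python
# Entropic-regularized OT between generated and reference word embeddings.
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform word weights
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]                 # transport plan

gen = np.random.randn(5, 50)    # generated-sentence word embeddings (toy)
ref = np.random.randn(7, 50)    # reference-sentence word embeddings (toy)
gen /= np.linalg.norm(gen, axis=1, keepdims=True)
ref /= np.linalg.norm(ref, axis=1, keepdims=True)
cost = 1.0 - gen @ ref.T        # cosine distance between word pairs
plan = sinkhorn(cost)
print(float((plan * cost).sum()))   # OT matching cost between the sentences
```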

Counterfactual Representation Learning with Balancing Weights

no code implementations 23 Oct 2020 Serge Assaad, Shuxi Zeng, Chenyang Tao, Shounak Datta, Nikhil Mehta, Ricardo Henao, Fan Li, Lawrence Carin

A key to causal inference with observational data is achieving balance in predictive features associated with each treatment type.

Causal Inference Representation Learning

Double Robust Representation Learning for Counterfactual Prediction

1 code implementation 15 Oct 2020 Shuxi Zeng, Serge Assaad, Chenyang Tao, Shounak Datta, Lawrence Carin, Fan Li

Causal inference, or counterfactual prediction, is central to decision making in healthcare, policy and social sciences.

Causal Inference Decision Making +1

RetiNerveNet: Using Recursive Deep Learning to Estimate Pointwise 24-2 Visual Field Data based on Retinal Structure

no code implementations 15 Oct 2020 Shounak Datta, Eduardo B. Mariottoni, David Dov, Alessandro A. Jammal, Lawrence Carin, Felipe A. Medeiros

Due to the SAP test's innate difficulty and its high test-retest variability, we propose the RetiNerveNet, a deep convolutional recursive neural network for obtaining estimates of the SAP visual field.

Background Adaptive Faster R-CNN for Semi-Supervised Convolutional Object Detection of Threats in X-Ray Images

no code implementations 2 Oct 2020 John B. Sigman, Gregory P. Spell, Kevin J Liang, Lawrence Carin

The data sources described earlier make two "domains": a hand-collected data domain of images with threats, and a real-world domain of images assumed without threats.

Domain Adaptation object-detection +1

Weakly supervised cross-domain alignment with optimal transport

no code implementations 14 Aug 2020 Siyang Yuan, Ke Bai, Liqun Chen, Yizhe Zhang, Chenyang Tao, Chunyuan Li, Guoyin Wang, Ricardo Henao, Lawrence Carin

Cross-domain alignment between image objects and text sequences is key to many visual-language tasks, and it poses a fundamental challenge to both computer vision and natural language processing.

WAFFLe: Weight Anonymized Factorization for Federated Learning

no code implementations 13 Aug 2020 Weituo Hao, Nikhil Mehta, Kevin J Liang, Pengyu Cheng, Mostafa El-Khamy, Lawrence Carin

Experiments on MNIST, FashionMNIST, and CIFAR-10 demonstrate WAFFLe's significant improvement to local test performance and fairness while simultaneously providing an extra layer of security.

Fairness Federated Learning

Bridging Maximum Likelihood and Adversarial Learning via $α$-Divergence

no code implementations 13 Jul 2020 Miaoyun Zhao, Yulai Cong, Shuyang Dai, Lawrence Carin

Maximum likelihood (ML) and adversarial learning are two popular approaches for training generative models, and from many perspectives these techniques are complementary.
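One common parameterization of the α-divergence, shown for context; it recovers KL(p‖q) as α→1 and KL(q‖p) as α→0, which is the sense in which an α-family can interpolate between likelihood-based and adversarial-style objectives:

```latex
% Alpha-divergence (one standard parameterization).
D_{\alpha}(p \,\|\, q) \;=\; \frac{1}{\alpha(\alpha - 1)}
  \left( \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \;-\; 1 \right)
```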

Graph Optimal Transport for Cross-Domain Alignment

1 code implementation ICML 2020 Liqun Chen, Zhe Gan, Yu Cheng, Linjie Li, Lawrence Carin, Jingjing Liu

In GOT, cross-domain alignment is formulated as a graph matching problem, by representing entities into a dynamically-constructed graph.

Graph Matching Image Captioning +7

Students Need More Attention: BERT-based Attention Model for Small Data with Application to Automatic Patient Message Triage

1 code implementation 22 Jun 2020 Shijing Si, Rui Wang, Jedrek Wosik, Hao Zhang, David Dov, Guoyin Wang, Ricardo Henao, Lawrence Carin

Small and imbalanced datasets commonly seen in healthcare represent a challenge when training classifiers based on deep learning models.

GO Hessian for Expectation-Based Objectives

1 code implementation 16 Jun 2020 Yulai Cong, Miaoyun Zhao, Jianqiao Li, Junya Chen, Lawrence Carin

An unbiased low-variance gradient estimator, termed GO gradient, was proposed recently for expectation-based objectives $\mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})} [f(\boldsymbol{y})]$, where the random variable (RV) $\boldsymbol{y}$ may be drawn from a stochastic computation graph with continuous (non-reparameterizable) internal nodes and continuous/discrete leaves.

GAN Memory with No Forgetting

1 code implementation NeurIPS 2020 Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, Lawrence Carin

As a fundamental issue in lifelong learning, catastrophic forgetting is directly caused by inaccessible historical data; accordingly, if the data (information) were memorized perfectly, no forgetting should be expected.

Towards Understanding Fast Adversarial Training

no code implementations 4 Jun 2020 Bai Li, Shiqi Wang, Suman Jana, Lawrence Carin

Current neural-network-based classifiers are susceptible to adversarial examples.

Hierarchical Optimal Transport for Robust Multi-View Learning

no code implementations 4 Jun 2020 Dixin Luo, Hongteng Xu, Lawrence Carin

Traditional multi-view learning methods often rely on two assumptions: (i) the samples in different views are well-aligned, and (ii) their representations in latent space obey the same distribution.

Multi-View Learning

Improving Disentangled Text Representation Learning with Information-Theoretic Guidance

no code implementations ACL 2020 Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, Lawrence Carin

Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc.

Conditional Text Generation Representation Learning +2

Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations

no code implementations 8 May 2020 John McManigle, Raquel Bartz, Lawrence Carin

A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation (similarity transform parameters) of the chest and segmentation of radiographic annotations.

Classification General Classification +3

Reward Constrained Interactive Recommendation with Natural Language Feedback

no code implementations 4 May 2020 Ruiyi Zhang, Tong Yu, Yilin Shen, Hongxia Jin, Changyou Chen, Lawrence Carin

Text-based interactive recommendation provides richer user feedback and has demonstrated advantages over traditional interactive recommender systems.

Recommendation Systems reinforcement-learning +2

RaCT: Toward Amortized Ranking-Critical Training For Collaborative Filtering

1 code implementation ICLR 2020 Sam Lobel*, Chunyuan Li*, Jianfeng Gao, Lawrence Carin

We investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to more directly maximize ranking-based objective functions.

Collaborative Filtering Learning-To-Rank +2

APo-VAE: Text Generation in Hyperbolic Space

no code implementations NAACL 2021 Shuyang Dai, Zhe Gan, Yu Cheng, Chenyang Tao, Lawrence Carin, Jingjing Liu

In this paper, we investigate text generation in a hyperbolic latent space to learn continuous hierarchical representations.

Language Modelling Response Generation +1

Transferable Perturbations of Deep Feature Distributions

no code implementations ICLR 2020 Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen

Almost all current adversarial attacks of CNN classifiers rely on information derived from the output layer of the network.

Adversarial Attack

Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors

no code implementations 21 Apr 2020 Nikhil Mehta, Kevin J Liang, Vinay K Verma, Lawrence Carin

Naively trained neural networks tend to experience catastrophic forgetting in sequential task settings, where data from previous tasks are unavailable.

Continual Learning Transfer Learning

Towards Practical Lottery Ticket Hypothesis for Adversarial Training

1 code implementation 6 Mar 2020 Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana

Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network, there exist trainable sub-networks performing equally or better than the original model with commensurate training steps.

Survival Cluster Analysis

1 code implementation 29 Feb 2020 Paidamoyo Chapfuwa, Chunyuan Li, Nikhil Mehta, Lawrence Carin, Ricardo Henao

As a result, there is an unmet need in survival analysis for identifying subpopulations with distinct risk profiles, while jointly accounting for accurate individualized time-to-event predictions.

Survival Analysis

On Leveraging Pretrained GANs for Generation with Limited Data

1 code implementation ICML 2020 Miaoyun Zhao, Yulai Cong, Lawrence Carin

Demonstrated by natural-image generation, we reveal that low-level filters (those close to observations) of both the generator and discriminator of pretrained GANs can be transferred to facilitate generation in a perceptually-distinct target domain with limited training data.

Image Generation Transfer Learning

Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training

1 code implementation CVPR 2020 Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, Jianfeng Gao

By training on a large amount of image-text-action triplets in a self-supervised learning manner, the pre-trained model provides generic representations of visual environments and language instructions.

Navigate Self-Supervised Learning +2

Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale Chest Computed Tomography Volumes

no code implementations 12 Feb 2020 Rachel Lea Draelos, David Dov, Maciej A. Mazurowski, Joseph Y. Lo, Ricardo Henao, Geoffrey D. Rubin, Lawrence Carin

This model reached a classification performance of AUROC greater than 0.90 for 18 abnormalities, with an average AUROC of 0.773 for all 83 abnormalities, demonstrating the feasibility of learning from unfiltered whole volume CT data.

BIG-bench Machine Learning Computed Tomography (CT) +1

Object Detection as a Positive-Unlabeled Problem

no code implementations 11 Feb 2020 Yuewei Yang, Kevin J Liang, Lawrence Carin

These missing annotations can be problematic, as the standard cross-entropy loss employed to train object detection models treats classification as a positive-negative (PN) problem: unlabeled regions are implicitly assumed to be background.

General Classification object-detection +1
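For context, a standard non-negative positive-unlabeled (PU) risk estimator from the PU-learning literature (Kiryo et al., 2017); the detection-specific formulation used in the paper may differ:

```latex
% Non-negative PU risk with class prior pi_p: positive-sample risks R_p^{+/-}
% and unlabeled-sample risk R_u^-, so unlabeled regions are not simply treated
% as negatives (background).
\hat{R}_{\mathrm{pu}} \;=\; \pi_p \hat{R}_p^{+}
  \;+\; \max\!\left\{0,\; \hat{R}_u^{-} - \pi_p \hat{R}_p^{-}\right\}
```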

Toward Automatic Threat Recognition for Airport X-ray Baggage Screening with Deep Convolutional Object Detection

no code implementations 13 Dec 2019 Kevin J Liang, John B. Sigman, Gregory P. Spell, Dan Strellis, William Chang, Felix Liu, Tejas Mehta, Lawrence Carin

We show performance of our models on held-out evaluation sets, analyze several design parameters, and demonstrate the potential of such systems for automated detection of threats that can be found in airports.

object-detection Object Detection

Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction

2 code implementations CVPR 2020 Yantao Lu, Yunhan Jia, Jian-Yu Wang, Bai Li, Weiheng Chai, Lawrence Carin, Senem Velipasalar

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models.

Adversarial Attack Image Classification +3

Graph-Driven Generative Models for Heterogeneous Multi-Task Learning

no code implementations 20 Nov 2019 Wenlin Wang, Hongteng Xu, Zhe Gan, Bai Li, Guoyin Wang, Liqun Chen, Qian Yang, Wenqi Wang, Lawrence Carin

We propose a novel graph-driven generative model, that unifies multiple heterogeneous learning tasks into the same framework.

Multi-Task Learning Type prediction

Learning to Recommend from Sparse Data via Generative User Feedback

no code implementations ICLR 2020 Wenlin Wang, Hongteng Xu, Ruiyi Zhang, Wenqi Wang, Piyush Rai, Lawrence Carin

To address this, we propose a learning framework that improves collaborative filtering with a synthetic feedback loop (CF-SFL) to simulate the user feedback.

Collaborative Filtering Recommendation Systems

Zero-Shot Recognition via Optimal Transport

no code implementations 20 Oct 2019 Wenlin Wang, Hongteng Xu, Guoyin Wang, Wenqi Wang, Lawrence Carin

Specifically, we build a conditional generative model to generate features from seen-class attributes, and establish an optimal transport between the distribution of the generated features and that of the real features.

Generalized Zero-Shot Learning

Kernel-Based Approaches for Sequence Modeling: Connections to Neural Methods

1 code implementation NeurIPS 2019 Kevin J Liang, Guoyin Wang, Yitong Li, Ricardo Henao, Lawrence Carin

We investigate time-dependent data analysis from the perspective of recurrent kernel machines, from which models with hidden units and gated memory cells arise naturally.

Straight-Through Estimator as Projected Wasserstein Gradient Flow

no code implementations 5 Oct 2019 Pengyu Cheng, Chang Liu, Chunyuan Li, Dinghan Shen, Ricardo Henao, Lawrence Carin

The Straight-Through (ST) estimator is a widely used technique for back-propagating gradients through discrete random variables.
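A minimal sketch of the standard straight-through trick: use a hard threshold in the forward pass while letting gradients flow through the underlying probabilities (illustrative only).

```python
# Straight-through estimator: forward value is the hard sample, backward
# gradient is taken as if the operation were the identity on `probs`.
import torch

probs = torch.sigmoid(torch.randn(4, requires_grad=True))
hard = (probs > 0.5).float()
st_sample = hard + probs - probs.detach()   # value == hard, gradient flows to probs
st_sample.sum().backward()
```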

Fused Gromov-Wasserstein Alignment for Hawkes Processes

no code implementations 4 Oct 2019 Dixin Luo, Hongteng Xu, Lawrence Carin

Accordingly, the learned optimal transport reflects the correspondence between the event types of these two Hawkes processes.

Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation

no code implementations 11 Sep 2019 Shuyang Dai, Yu Cheng, Yizhe Zhang, Zhe Gan, Jingjing Liu, Lawrence Carin

Recent unsupervised approaches to domain adaptation primarily focus on minimizing the gap between the source and the target domains through refining the feature generator, in order to learn a better alignment between the two domains.

Unsupervised Domain Adaptation

LMVP: Video Predictor with Leaked Motion Information

no code implementations 24 Jun 2019 Dong Wang, Yitong Li, Wei Cao, Liqun Chen, Qi Wei, Lawrence Carin

We propose a Leaked Motion Video Predictor (LMVP) to predict future frames by capturing the spatial and temporal dependencies from given inputs.

Adversarial Self-Paced Learning for Mixture Models of Hawkes Processes

no code implementations 20 Jun 2019 Dixin Luo, Hongteng Xu, Lawrence Carin

Instead of learning a mixture model directly from a set of event sequences drawn from different Hawkes processes, the proposed method learns the target model iteratively, which generates "easy" sequences and uses them in an adversarial and self-paced manner.

Data Augmentation

Learning Compressed Sentence Representations for On-Device Text Processing

1 code implementation ACL 2019 Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin

Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems.

Retrieval Sentence Embeddings

Interpretable ICD Code Embeddings with Self- and Mutual-Attention Mechanisms

no code implementations 13 Jun 2019 Dixin Luo, Hongteng Xu, Lawrence Carin

The proposed method achieves clinically-interpretable embeddings of ICD codes, and outperforms state-of-the-art embedding methods in procedure recommendation.

Towards Amortized Ranking-Critical Training for Collaborative Filtering

1 code implementation 10 Jun 2019 Sam Lobel, Chunyuan Li, Jianfeng Gao, Lawrence Carin

In this paper we investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to directly optimize the non-differentiable quality metrics of interest.

Collaborative Filtering Learning-To-Rank +1

Adaptation Across Extreme Variations using Unlabeled Domain Bridges

no code implementations 5 Jun 2019 Shuyang Dai, Kihyuk Sohn, Yi-Hsuan Tsai, Lawrence Carin, Manmohan Chandraker

We tackle an unsupervised domain adaptation problem for which the domain discrepancy between labeled source and unlabeled target domains is large, due to many factors of inter- and intra-domain variation.

Object Recognition Semantic Segmentation +1

Syntax-Infused Variational Autoencoder for Text Generation

no code implementations ACL 2019 Xinyuan Zhang, Yi Yang, Siyang Yuan, Dinghan Shen, Lawrence Carin

We present a syntax-infused variational autoencoder (SIVAE), that integrates sentences with their syntactic trees to improve the grammar of generated sentences.

Text Generation

Survival Function Matching for Calibrated Time-to-Event Predictions

1 code implementation 21 May 2019 Paidamoyo Chapfuwa, Chenyang Tao, Lawrence Carin, Ricardo Henao

We present a survival function estimator for probabilistic predictions in time-to-event models, based on a neural network model for draws from the distribution of event times, without explicit assumptions on the form of the distribution.

Scalable Gromov-Wasserstein Learning for Graph Partitioning and Matching

1 code implementation NeurIPS 2019 Hongteng Xu, Dixin Luo, Lawrence Carin

Using this concept, we extend our method to multi-graph partitioning and matching by learning a Gromov-Wasserstein barycenter graph for multiple observed graphs; the barycenter graph plays the role of the disconnected graph, and since it is learned, so is the clustering.

Graph Matching graph partitioning

On Norm-Agnostic Robustness of Adversarial Training

no code implementations 15 May 2019 Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin

Adversarial examples are carefully perturbed inputs for fooling machine learning models.

BIG-bench Machine Learning

Stochastic Blockmodels meet Graph Neural Networks

no code implementations Proceedings of the 36th International Conference on Machine Learning 2019 Nikhil Mehta, Lawrence Carin, Piyush Rai

Although we develop this framework for a particular type of SBM, namely the overlapping stochastic blockmodel, the proposed framework can be adapted readily for other types of SBMs.

Link Prediction

Second-Order Adversarial Attack and Certifiable Robustness

no code implementations ICLR 2019 Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin

In this paper, we propose a powerful second-order attack method that reduces the accuracy of the defense model by Madry et al. (2017).

Adversarial Attack

Thyroid Cancer Malignancy Prediction From Whole Slide Cytopathology Images

no code implementations 29 Mar 2019 David Dov, Shahar Kovalsky, Jonathan Cohen, Danielle Range, Ricardo Henao, Lawrence Carin

We consider preoperative prediction of thyroid cancer based on ultra-high-resolution whole-slide cytopathology images.

Multiple Instance Learning

Scalable Thompson Sampling via Optimal Transport

no code implementations 19 Feb 2019 Ruiyi Zhang, Zheng Wen, Changyou Chen, Lawrence Carin

Thompson sampling (TS) is a class of algorithms for sequential decision-making, which requires maintaining a posterior distribution over a model.

Decision Making Thompson Sampling
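Classical Beta-Bernoulli Thompson sampling on a toy bandit, illustrating the posterior-sample-then-act loop the snippet describes; the paper's optimal-transport-based sampler is not shown here.

```python
# Beta-Bernoulli Thompson sampling: sample from the posterior, act greedily
# on the sample, then update the posterior with the observed reward.
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.3, 0.5, 0.7]             # unknown arm reward probabilities (toy)
alpha = np.ones(3)                        # Beta posterior parameters per arm
beta = np.ones(3)

for _ in range(1000):
    theta = rng.beta(alpha, beta)         # one posterior sample per arm
    arm = int(np.argmax(theta))           # act greedily on the sample
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward                  # conjugate posterior update
    beta[arm] += 1 - reward

print(alpha / (alpha + beta))             # posterior mean reward estimates
```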

Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models

no code implementations ACL 2019 Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, Lawrence Carin

Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation with latent variables.

Text Generation

Gromov-Wasserstein Learning for Graph Matching and Node Embedding

2 code implementations 17 Jan 2019 Hongteng Xu, Dixin Luo, Hongyuan Zha, Lawrence Carin

A novel Gromov-Wasserstein learning framework is proposed to jointly match (align) graphs and learn embedding vectors for the associated graph nodes.

Graph Matching

GO Gradient for Expectation-Based Objectives

1 code implementation ICLR 2019 Yulai Cong, Miaoyun Zhao, Ke Bai, Lawrence Carin

Within many machine learning algorithms, a fundamental problem concerns efficient calculation of an unbiased gradient wrt parameters $\boldsymbol{\gamma}$ for expectation-based objectives $\mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})} [f(\boldsymbol{y})]$.
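For context, the classical score-function (REINFORCE) identity for such expectation-based objectives; the GO gradient is an alternative unbiased estimator designed for lower variance than this form:

```latex
% Score-function (REINFORCE) gradient identity.
\nabla_{\boldsymbol{\gamma}}\,
  \mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})}\!\left[f(\boldsymbol{y})\right]
  \;=\;
  \mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})}\!\left[
    f(\boldsymbol{y})\, \nabla_{\boldsymbol{\gamma}} \log q_{\boldsymbol{\gamma}}(\boldsymbol{y})
  \right]
```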

Adversarial Learning of a Sampler Based on an Unnormalized Distribution

1 code implementation 3 Jan 2019 Chunyuan Li, Ke Bai, Jianqiao Li, Guoyin Wang, Changyou Chen, Lawrence Carin

We investigate adversarial learning in the case when only an unnormalized form of the density can be accessed, rather than samples.

Q-Learning

Revisiting the Softmax Bellman Operator: New Benefits and New Perspective

2 code implementations 2 Dec 2018 Zhao Song, Ronald E. Parr, Lawrence Carin

The impact of softmax on the value function itself in reinforcement learning (RL) is often viewed as problematic because it leads to sub-optimal value (or Q) functions and interferes with the contraction properties of the Bellman operator.

Atari Games Q-Learning
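The Boltzmann-softmax form of the Bellman backup under discussion, with temperature τ; replacing the max over next actions with a softmax-weighted average is what complicates the usual contraction argument:

```latex
% Softmax (Boltzmann) Bellman operator with temperature tau.
(\mathcal{T}_{\tau} Q)(s, a) \;=\; r(s, a) \;+\;
  \gamma \, \mathbb{E}_{s'}\!\left[
    \sum_{a'} \frac{e^{Q(s', a')/\tau}}{\sum_{b} e^{Q(s', b)/\tau}}\, Q(s', a')
  \right]
```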

Generative Adversarial Network Training is a Continual Learning Problem

no code implementations ICLR 2019 Kevin J Liang, Chunyuan Li, Guoyin Wang, Lawrence Carin

We hypothesize that this is at least in part due to the evolution of the generator distribution and the catastrophic forgetting tendency of neural networks, which leads to the discriminator losing the ability to remember synthesized samples from previous instantiations of the generator.

Continual Learning Text Generation

Sequence Generation with Guider Network

no code implementations 2 Nov 2018 Ruiyi Zhang, Changyou Chen, Zhe Gan, Wenlin Wang, Liqun Chen, Dinghan Shen, Guoyin Wang, Lawrence Carin

Sequence generation with reinforcement learning (RL) has received significant attention recently.

reinforcement Learning

Hierarchically-Structured Variational Autoencoders for Long Text Generation

no code implementations 27 Sep 2018 Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Lawrence Carin

Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation.

Text Generation

Distilled Wasserstein Learning for Word Embedding and Topic Modeling

no code implementations NeurIPS 2018 Hongteng Xu, Wenlin Wang, Wei Liu, Lawrence Carin

When learning the topic model, we leverage a distilled underlying distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports.

Mortality Prediction Word Embeddings

Certified Adversarial Robustness with Additive Noise

2 code implementations NeurIPS 2019 Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin

The existence of adversarial data examples has drawn significant attention in the deep-learning community; such data are seemingly minimally perturbed relative to the original data, but lead to very different outputs from a deep-learning algorithm.

Adversarial Attack Adversarial Robustness
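An illustrative smoothed-classifier prediction via majority vote over Gaussian-noised copies of the input; deriving a certified robustness radius (the paper's focus) requires additional analysis not shown here, and the base classifier below is a toy stand-in.

```python
# Majority-vote prediction under additive Gaussian noise (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def base_classifier(x):
    # Toy stand-in for a trained classifier: two classes split by a hyperplane.
    return int(x.sum() > 0)

def smoothed_predict(x, sigma=0.25, n_samples=1000):
    noisy = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    votes = np.bincount([base_classifier(z) for z in noisy], minlength=2)
    return int(np.argmax(votes))

x = rng.standard_normal(10)
print(smoothed_predict(x))
```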

Predicting Smoking Events with a Time-Varying Semi-Parametric Hawkes Process Model

no code implementations 5 Sep 2018 Matthew Engelhard, Hongteng Xu, Lawrence Carin, Jason A Oliver, Matthew Hallyburton, F Joseph McClernon

Health risks from cigarette smoking -- the leading cause of preventable death in the United States -- can be substantially reduced by quitting.

Stochastic Particle-Optimization Sampling and the Non-Asymptotic Convergence Theory

no code implementations 5 Sep 2018 Jianyi Zhang, Ruiyi Zhang, Lawrence Carin, Changyou Chen

Particle-optimization-based sampling (POS) is a recently developed effective sampling technique that interactively updates a set of particles.

POS

Improved Semantic-Aware Network Embedding with Fine-Grained Word Alignment

no code implementations EMNLP 2018 Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin

Network embeddings, which learn low-dimensional representations for each vertex in a large-scale network, have received considerable attention in recent years.

Link Prediction Network Embedding +1

Policy Optimization as Wasserstein Gradient Flows

no code implementations ICML 2018 Ruiyi Zhang, Changyou Chen, Chunyuan Li, Lawrence Carin

Policy optimization is a core component of reinforcement learning (RL), and most existing RL methods directly optimize parameters of a policy based on maximizing the expected total reward, or its surrogate.
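For context, the standard REINFORCE-style policy gradient for directly maximizing the expected return J(θ), the kind of objective referred to above:

```latex
% Policy gradient of the expected return over trajectories tau ~ pi_theta.
\nabla_{\theta} J(\theta) \;=\;
  \mathbb{E}_{\tau \sim \pi_{\theta}}\!\left[
    \sum_{t} \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t)\, R(\tau)
  \right]
```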

Understanding and Accelerating Particle-Based Variational Inference

1 code implementation 4 Jul 2018 Chang Liu, Jingwei Zhuo, Pengyu Cheng, Ruiyi Zhang, Jun Zhu, Lawrence Carin

Particle-based variational inference methods (ParVIs) have gained attention in the Bayesian inference literature, for their capacity to yield flexible and accurate approximations.

Bayesian Inference Variational Inference

JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets

2 code implementations ICML 2018 Yunchen Pu, Shuyang Dai, Zhe Gan, Wei-Yao Wang, Guoyin Wang, Yizhe Zhang, Ricardo Henao, Lawrence Carin

Distinct from most existing approaches, that only learn conditional distributions, the proposed model aims to learn a joint distribution of multiple random variables (domains).

Diffusion Maps for Textual Network Embedding

no code implementations NeurIPS 2018 Xinyuan Zhang, Yitong Li, Dinghan Shen, Lawrence Carin

Textual network embedding leverages rich text information associated with the network to learn low-dimensional vectorial representations of vertices.

General Classification Link Prediction +1

Joint Embedding of Words and Labels for Text Classification

2 code implementations ACL 2018 Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin

Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences.

General Classification Sentiment Analysis +2

Adversarial Time-to-Event Modeling

4 code implementations ICML 2018 Paidamoyo Chapfuwa, Chenyang Tao, Chunyuan Li, Courtney Page, Benjamin Goldstein, Lawrence Carin, Ricardo Henao

Modern health data science applications leverage abundant molecular and electronic health data, providing opportunities for machine learning to build statistical models to support clinical practice.

Survival Analysis

Superposition-Assisted Stochastic Optimization for Hawkes Processes

no code implementations 13 Feb 2018 Hongteng Xu, Xu Chen, Lawrence Carin

We consider the learning of multi-agent Hawkes processes, a model containing multiple Hawkes processes with shared endogenous impact functions and different exogenous intensities.

Sequential Recommendation Stochastic Optimization
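For reference, the conditional intensity of a univariate Hawkes process, separating the exogenous base rate μ from the endogenous impact kernel φ mentioned above:

```latex
% Hawkes conditional intensity: exogenous base rate plus endogenous
% self-excitation from past events t_i.
\lambda(t) \;=\; \mu \;+\; \sum_{t_i < t} \phi\!\left(t - t_i\right)
```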

Multi-Label Learning from Medical Plain Text with Convolutional Residual Models

no code implementations 15 Jan 2018 Xinyuan Zhang, Ricardo Henao, Zhe Gan, Yitong Li, Lawrence Carin

Since diagnoses are typically correlated, a deep residual network is employed on top of the CNN encoder, to capture label (diagnosis) dependencies and incorporate information directly from the encoded sentence vector.

General Classification Multi-Label Classification +3

On the Use of Word Embeddings Alone to Represent Natural Language Sequences

no code implementations ICLR 2018 Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Ricardo Henao, Lawrence Carin

In this paper, we conduct an extensive comparative study between Simple Word Embeddings-based Models (SWEMs), with no compositional parameters, relative to employing word embeddings within RNN/CNN-based models.

Word Embeddings
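A minimal sketch of a SWEM-style sentence representation: parameter-free mean and max pooling over word embeddings, in contrast to an RNN/CNN encoder; the embeddings below are random toy vectors.

```python
# SWEM-style encoding: aggregate word embeddings with parameter-free pooling.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3}
emb_table = rng.standard_normal((len(vocab), 8))      # toy word embeddings

def swem_encode(tokens):
    vecs = emb_table[[vocab[t] for t in tokens]]
    return np.concatenate([vecs.mean(axis=0), vecs.max(axis=0)])  # mean + max pooling

print(swem_encode(["the", "movie", "was", "great"]).shape)        # (16,)
```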

Learning Structural Weight Uncertainty for Sequential Decision-Making

1 code implementation 30 Dec 2017 Ruiyi Zhang, Chunyuan Li, Changyou Chen, Lawrence Carin

Learning probability distributions on the weights of neural networks (NNs) has recently proven beneficial in many applications.

Decision Making Multi-Armed Bandits +1

Topic Compositional Neural Language Model

no code implementations 28 Dec 2017 Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, Lawrence Carin

The TCNLM learns the global semantic coherence of a document via a neural topic model, and the probability of each learned latent topic is further used to build a Mixture-of-Experts (MoE) language model, where each expert (corresponding to one topic) is a recurrent neural network (RNN) that accounts for learning the local structure of a word sequence.

Language Modelling

On Connecting Stochastic Gradient MCMC and Differential Privacy

no code implementations 25 Dec 2017 Bai Li, Changyou Chen, Hao Liu, Lawrence Carin

Significant success has been realized recently on applying machine learning to real-world applications.

BIG-bench Machine Learning

Cross-Spectral Factor Analysis

no code implementations NeurIPS 2017 Neil Gallagher, Kyle R. Ulrich, Austin Talbot, Kafui Dzirasa, Lawrence Carin, David E. Carlson

To facilitate understanding of network-level synchronization between brain regions, we introduce a novel model of multisite low-frequency neural recordings, such as local field potentials (LFPs) and electroencephalograms (EEGs).

Scalable Model Selection for Belief Networks

no code implementations NeurIPS 2017 Zhao Song, Yusuke Muraoka, Ryohei Fujimaki, Lawrence Carin

We propose a scalable algorithm for model selection in sigmoid belief networks (SBNs), based on the factorized asymptotic Bayesian (FAB) framework.

Model Selection

Targeting EEG/LFP Synchrony with Neural Nets

no code implementations NeurIPS 2017 Yitong Li, Michael Murias, Samantha Major, Geraldine Dawson, Kafui Dzirasa, Lawrence Carin, David E. Carlson

We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are “big” in terms of the size of recorded data but rarely have sufficient labels required to train complex models (e.g., conventional deep learning methods).

EEG

Adversarial Symmetric Variational Autoencoder

no code implementations NeurIPS 2017 Yunchen Pu, Wei-Yao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li, Lawrence Carin

A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data.

Benefits from Superposed Hawkes Processes

no code implementations 14 Oct 2017 Hongteng Xu, Dixin Luo, Xu Chen, Lawrence Carin

The superposition of Hawkes processes is demonstrated to be beneficial for tightening the upper bound of excess risk under certain conditions, and we show the feasibility of the benefit in typical situations.

Point Processes Recommendation Systems

Learning Registered Point Processes from Idiosyncratic Observations

no code implementations ICML 2018 Hongteng Xu, Lawrence Carin, Hongyuan Zha

A parametric point process model is developed, with modeling based on the assumption that sequential observations often share latent phenomena, while also possessing idiosyncratic effects.

Point Processes

Deconvolutional Latent-Variable Model for Text Sequence Matching

no code implementations 21 Sep 2017 Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, Lawrence Carin

A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives.

Text Matching

Triangle Generative Adversarial Networks

1 code implementation NeurIPS 2017 Zhe Gan, Liqun Chen, Wei-Yao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, Lawrence Carin

The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs.

Image-to-Image Translation Semi-Supervised Image Classification +1

A Probabilistic Framework for Nonlinearities in Stochastic Neural Networks

no code implementations NeurIPS 2017 Qinliang Su, Xuejun Liao, Lawrence Carin

We present a probabilistic framework for nonlinearities, based on doubly truncated Gaussian distributions.

An inner-loop free solution to inverse problems using deep neural networks

no code implementations NeurIPS 2017 Qi Wei, Kai Fan, Lawrence Carin, Katherine A. Heller

For matrix inversion in the second sub-problem, we learn a convolutional neural network to approximate the matrix inversion, i.e., the inverse mapping is learned by feeding the input through the learned forward network.

Denoising

Symmetric Variational Autoencoder and Connections to Adversarial Learning

2 code implementations 6 Sep 2017 Liqun Chen, Shuyang Dai, Yunchen Pu, Chunyuan Li, Qinliang Su, Lawrence Carin

A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence.

ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching

5 code implementations NeurIPS 2017 Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, Lawrence Carin

We investigate the non-identifiability issues associated with bidirectional adversarial training for joint distribution matching.

Continuous-Time Flows for Efficient Inference and Density Estimation

no code implementations ICML 2018 Changyou Chen, Chunyuan Li, Liqun Chen, Wenlin Wang, Yunchen Pu, Lawrence Carin

Distinct from normalizing flows and GANs, CTFs can be adopted to achieve the above two goals in one framework, with theoretical guarantees.

Density Estimation

A Convergence Analysis for A Class of Practical Variance-Reduction Stochastic Gradient MCMC

no code implementations 4 Sep 2017 Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, Lawrence Carin

However, there has been little theoretical analysis of the impact of minibatch size to the algorithm's convergence rate.

Stochastic Optimization

Deconvolutional Paragraph Representation Learning

4 code implementations NeurIPS 2017 Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, Lawrence Carin

Learning latent representations from long text sequences is an important first step in many natural language processing applications.

General Classification Representation Learning +1

Stochastic Gradient Monomial Gamma Sampler

no code implementations ICML 2017 Yizhe Zhang, Changyou Chen, Zhe Gan, Ricardo Henao, Lawrence Carin

A framework is proposed to improve the sampling efficiency of stochastic gradient MCMC, based on Hamiltonian Monte Carlo.

VAE Learning via Stein Variational Gradient Descent

no code implementations NeurIPS 2017 Yunchen Pu, Zhe Gan, Ricardo Henao, Chunyuan Li, Shaobo Han, Lawrence Carin

A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent.

Compressive Sensing via Convolutional Factor Analysis

no code implementations 11 Jan 2017 Xin Yuan, Yunchen Pu, Lawrence Carin

During reconstruction and testing, we project the upper layer dictionary to the data level and only a single layer deconvolution is required.

Compressive Sensing General Classification

Linear Feature Encoding for Reinforcement Learning

no code implementations NeurIPS 2016 Zhao Song, Ronald E. Parr, Xuejun Liao, Lawrence Carin

We then develop a supervised linear feature encoding method that is motivated by insights from linear value function approximation theory, as well as empirical successes from deep RL.

reinforcement-learning reinforcement Learning

Learning Generic Sentence Representations Using Convolutional Neural Networks

no code implementations EMNLP 2017 Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, Lawrence Carin

We propose a new encoder-decoder approach to learn distributed sentence representations that are applicable to multiple purposes.

Semantic Compositional Networks for Visual Captioning

1 code implementation CVPR 2017 Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, Li Deng

The degree to which each member of the ensemble is used to generate an image caption is tied to the image-dependent probability of the corresponding tag.

Image Captioning Semantic Composition +1

Adaptive Feature Abstraction for Translating Video to Text

no code implementations 23 Nov 2016 Yunchen Pu, Martin Renqiang Min, Zhe Gan, Lawrence Carin

Previous models for video captioning often use the output from a specific layer of a Convolutional Neural Network (CNN) as video features.

Video Captioning

Unsupervised Learning with Truncated Gaussian Graphical Models

no code implementations 15 Nov 2016 Qinliang Su, Xuejun Liao, Chunyuan Li, Zhe Gan, Lawrence Carin

Gaussian graphical models (GGMs) are widely used for statistical modeling, because of ease of inference and the ubiquitous use of the normal distribution in practical approximations.

Unsupervised Pre-training