Search Results for author: Yizhe Zhang

Found 106 papers, 43 papers with code

What Makes Good In-Context Examples for GPT-3?

no code implementations DeeLIO (ACL) 2022 Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3’s in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically-similar to a test query sample to formulate its corresponding prompt.

In-Context Learning Natural Language Understanding +4
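The retrieval strategy described above can be sketched as a k-nearest-neighbor lookup over sentence embeddings. In this minimal sketch the example texts and 2-d embeddings are hypothetical stand-ins for a real sentence encoder's output:

```python
from math import sqrt

# Hypothetical training pool with toy 2-d embeddings (a real encoder
# would produce high-dimensional vectors).
train_examples = ["great movie", "terrible plot", "loved it", "boring film"]
train_emb = [[0.9, 0.1], [0.1, 0.9], [0.8, 0.2], [0.2, 0.8]]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve_in_context(query_emb, k=2):
    """Pick the k training examples most similar to the test query,
    to be placed in the prompt as in-context demonstrations."""
    ranked = sorted(range(len(train_emb)),
                    key=lambda i: cosine(train_emb[i], query_emb),
                    reverse=True)
    return [train_examples[i] for i in ranked[:k]]
```

A query embedded near the positive-review region would then retrieve the semantically closest demonstrations rather than a random sample.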

Automatic and Universal Prompt Injection Attacks against Large Language Models

1 code implementation 7 Mar 2024 Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao

Large Language Models (LLMs) excel in processing and generating human language, powered by their ability to interpret and follow instructions.

How Far Are We from Intelligent Visual Deductive Reasoning?

1 code implementation 7 Mar 2024 Yizhe Zhang, He Bai, Ruixiang Zhang, Jiatao Gu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly

Vision-Language Models (VLMs) such as GPT-4V have recently demonstrated incredible strides on diverse vision language tasks.

In-Context Learning Visual Reasoning

Exploring the Design of Generative AI in Supporting Music-based Reminiscence for Older Adults

no code implementations 3 Mar 2024 Yucheng Jin, Wanling Cai, Li Chen, Yizhe Zhang, Gavin Doherty, Tonglin Jiang

Music-based reminiscence has the potential to positively impact the psychological well-being of older adults.

Divide-or-Conquer? Which Part Should You Distill Your LLM?

no code implementations 22 Feb 2024 Zhuofeng Wu, He Bai, Aonan Zhang, Jiatao Gu, VG Vinod Vydiswaran, Navdeep Jaitly, Yizhe Zhang

Recent methods have demonstrated that Large Language Models (LLMs) can solve reasoning tasks better when they are encouraged to solve subtasks of the main task first.

Problem Decomposition

Executable Code Actions Elicit Better LLM Agents

1 code implementation 1 Feb 2024 Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji

LLM agents are typically prompted to produce actions by generating JSON or text in a pre-defined format, which is usually limited by a constrained action space (e.g., the scope of pre-defined tools) and restricted flexibility (e.g., the inability to compose multiple tools).

Language Modelling Large Language Model
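The contrast drawn above can be illustrated by executing model-emitted Python in a namespace that exposes the agent's tools. This toy runner (the tool and names are hypothetical, and a real system would need proper sandboxing) shows how a code action composes tools with ordinary control flow, unlike a single JSON tool call:

```python
import io
import contextlib

def run_code_action(code: str, tools: dict) -> str:
    """Execute an LLM-emitted Python code action with the agent's tools
    in scope, capturing anything it prints as the observation.
    (Illustration only: exec of untrusted model output is unsafe.)"""
    ns = dict(tools)  # tool functions become callables in the action's scope
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, ns)
    return buf.getvalue()

# Hypothetical tool; a code action can loop over it, which a single
# fixed-format JSON call cannot express.
tools = {"search": lambda q: f"results for {q!r}"}
obs = run_code_action("for q in ['a', 'b']:\n    print(search(q))", tools)
```

The observation string is then fed back to the model for the next turn.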

Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling

no code implementations 29 Jan 2024 Pratyush Maini, Skyler Seto, He Bai, David Grangier, Yizhe Zhang, Navdeep Jaitly

Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased.

Language Modelling

Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Polyp Segmentation

no code implementations 26 Dec 2023 Yunqi Gu, Tao Zhou, Yizhe Zhang, Yi Zhou, Kelei He, Chen Gong, Huazhu Fu

To address scale variation, we present a scale-enhanced consistency constraint, which ensures consistency in the segmentation maps generated from the same input image at different scales.

Segmentation

KGLens: A Parameterized Knowledge Graph Solution to Assess What an LLM Does and Doesn't Know

no code implementations 15 Dec 2023 Shangshang Zheng, He Bai, Yizhe Zhang, Yi Su, Xiaochuan Niu, Navdeep Jaitly

Measuring the alignment between a Knowledge Graph (KG) and Large Language Models (LLMs) is an effective method to assess the factualness and identify the knowledge blind spots of LLMs.

Knowledge Graphs

Segment Anything Model-guided Collaborative Learning Network for Scribble-supervised Polyp Segmentation

no code implementations 1 Dec 2023 Yiming Zhao, Tao Zhou, Yunqi Gu, Yi Zhou, Yizhe Zhang, Ye Wu, Huazhu Fu

Specifically, we first propose a Cross-level Enhancement and Aggregation Network (CEA-Net) for weakly-supervised polyp segmentation.

Segmentation Weakly supervised segmentation

Matryoshka Diffusion Models

no code implementations 23 Oct 2023 Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Josh Susskind, Navdeep Jaitly

Diffusion models are the de facto approach for generating high-quality images and videos, but learning high-dimensional models remains a formidable task due to computational and optimization challenges.

Image Generation Zero-shot Generalization

Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games

1 code implementation 2 Oct 2023 Yizhe Zhang, Jiarui Lu, Navdeep Jaitly

In this paper, we offer a surrogate problem which assesses an LLM's capability to deduce an entity unknown to itself, but revealed to a judge, by asking the judge a series of queries.

Edge-aware Feature Aggregation Network for Polyp Segmentation

no code implementations 19 Sep 2023 Tao Zhou, Yizhe Zhang, Geng Chen, Yi Zhou, Ye Wu, Deng-Ping Fan

Besides, a Scale-aware Convolution Module (SCM) is proposed to learn scale-aware features by using dilated convolutions with different ratios, in order to effectively deal with scale variation.

Segmentation

RR-CP: Reliable-Region-Based Conformal Prediction for Trustworthy Medical Image Classification

no code implementations 9 Sep 2023 Yizhe Zhang, Shuo Wang, Yejia Zhang, Danny Z. Chen

Conformal prediction (CP) generates a set of predictions for a given test sample such that the prediction set almost always contains the true label (e.g., 99.5% of the time).

Conformal Prediction Decision Making +2
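For context, the standard split conformal recipe that such methods build on can be sketched as follows. This is the generic CP baseline, not the paper's reliable-region method, and the scores below are illustrative:

```python
import math

def conformal_threshold(cal_scores, alpha=0.005):
    """Split conformal prediction: cal_scores[i] is the nonconformity
    score (e.g., 1 - softmax probability) of the TRUE label on held-out
    calibration sample i. Returns the cutoff giving ~(1 - alpha)
    marginal coverage on exchangeable new samples."""
    n = len(cal_scores)
    # Finite-sample-corrected quantile rank from the split-CP recipe.
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(probs, threshold):
    """Include every label whose nonconformity score is within the cutoff."""
    return [label for label, p in probs.items() if 1 - p <= threshold]
```

With alpha = 0.005, the resulting sets contain the true label roughly 99.5% of the time, matching the coverage example in the abstract.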

SamDSK: Combining Segment Anything Model with Domain-Specific Knowledge for Semi-Supervised Learning in Medical Image Segmentation

1 code implementation 26 Aug 2023 Yizhe Zhang, Tao Zhou, Shuo Wang, Ye Wu, Pengfei Gu, Danny Z. Chen

Our new method is iterative and consists of two main stages: (1) segmentation model training; (2) expanding the labeled set by using the trained segmentation model, an unlabeled set, SAM, and domain-specific knowledge.

Image Segmentation Lesion Segmentation +3

PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model

1 code implementation NeurIPS 2023 Yizhe Zhang, Jiatao Gu, Zhuofeng Wu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly

Autoregressive models for text sometimes generate repetitive and low-quality output because errors accumulate during the steps of generation.

Denoising

Can SAM Segment Polyps?

1 code implementation 15 Apr 2023 Tao Zhou, Yizhe Zhang, Yi Zhou, Ye Wu, Chen Gong

Recently, Meta AI Research released the general-purpose Segment Anything Model (SAM), which has demonstrated promising performance in several segmentation tasks.

Segmentation

Stabilizing Transformer Training by Preventing Attention Entropy Collapse

1 code implementation 11 Mar 2023 Shuangfei Zhai, Tatiana Likhomanenko, Etai Littwin, Dan Busbridge, Jason Ramapuram, Yizhe Zhang, Jiatao Gu, Josh Susskind

We show that $\sigma$Reparam provides stability and robustness with respect to the choice of hyperparameters, going so far as enabling training of (a) a Vision Transformer to competitive performance without warmup, weight decay, layer normalization, or adaptive optimizers; (b) deep architectures in machine translation; and (c) speech recognition models to competitive performance without warmup and adaptive optimizers.

Automatic Speech Recognition Image Classification +6
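$\sigma$Reparam reparameterizes each weight matrix as $\hat{W} = (\gamma/\sigma(W))\,W$, where $\sigma(W)$ is the spectral norm and $\gamma$ is learnable. A minimal sketch, with $\gamma$ held fixed and a simple power-iteration estimate of $\sigma(W)$ (both simplifications for illustration):

```python
def spectral_norm(W, iters=50):
    """Estimate the largest singular value of matrix W (list of rows)
    by power iteration on W^T W."""
    n = len(W[0])
    v = [1.0 / n] * n
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
        v = [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
    return sum(x * x for x in u) ** 0.5

def sigma_reparam(W, gamma=1.0):
    """Rescale W so its spectral norm equals gamma (learnable in the
    paper; a fixed constant here)."""
    s = spectral_norm(W)
    return [[gamma * w / s for w in row] for row in W]
```

Controlling the spectral norm of the attention projections is what prevents the attention-entropy collapse the title refers to.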

Interactive Text Generation

no code implementations 2 Mar 2023 Felix Faltings, Michel Galley, Baolin Peng, Kianté Brantley, Weixin Cai, Yizhe Zhang, Jianfeng Gao, Bill Dolan

Unfortunately, this means most of the research on text, code, and image generation has focused on non-interactive settings, whereby the model is expected to get everything right without accounting for any input from a user who may be willing to help.

Image Generation Imitation Learning +1

GPT4MIA: Utilizing Generative Pre-trained Transformer (GPT-3) as A Plug-and-Play Transductive Model for Medical Image Analysis

no code implementations 17 Feb 2023 Yizhe Zhang, Danny Z. Chen

In this paper, we propose a novel approach (called GPT4MIA) that utilizes Generative Pre-trained Transformer (GPT) as a plug-and-play transductive inference tool for medical image analysis (MIA).

Image Classification Language Modelling

EndoBoost: a plug-and-play module for false positive suppression during computer-aided polyp detection in real-world colonoscopy (with dataset)

no code implementations 23 Dec 2022 Haoran Wang, Yan Zhu, Wenzheng Qin, Yizhe Zhang, Pinghong Zhou, QuanLin Li, Shuo Wang, Zhijian Song

In addition, the released dataset can be used to perform 'stress' tests on established detection systems and encourages further research toward robust and reliable computer-aided endoscopic image analysis.

Anomaly Detection Density Estimation

Bridging the Training-Inference Gap for Dense Phrase Retrieval

no code implementations 25 Oct 2022 Gyuwan Kim, Jinhyuk Lee, Barlas Oguz, Wenhan Xiong, Yizhe Zhang, Yashar Mehdad, William Yang Wang

Building dense retrievers requires a series of standard procedures, including training and validating neural models and creating indexes for efficient search.

Open-Domain Question Answering Passage Retrieval +1

f-DM: A Multi-stage Diffusion Model via Progressive Signal Transformation

no code implementations 10 Oct 2022 Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Miguel Angel Bautista, Josh Susskind

In this work, we propose f-DM, a generalized family of DMs which allows progressive signal transformation.

Image Generation

Data-Driven Deep Supervision for Skin Lesion Classification

no code implementations 4 Sep 2022 Suraj Mishra, Yizhe Zhang, Li Zhang, Tianyu Zhang, X. Sharon Hu, Danny Z. Chen

Specifically, we analyze the convolutional network's behavior (field-of-view) to find the location of deep supervision for improved feature extraction.

Classification Lesion Classification +2

Usable Region Estimate for Assessing Practical Usability of Medical Image Segmentation Models

1 code implementation 1 Jul 2022 Yizhe Zhang, Suraj Mishra, Peixian Liang, Hao Zheng, Danny Z. Chen

We aim to quantitatively measure the practical usability of medical image segmentation models: to what extent, how often, and on which samples a model's predictions can be used/trusted.

Image Segmentation Medical Image Segmentation +1

H-EMD: A Hierarchical Earth Mover's Distance Method for Instance Segmentation

no code implementations 2 Jun 2022 Peixian Liang, Yizhe Zhang, Yifan Ding, Jianxu Chen, Chinedu S. Madukoma, Tim Weninger, Joshua D. Shrout, Danny Z. Chen

We observe that probability maps produced by DL semantic segmentation models can be used to generate many possible instance candidates, and accurate instance segmentation can be achieved by selecting from them a set of "optimized" candidates as output instances.

Image Segmentation Instance Segmentation +2

Linearizing Transformer with Key-Value Memory

no code implementations 23 Mar 2022 Yizhe Zhang, Deng Cai

We demonstrate that MemSizer provides an improved balance between efficiency and accuracy over the vanilla transformer and other efficient transformer variants in three typical sequence generation tasks, including machine translation, abstractive text summarization, and language modeling.

Abstractive Text Summarization Language Modelling +2

Towards More Efficient Insertion Transformer with Fractional Positional Encoding

1 code implementation 12 Dec 2021 Zhisong Zhang, Yizhe Zhang, Bill Dolan

Nevertheless, due to the incompatibility between absolute positional encoding and insertion-based generation schemes, it needs to refresh the encoding of every token in the generated partial hypothesis at each step, which could be costly.

Text Generation

HS3: Learning with Proper Task Complexity in Hierarchically Supervised Semantic Segmentation

no code implementations 3 Nov 2021 Shubhankar Borse, Hong Cai, Yizhe Zhang, Fatih Porikli

While deeply supervised networks are common in recent literature, they typically impose the same learning objective on all transitional layers despite their varying representation powers.

Ranked #4 on Semantic Segmentation on Cityscapes test (using extra training data)

Segmentation Semantic Segmentation

X-Distill: Improving Self-Supervised Monocular Depth via Cross-Task Distillation

no code implementations 24 Oct 2021 Hong Cai, Janarbek Matai, Shubhankar Borse, Yizhe Zhang, Amin Ansari, Fatih Porikli

In order to enable such knowledge distillation across two different visual tasks, we introduce a small, trainable network that translates the predicted depth map to a semantic segmentation map, which can then be supervised by the teacher network.

Knowledge Distillation Monocular Depth Estimation +2

Perceptual Consistency in Video Segmentation

no code implementations 24 Oct 2021 Yizhe Zhang, Shubhankar Borse, Hong Cai, Ying Wang, Ning Bi, Xiaoyun Jiang, Fatih Porikli

More specifically, by measuring the perceptual consistency between the predicted segmentation and the available ground truth on a nearby frame and combining it with the segmentation confidence, we can accurately assess the classification correctness on each pixel.

Segmentation Semantic Segmentation +2

AuxAdapt: Stable and Efficient Test-Time Adaptation for Temporally Consistent Video Semantic Segmentation

1 code implementation 24 Oct 2021 Yizhe Zhang, Shubhankar Borse, Hong Cai, Fatih Porikli

Since inconsistency mainly arises from the model's uncertainty in its output, we propose an adaptation scheme where the model learns from its own segmentation decisions as it streams a video, which allows producing more confident and temporally consistent labeling for similarly-looking pixels across frames.

Optical Flow Estimation Segmentation +4

Automatic Document Sketching: Generating Drafts from Analogous Texts

no code implementations Findings (ACL) 2021 Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Bill Dolan

The advent of large pre-trained language models has made it possible to make high-quality predictions on how to add or change a sentence in a document.

Reinforcement Learning (RL) Sentence +1

RetGen: A Joint framework for Retrieval and Grounded Text Generation Modeling

1 code implementation 14 May 2021 Yizhe Zhang, Siqi Sun, Xiang Gao, Yuwei Fang, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan

We propose a framework that alleviates this data constraint by jointly training a grounded generator and document retriever on the language model signal.

Dialogue Generation Language Modelling +1

A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation

2 code implementations ACL 2022 Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, Bill Dolan

Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications.

Hallucination Sentence +1

An Adversarially-Learned Turing Test for Dialog Generation Models

1 code implementation 16 Apr 2021 Xiang Gao, Yizhe Zhang, Michel Galley, Bill Dolan

To alleviate this risk, we propose an adversarial training approach to learn a robust model, ATT (Adversarial Turing Test), that discriminates machine-generated responses from human-written replies.

Dialogue Evaluation

InverseForm: A Loss Function for Structured Boundary-Aware Segmentation

1 code implementation CVPR 2021 Shubhankar Borse, Ying Wang, Yizhe Zhang, Fatih Porikli

We present a novel boundary-aware loss term for semantic segmentation using an inverse-transformation network, which efficiently learns the degree of parametric transformations between estimated and target boundaries.

Ranked #5 on Semantic Segmentation on Cityscapes test (using extra training data)

Segmentation Semantic Segmentation

Finetuning Pretrained Transformers into RNNs

1 code implementation EMNLP 2021 Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith

Specifically, we propose a swap-then-finetune procedure: in an off-the-shelf pretrained transformer, we replace the softmax attention with its linear-complexity recurrent alternative and then finetune.

Language Modelling Machine Translation +1
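The linear-complexity recurrent alternative referred to above can be illustrated with generic causal linear attention, which replaces the softmax by a feature map and running sums so each decoding step costs O(d·d_v) rather than O(n·d). The elu(x)+1 feature map below is a common choice in this family of methods, not necessarily this paper's exact formulation:

```python
import math

def feature(x):
    """elu(x) + 1 feature map, a common positive kernel choice."""
    return [xi + 1 if xi > 0 else math.exp(xi) for xi in x]

def linear_attention(queries, keys, values):
    """Causal linear attention as an RNN: maintain running sums
    S = sum_t phi(k_t) v_t^T and z = sum_t phi(k_t), then read out
    out_t = phi(q_t)^T S / (phi(q_t)^T z)."""
    d, dv = len(queries[0]), len(values[0])
    S = [[0.0] * dv for _ in range(d)]
    z = [0.0] * d
    out = []
    for q, k, v in zip(queries, keys, values):
        phi_q, phi_k = feature(q), feature(k)
        for i in range(d):           # recurrent state update
            z[i] += phi_k[i]
            for j in range(dv):
                S[i][j] += phi_k[i] * v[j]
        denom = sum(phi_q[i] * z[i] for i in range(d))
        out.append([sum(phi_q[i] * S[i][j] for i in range(d)) / denom
                    for j in range(dv)])
    return out
```

Because the state (S, z) is fixed-size, decoding no longer needs to attend over the whole prefix, which is what makes the finetuned model run as an RNN.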

Data Augmentation for Abstractive Query-Focused Multi-Document Summarization

1 code implementation 2 Mar 2021 Ramakanth Pasunuru, Asli Celikyilmaz, Michel Galley, Chenyan Xiong, Yizhe Zhang, Mohit Bansal, Jianfeng Gao

The progress in Query-focused Multi-Document Summarization (QMDS) has been limited by the lack of sufficient largescale high-quality training datasets.

Data Augmentation Document Summarization +1

What Makes Good In-Context Examples for GPT-3?

3 code implementations 17 Jan 2021 Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically-similar to a test sample to formulate its corresponding prompt.

Few-Shot Learning Natural Language Understanding +4

SDA: Improving Text Generation with Self Data Augmentation

no code implementations 2 Jan 2021 Ping Yu, Ruiyi Zhang, Yang Zhao, Yizhe Zhang, Chunyuan Li, Changyou Chen

Data augmentation has been widely used to improve deep neural networks in many research fields, such as computer vision.

Data Augmentation Imitation Learning +2

Towards Robust and Efficient Contrastive Textual Representation Learning

no code implementations 1 Jan 2021 Liqun Chen, Yizhe Zhang, Dianqi Li, Chenyang Tao, Dong Wang, Lawrence Carin

There has been growing interest in representation learning for text data, based on theoretical arguments and empirical evidence.

Contrastive Learning Representation Learning

Narrative Incoherence Detection

no code implementations 21 Dec 2020 Deng Cai, Yizhe Zhang, Yichen Huang, Wai Lam, Bill Dolan

We propose the task of narrative incoherence detection as a new arena for inter-sentential semantic understanding: Given a multi-sentence narrative, decide whether there exist any semantic discrepancies in the narrative flow.

Sentence Sentence Embedding

Unlabeled Data Guided Semi-supervised Histopathology Image Segmentation

no code implementations 17 Dec 2020 Hongxiao Wang, Hao Zheng, Jianxu Chen, Lin Yang, Yizhe Zhang, Danny Z. Chen

Second, we devise an effective data selection policy for judiciously sampling the generated images: (1) to make the generated training set better cover the dataset, the clusters that are underrepresented in the original training set are covered more; (2) to make the training process more effective, we identify and oversample the images of "hard cases" in the data for which annotated training data may be scarce.

Clustering Image Generation +3

Contextualized Perturbation for Textual Adversarial Attack

1 code implementation NAACL 2021 Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan

Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.

Adversarial Attack Language Modelling

Weakly supervised cross-domain alignment with optimal transport

no code implementations 14 Aug 2020 Siyang Yuan, Ke Bai, Liqun Chen, Yizhe Zhang, Chenyang Tao, Chunyuan Li, Guoyin Wang, Ricardo Henao, Lawrence Carin

Cross-domain alignment between image objects and text sequences is key to many visual-language tasks, and it poses a fundamental challenge to both computer vision and natural language processing.

Improving Disentangled Text Representation Learning with Information-Theoretic Guidance

no code implementations ACL 2020 Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, Lawrence Carin

Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc.

Conditional Text Generation Representation Learning +2

POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training

1 code implementation EMNLP 2020 Yizhe Zhang, Guoyin Wang, Chunyuan Li, Zhe Gan, Chris Brockett, Bill Dolan

Large-scale pre-trained language models, such as BERT and GPT-2, have achieved excellent performance in language representation learning and free-form text generation.

Language Modelling Representation Learning +1

A Controllable Model of Grounded Response Generation

1 code implementation 1 May 2020 Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, Bill Dolan

Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses.

Informativeness Response Generation

Contextual Text Style Transfer

no code implementations Findings of the Association for Computational Linguistics 2020 Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, Jingjing Liu

To realize high-quality style transfer with natural context preservation, we propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders for each input sentence and its surrounding context.

Sentence Style Transfer +2

Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space

1 code implementation EMNLP 2020 Chunyuan Li, Xiang Gao, Yuan Li, Baolin Peng, Xiujun Li, Yizhe Zhang, Jianfeng Gao

We hope that our first pre-trained big VAE language model and its results can help the NLP community renew its interest in deep generative models in the era of large-scale pre-training, and make these principled methods more practical.

Language Modelling Representation Learning +1

Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation

1 code implementation ICLR 2020 Xinjie Fan, Yizhe Zhang, Zhendong Wang, Mingyuan Zhou

To stabilize this method, we adapt to contextual generation of categorical sequences a policy gradient estimator, which evaluates a set of correlated Monte Carlo (MC) rollouts for variance control.

Image Captioning Program Synthesis

INSET: Sentence Infilling with INter-SEntential Transformer

1 code implementation ACL 2020 Yichen Huang, Yizhe Zhang, Oussama Elachqar, Yu Cheng

Missing sentence generation (or sentence infilling) fosters a wide range of applications in natural language generation, such as document auto-completion and meeting note expansion.

Natural Language Understanding Sentence +1

Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation

no code implementations 11 Sep 2019 Shuyang Dai, Yu Cheng, Yizhe Zhang, Zhe Gan, Jingjing Liu, Lawrence Carin

Recent unsupervised approaches to domain adaptation primarily focus on minimizing the gap between the source and the target domains through refining the feature generator, in order to learn a better alignment between the two domains.

domain classification Unsupervised Domain Adaptation

Structuring Latent Spaces for Stylized Response Generation

1 code implementation IJCNLP 2019 Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan

This structure allows the system to generate stylized relevant responses by sampling in the neighborhood of the conversation model prediction, and continuously control the style level.

Response Generation Style Transfer

Unsupervised Dialogue Spectrum Generation for Log Dialogue Ranking

no code implementations WS 2019 Xinnuo Xu, Yizhe Zhang, Lars Liden, Sungjin Lee

Although the data-driven approaches of some recent bot-building platforms make it possible for a wide range of users to easily create dialogue systems, those platforms don't offer tools for quickly identifying which log dialogues contain problems.

Second-order Non-local Attention Networks for Person Re-identification

no code implementations 31 Aug 2019 Bryan Xia, Yuan Gong, Yizhe Zhang, Christian Poellabauer

Recent efforts have shown promising results for person re-identification by designing part-based architectures to allow a neural network to learn discriminative representations from semantically coherent parts.

Person Re-Identification

Decompose-and-Integrate Learning for Multi-class Segmentation in Medical Images

no code implementations 7 Jun 2019 Yizhe Zhang, Michael T. C. Ying, Danny Z. Chen

An ablation study confirms the effectiveness of our proposed learning scheme for medical images.

Segmentation

Consistent Dialogue Generation with Self-supervised Feature Learning

1 code implementation 13 Mar 2019 Yizhe Zhang, Xiang Gao, Sungjin Lee, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan

Generating responses that are consistent with the dialogue context is one of the central challenges in building engaging conversational agents.

Dialogue Generation Response Generation

Jointly Optimizing Diversity and Relevance in Neural Response Generation

no code implementations NAACL 2019 Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan

In this paper, we propose a SpaceFusion model to jointly optimize diversity and relevance that essentially fuses the latent space of a sequence-to-sequence model and that of an autoencoder model by leveraging novel regularization terms.

Dialogue Generation Response Generation

SPDA: Superpixel-based Data Augmentation for Biomedical Image Segmentation

no code implementations 28 Feb 2019 Yizhe Zhang, Lin Yang, Hao Zheng, Peixian Liang, Colleen Mangold, Raquel G. Loreto, David P. Hughes, Danny Z. Chen

To better mimic human visual perception, we think it is desirable for the deep learning model to be able to perceive not only raw images but also SP images.

Data Augmentation Image Segmentation +1

Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models

no code implementations ACL 2019 Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, Lawrence Carin

Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation with latent variables.

Sentence Text Generation

Cascade Decoder: A Universal Decoding Method for Biomedical Image Segmentation

no code implementations 15 Jan 2019 Peixian Liang, Jianxu Chen, Hao Zheng, Lin Yang, Yizhe Zhang, Danny Z. Chen

The cascade decoder structure aims to conduct more effective decoding of hierarchically encoded features and is more compatible with common encoders than the known decoders.

Image Segmentation Segmentation +1

A New Ensemble Learning Framework for 3D Biomedical Image Segmentation

1 code implementation 10 Dec 2018 Hao Zheng, Yizhe Zhang, Lin Yang, Peixian Liang, Zhuo Zhao, Chaoli Wang, Danny Z. Chen

In this paper, we propose a new ensemble learning framework for 3D biomedical image segmentation that combines the merits of 2D and 3D models.

3D Medical Imaging Segmentation Ensemble Learning +3

Hierarchically-Structured Variational Autoencoders for Long Text Generation

no code implementations 27 Sep 2018 Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Lawrence Carin

Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation.

Sentence Text Generation

JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets

2 code implementations ICML 2018 Yunchen Pu, Shuyang Dai, Zhe Gan, Wei-Yao Wang, Guoyin Wang, Yizhe Zhang, Ricardo Henao, Lawrence Carin

Distinct from most existing approaches, which only learn conditional distributions, the proposed model aims to learn a joint distribution of multiple random variables (domains).

Generative Adversarial Network

Joint Embedding of Words and Labels for Text Classification

2 code implementations ACL 2018 Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin

Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences.

General Classification Sentiment Analysis +2

A New Registration Approach for Dynamic Analysis of Calcium Signals in Organs

no code implementations 1 Feb 2018 Peixian Liang, Jianxu Chen, Pavel A. Brodskiy, Qinfeng Wu, Yejia Zhang, Yizhe Zhang, Lin Yang, Jeremiah J. Zartman, Danny Z. Chen

A key to analyzing spatial-temporal patterns of $Ca^{2+}$ signal waves is to accurately align the pouches across image sequences.

On the Use of Word Embeddings Alone to Represent Natural Language Sequences

no code implementations ICLR 2018 Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Ricardo Henao, Lawrence Carin

In this paper, we conduct an extensive comparative study between Simple Word Embeddings-based Models (SWEMs), with no compositional parameters, relative to employing word embeddings within RNN/CNN-based models.

Sentence Word Embeddings

Deconvolutional Latent-Variable Model for Text Sequence Matching

no code implementations 21 Sep 2017 Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, Lawrence Carin

A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives.

Sentence Text Matching

Triangle Generative Adversarial Networks

1 code implementation NeurIPS 2017 Zhe Gan, Liqun Chen, Wei-Yao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, Lawrence Carin

The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs.

Attribute Generative Adversarial Network +3

A Convergence Analysis for A Class of Practical Variance-Reduction Stochastic Gradient MCMC

no code implementations 4 Sep 2017 Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, Lawrence Carin

However, there has been little theoretical analysis of the impact of minibatch size on the algorithm's convergence rate.

Stochastic Optimization

Deconvolutional Paragraph Representation Learning

4 code implementations NeurIPS 2017 Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, Lawrence Carin

Learning latent representations from long text sequences is an important first step in many natural language processing applications.

General Classification Representation Learning +1

Stochastic Gradient Monomial Gamma Sampler

no code implementations ICML 2017 Yizhe Zhang, Changyou Chen, Zhe Gan, Ricardo Henao, Lawrence Carin

A framework is proposed to improve the sampling efficiency of stochastic gradient MCMC, based on Hamiltonian Monte Carlo.

Stochastic Gradient MCMC with Stale Gradients

no code implementations NeurIPS 2016 Changyou Chen, Nan Ding, Chunyuan Li, Yizhe Zhang, Lawrence Carin

In this paper we develop theory to show that while the bias and MSE of an SG-MCMC algorithm depend on the staleness of stochastic gradients, its estimation variance (relative to the expected estimate, based on a prescribed number of samples) is independent of it.

Towards Unifying Hamiltonian Monte Carlo and Slice Sampling

no code implementations NeurIPS 2016 Yizhe Zhang, Xiangyu Wang, Changyou Chen, Ricardo Henao, Kai Fan, Lawrence Carin

We unify slice sampling and Hamiltonian Monte Carlo (HMC) sampling, demonstrating their connection via the Hamilton-Jacobi equation from Hamiltonian mechanics.

Learning a Hybrid Architecture for Sequence Regression and Annotation

no code implementations 16 Dec 2015 Yizhe Zhang, Ricardo Henao, Lawrence Carin, Jianling Zhong, Alexander J. Hartemink

When learning a hidden Markov model (HMM), sequential observations can often be complemented by real-valued summary response variables generated from the path of hidden states.

regression
