Search Results for author: Shujian Zhang

Found 9 papers, 6 papers with code

ALLSH: Active Learning Guided by Local Sensitivity and Hardness

1 code implementation • 10 May 2022 • Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou

Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data.

Active Learning • Few-Shot Learning
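
As a loose illustration of the local-sensitivity idea behind ALLSH, the sketch below ranks an unlabeled pool by how much a classifier's prediction shifts under a small input perturbation and sends the most sensitive examples for labeling. This is a minimal sketch, not the paper's released interface: `model`, `pool`, and `augment` are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

def select_by_local_sensitivity(model, pool, augment, k):
    """Rank unlabeled examples by the KL divergence between the model's
    prediction on each example and on a local perturbation of it, then
    return the k most sensitive indices for annotation.
    Illustrative acquisition rule; all arguments are stand-ins."""
    scores = []
    with torch.no_grad():
        for x in pool:
            p = F.softmax(model(x), dim=-1)           # prediction on the original
            q = F.softmax(model(augment(x)), dim=-1)  # prediction on a neighbor
            kl = (p * (p / q).log()).sum(dim=-1)      # KL(p || q): local sensitivity
            scores.append(kl.mean().item())
    return sorted(range(len(pool)), key=lambda i: -scores[i])[:k]
```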

FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization

1 code implementation • 2 Dec 2021 • Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, Qiang Liu

We approach text-to-image generation by combining the power of the pre-trained CLIP representation with an off-the-shelf image generator (GAN), optimizing in the latent space of the GAN to find images that achieve the maximum CLIP score for the given input text.

Text-to-Image Generation • Zero-Shot Text-to-Image Generation
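
A minimal sketch of the CLIP+GAN search the snippet describes: gradient ascent on a GAN latent code to maximize CLIP similarity between the generated image and the prompt. The `generator` and CLIP encoder callables here are assumed stand-ins, and FuseDream's actual improvements to this loop (e.g. augmentation-robust scoring and multi-latent fusion) are omitted.

```python
import torch

def clip_gan_search(generator, clip_image_enc, text_emb,
                    steps=300, lr=0.05, latent_dim=128):
    """Optimize a latent code z so that generator(z) scores highly
    under CLIP for a fixed text embedding. Illustrative only."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z)                 # synthesize an image from z
        img_emb = clip_image_enc(img)      # embed it with CLIP
        score = torch.cosine_similarity(img_emb, text_emb).mean()
        (-score).backward()                # ascend the CLIP score
        opt.step()
        opt.zero_grad()
    return generator(z).detach()
```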

Alignment Attention by Matching Key and Query Distributions

1 code implementation • NeurIPS 2021 • Shujian Zhang, Xinjie Fan, Huangjie Zheng, Korawat Tanwisuth, Mingyuan Zhou

The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains.

Graph Attention • Question Answering +1

A Prototype-Oriented Framework for Unsupervised Domain Adaptation

1 code implementation • NeurIPS 2021 • Korawat Tanwisuth, Xinjie Fan, Huangjie Zheng, Shujian Zhang, Hao Zhang, Bo Chen, Mingyuan Zhou

Existing methods for unsupervised domain adaptation often rely on minimizing some statistical distance between the source and target samples in the latent space.

Unsupervised Domain Adaptation
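
For context on the baseline family this paper departs from, below is a sketch of one such statistical distance: an RBF-kernel Maximum Mean Discrepancy between batches of source and target features. This illustrates the sample-level feature-alignment baselines, not the paper's own prototype-oriented method.

```python
import torch

def mmd_rbf(source_feats, target_feats, sigma=1.0):
    """Squared Maximum Mean Discrepancy with a Gaussian kernel, a common
    'statistical distance' minimized between source and target batches
    in feature-alignment UDA baselines. Illustrative only."""
    def rbf(a, b):
        d2 = torch.cdist(a, b).pow(2)           # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma**2))  # Gaussian kernel matrix
    k_ss = rbf(source_feats, source_feats).mean()
    k_tt = rbf(target_feats, target_feats).mean()
    k_st = rbf(source_feats, target_feats).mean()
    return k_ss + k_tt - 2 * k_st               # MMD^2 estimate
```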

Crossformer: Transformer with Alternated Cross-Layer Guidance

no code implementations • 29 Sep 2021 • Shujian Zhang, Zhibin Duan, Huangjie Zheng, Pengcheng He, Bo Chen, Weizhu Chen, Mingyuan Zhou

Crossformer with states sharing not only provides the desired cross-layer guidance and regularization but also reduces the memory requirement.

Machine Translation • Node Classification +2

Learning with Different Amounts of Annotation: From Zero to Many Labels

1 code implementation • EMNLP 2021 • Shujian Zhang, Chengyue Gong, Eunsol Choi

Introducing such multi-label examples, at the cost of annotating fewer examples overall, brings clear gains on natural language inference and entity typing tasks, even when we simply first train with single-label data and then fine-tune with multi-label examples.

Data Augmentation • Entity Typing +1
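
A hedged sketch of what fine-tuning on multi-label examples can look like: cross-entropy against the empirical distribution of annotator labels instead of a single gold class. The `label_dist` name and the shapes are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def soft_label_loss(logits, label_dist):
    """Cross-entropy against a per-example distribution over labels,
    e.g. annotator vote frequencies, with shapes assumed to be
    [batch, num_classes]. Illustrative only."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(label_dist * log_probs).sum(dim=-1).mean()

# e.g. three annotators voted [entailment, entailment, neutral]:
# label_dist = torch.tensor([[2/3, 1/3, 0.0]])
```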

Bayesian Attention Belief Networks

no code implementations • 9 Jun 2021 • Shujian Zhang, Xinjie Fan, Bo Chen, Mingyuan Zhou

Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.

Machine Translation • Question Answering +2

Capturing Label Distribution: A Case Study in NLI

no code implementations • 13 Feb 2021 • Shujian Zhang, Chengyue Gong, Eunsol Choi

We depart from the standard practice of collecting a single reference for each training example, and find that collecting multiple references can achieve better accuracy under a fixed annotation budget.

Natural Language Inference

Bayesian Attention Modules

1 code implementation • NeurIPS 2020 • Xinjie Fan, Shujian Zhang, Bo Chen, Mingyuan Zhou

Attention modules, as simple and effective tools, have not only enabled deep neural networks to achieve state-of-the-art results in many domains, but also enhanced their interpretability.

Image Captioning • Machine Translation +4
