1 code implementation • 10 May 2022 • Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou
Active learning, which selects informative unlabeled examples for annotation, reduces the demand for labeled data.
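As background, a minimal sketch of one common acquisition strategy, least-confidence uncertainty sampling, is shown below; this is a generic illustration on synthetic data, not necessarily the acquisition method proposed in this paper.

```python
# Hypothetical illustration of uncertainty-based active learning,
# not the specific acquisition strategy of the paper above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(range(20))                 # small seed set of annotated examples
pool = [i for i in range(1000) if i not in labeled]

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Least-confidence sampling: query the points the model is most unsure about.
    uncertainty = 1.0 - probs.max(axis=1)
    query = [pool[i] for i in np.argsort(-uncertainty)[:10]]
    labeled += query                      # "annotate" the queried examples
    pool = [i for i in pool if i not in query]
```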
1 code implementation • 2 Dec 2021 • Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, Qiang Liu
We approach text-to-image generation by combining the power of the pretrained CLIP representation with an off-the-shelf image generator (GAN), optimizing in the latent space of the GAN to find images that achieve the maximum CLIP score for the given input text (sketched below).
Ranked #14 on Text-to-Image Generation on COCO
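The optimization pattern is easy to see in miniature. The sketch below uses toy linear stand-ins for the generator and the CLIP scorer (the real pipeline would use a pretrained GAN such as BigGAN and CLIP's image/text encoders); only the shape of the loop, gradient ascent on the latent code with frozen networks, is the point.

```python
# Toy stand-ins for the GAN generator and the CLIP scorer; the actual method
# uses a pretrained GAN and CLIP, which are too heavy to inline here.
import torch

torch.manual_seed(0)
generator = torch.nn.Linear(128, 3 * 8 * 8)    # stand-in for G: z -> image
score_head = torch.nn.Linear(3 * 8 * 8, 1)     # stand-in for CLIP(image, text)
for p in list(generator.parameters()) + list(score_head.parameters()):
    p.requires_grad_(False)                     # networks stay frozen

z = torch.randn(1, 128, requires_grad=True)    # latent code being optimized
opt = torch.optim.Adam([z], lr=0.05)

for step in range(100):
    image = generator(z)
    clip_score = score_head(image).mean()
    loss = -clip_score                          # maximize the (stand-in) CLIP score
    opt.zero_grad()
    loss.backward()
    opt.step()
```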
1 code implementation • NeurIPS 2021 • Shujian Zhang, Xinjie Fan, Huangjie Zheng, Korawat Tanwisuth, Mingyuan Zhou
The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains.
1 code implementation • NeurIPS 2021 • Korawat Tanwisuth, Xinjie Fan, Huangjie Zheng, Shujian Zhang, Hao Zhang, Bo Chen, Mingyuan Zhou
Existing methods for unsupervised domain adaptation often rely on minimizing some statistical distance between the source and target samples in the latent space.
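One widely used such statistical distance is the kernel maximum mean discrepancy (MMD); the sketch below computes an RBF-kernel MMD between source and target features as a generic example, not necessarily the distance this paper minimizes or proposes an alternative to.

```python
# A minimal RBF-kernel MMD between source and target latent features,
# one common choice of statistical distance in this line of work.
import numpy as np

def rbf_mmd(source: np.ndarray, target: np.ndarray, gamma: float = 1.0) -> float:
    def kernel_mean(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq).mean()
    return (kernel_mean(source, source) + kernel_mean(target, target)
            - 2.0 * kernel_mean(source, target))

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 16))   # latent features, source domain
tgt = rng.normal(0.5, 1.0, size=(100, 16))   # shifted target-domain features
print(rbf_mmd(src, tgt))                      # larger value = larger domain gap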
no code implementations • 29 Sep 2021 • Shujian Zhang, Zhibin Duan, Huangjie Zheng, Pengcheng He, Bo Chen, Weizhu Chen, Mingyuan Zhou
Crossformer with state sharing not only provides the desired cross-layer guidance and regularization but also reduces the memory requirement.
1 code implementation • EMNLP 2021 • Shujian Zhang, Chengyue Gong, Eunsol Choi
Introducing such multi-label examples at the cost of annotating fewer examples brings clear gains on natural language inference and entity typing, even when we simply first train with single-label data and then fine-tune with multi-label examples (a soft-label sketch follows).
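One natural way to fine-tune on multi-label examples is to treat the annotations as a distribution over labels and minimize a KL divergence against it; the toy sketch below assumes that framing and is not confirmed as the paper's exact objective.

```python
# Toy sketch of fine-tuning on multi-label examples: the target is a
# distribution over labels (e.g., from several annotators), not a one-hot label.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(32, 3)                      # stand-in classifier
x = torch.randn(4, 32)                              # a batch of examples
# e.g., 5 annotators: 3 chose class 0 and 2 chose class 1 for the first example
soft_targets = torch.tensor([[0.6, 0.4, 0.0],
                             [0.2, 0.8, 0.0],
                             [0.0, 0.4, 0.6],
                             [1.0, 0.0, 0.0]])

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(10):
    log_probs = F.log_softmax(model(x), dim=-1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```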
no code implementations • 9 Jun 2021 • Shujian Zhang, Xinjie Fan, Bo Chen, Mingyuan Zhou
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
1 code implementation • Findings (ACL) 2021 • Shujian Zhang, Chengyue Gong, Eunsol Choi
We study calibration in question answering, estimating whether the model correctly predicts the answer to each question.
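For context, a standard way to quantify calibration is the expected calibration error (ECE), which compares a model's confidence to its empirical accuracy in confidence bins; the snippet below computes it on synthetic confidences, not on results from this paper.

```python
# Expected calibration error (ECE), a standard calibration metric.
# Inputs here are synthetic, for illustration only.
import numpy as np

def expected_calibration_error(conf, correct, bins=10):
    ece = 0.0
    edges = np.linspace(0.0, 1.0, bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Gap between average confidence and empirical accuracy in this bin.
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)              # model confidences
correct = (rng.uniform(size=1000) < conf - 0.1)      # a slightly overconfident model
print(expected_calibration_error(conf, correct.astype(float)))
```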
1 code implementation • ICLR 2021 • Xinjie Fan, Shujian Zhang, Korawat Tanwisuth, Xiaoning Qian, Mingyuan Zhou
However, the quality of uncertainty estimation is highly dependent on the dropout probabilities.
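To see why the dropout probability matters, here is a generic Monte Carlo dropout sketch, with dropout kept active at test time, where the predictive uncertainty read off the stochastic forward passes depends directly on the hand-set rate p; learning that rate rather than fixing it is, as I understand the abstract, the problem this paper addresses.

```python
# Generic Monte Carlo dropout: keep dropout stochastic at inference and
# read predictive uncertainty off the spread of repeated forward passes.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.2),            # uncertainty estimates hinge on this p
    torch.nn.Linear(64, 1),
)
model.train()                           # keep dropout active at test time

x = torch.randn(8, 16)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(50)])   # 50 stochastic passes
mean, std = samples.mean(0), samples.std(0)                # prediction + uncertainty
```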
no code implementations • 13 Feb 2021 • Shujian Zhang, Chengyue Gong, Eunsol Choi
We depart from the standard practice of collecting a single reference per training example, and find that collecting multiple references can achieve better accuracy under a fixed annotation budget.
1 code implementation • NeurIPS 2020 • Xinjie Fan, Shujian Zhang, Bo Chen, Mingyuan Zhou
Attention modules, as simple and effective tools, have not only enabled deep neural networks to achieve state-of-the-art results in many domains, but also enhanced their interpretability.