1 code implementation • 27 Oct 2022 • Yu Cao, Dianqi Li, Meng Fang, Tianyi Zhou, Jun Gao, Yibing Zhan, Dacheng Tao
We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models that produces fluent and grammatical adversarial contexts while maintaining gold answers.
1 code implementation • Findings (NAACL) 2022 • Yibin Lei, Yu Cao, Dianqi Li, Tianyi Zhou, Meng Fang, Mykola Pechenizkiy
Generating high-quality textual adversarial examples is critical for investigating the pitfalls of natural language processing (NLP) models and further promoting their robustness.
no code implementations • 1 Jan 2021 • Liqun Chen, Yizhe Zhang, Dianqi Li, Chenyang Tao, Dong Wang, Lawrence Carin
There has been growing interest in representation learning for text data, based on theoretical arguments and empirical evidence.
1 code implementation • NAACL 2021 • Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan
Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.
no code implementations • ACL 2020 • Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith
Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.
no code implementations • 13 May 2020 • Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith
Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, Jingjing Liu
To realize high-quality style transfer with natural context preservation, we propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders, one for the input sentence and one for its surrounding context.
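A minimal sketch of the two-encoder idea, under assumed hyperparameters and a simple GRU backbone rather than the paper's exact architecture: one encoder reads the sentence to be rewritten, another reads its surrounding context, and the decoder conditions on both.

```python
import torch
import torch.nn as nn

class TwoEncoderStyleTransfer(nn.Module):
    """Sketch only: encode the sentence and its context separately,
    then condition the decoder on both encodings."""

    def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.sent_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.ctx_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, sent_ids, ctx_ids, tgt_ids):
        # Encode the sentence and its context with separate encoders.
        _, h_sent = self.sent_enc(self.embed(sent_ids))      # (1, B, H)
        _, h_ctx = self.ctx_enc(self.embed(ctx_ids))         # (1, B, H)
        cond = torch.cat([h_sent, h_ctx], dim=-1)            # (1, B, 2H)
        # Feed the fused sentence+context encoding to the decoder at every step.
        tgt_emb = self.embed(tgt_ids)                         # (B, T, E)
        cond_seq = cond.transpose(0, 1).expand(-1, tgt_emb.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([tgt_emb, cond_seq], dim=-1))
        return self.out(dec_out)                              # (B, T, V)
```

A style embedding could be concatenated into `cond` in the same way; the fusion shown here is illustrative only.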
no code implementations • 2 Mar 2020 • Yitong Li, Dianqi Li, Sushant Prakash, Peng Wang
To improve the interpretability of dual-encoder models, we design a novel regularization loss that minimizes the mutual information between unimportant words and the desired labels, complementing the original attention mechanism, so that important words are emphasized while unimportant words are de-emphasized.
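One hedged way to realize such a regularizer, assuming attention weights and an auxiliary label classifier are available (a proxy; the paper's exact mutual-information estimator may differ): pool the least-attended tokens, predict the label from them, and push that prediction toward maximum entropy so the unimportant words carry little label information.

```python
import torch
import torch.nn.functional as F

def unimportant_word_regularizer(token_reprs, attn_weights, aux_classifier, k=5):
    """Proxy sketch for minimizing label information in unimportant words.

    token_reprs:    (B, T, H) contextual token representations
    attn_weights:   (B, T) attention over tokens
    aux_classifier: callable mapping (B, H) -> (B, num_labels) logits (assumed)
    """
    k = min(k, attn_weights.size(1))
    # Indices of the k least-attended tokens per example.
    _, low_idx = torch.topk(attn_weights, k, dim=1, largest=False)      # (B, k)
    low_reprs = torch.gather(
        token_reprs, 1,
        low_idx.unsqueeze(-1).expand(-1, -1, token_reprs.size(-1))
    )                                                                    # (B, k, H)
    pooled = low_reprs.mean(dim=1)                                       # (B, H)
    log_probs = F.log_softmax(aux_classifier(pooled), dim=-1)            # (B, C)
    # Minimize negative entropy: the label prediction from unimportant
    # tokens is pushed toward uniform, i.e. toward carrying no information.
    probs = log_probs.exp()
    return (probs * log_probs).sum(dim=-1).mean()
```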
1 code implementation • IJCNLP 2019 • Dianqi Li, Yizhe Zhang, Zhe Gan, Yu Cheng, Chris Brockett, Ming-Ting Sun, Bill Dolan
These data may exhibit domain shift, which diminishes the benefit of using them for training.
1 code implementation • 3 Apr 2018 • Dianqi Li, Qiuyuan Huang, Xiaodong He, Lei Zhang, Ming-Ting Sun
By contrasting generated captions with human-written captions and image-mismatched captions, the caption generator effectively exploits the inherent characteristics of human language and generates more discriminative captions.
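A hedged sketch of the contrastive idea, assuming a generic scoring function `score_fn(image, caption)` rather than the paper's exact adversarial setup: matched human-written captions should outscore both model-generated captions and human captions paired with the wrong image.

```python
import torch.nn.functional as F

def contrastive_caption_loss(score_fn, images, human_caps, mismatched_caps, generated_caps, margin=1.0):
    """Sketch only: margin ranking against two kinds of negatives.

    score_fn(images, captions) -> (B,) compatibility scores (assumed interface).
    """
    s_human = score_fn(images, human_caps)          # human-written, matched image
    s_mismatch = score_fn(images, mismatched_caps)  # human-written, wrong image
    s_gen = score_fn(images, generated_caps)        # model-generated, same image
    # Human-matched captions should outscore both negatives by a margin.
    loss = F.relu(margin - s_human + s_mismatch).mean() \
         + F.relu(margin - s_human + s_gen).mean()
    return loss
```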
1 code implementation • NeurIPS 2017 • Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, Ming-Ting Sun
Rather than training the discriminator to assign an absolute binary label to each individual sample, the proposed RankGAN analyzes and ranks a collection of human-written and machine-written sentences relative to a reference group (sketched below).
Ranked #1 on Text Generation on Chinese Poems
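A minimal sketch of the ranking signal, under assumed sentence embeddings and a cosine-similarity relevance measure (illustrative, not the paper's exact formulation): each candidate is scored by its relevance to a human-written reference group, and a softmax over the candidate set yields ranking probabilities instead of binary real/fake labels.

```python
import torch.nn.functional as F

def rank_probabilities(candidate_embs, reference_embs, gamma=1.0):
    """Sketch only: relevance-to-reference-group scores turned into ranking probabilities.

    candidate_embs: (N, H) embeddings of sentences to be ranked
    reference_embs: (R, H) embeddings of the human-written reference group
    gamma:          temperature for the softmax (assumed, not the paper's value)
    """
    # Relevance of each candidate to the reference group: mean cosine similarity.
    sims = F.cosine_similarity(
        candidate_embs.unsqueeze(1), reference_embs.unsqueeze(0), dim=-1
    )                                   # (N, R)
    relevance = sims.mean(dim=1)        # (N,)
    # Softmax over the whole candidate set: better sentences get higher rank mass.
    return F.softmax(gamma * relevance, dim=0)
```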