1 code implementation • EMNLP 2020 • Kaize Ding, Jianling Wang, Jundong Li, Dingcheng Li, Huan Liu
Text classification is a critical research topic with broad applications in natural language processing.
no code implementations • NAACL 2019 • Dingcheng Li, Siamak Zamani, Jingyuan Zhang, Ping Li
Leveraging domain knowledge is an effective strategy for improving the quality of the low-dimensional document representations inferred by topic models.
no code implementations • ACL 2019 • Hongliang Fei, Xu Li, Dingcheng Li, Ping Li
Recent neural network models have significantly advanced the task of coreference resolution.
Ranked #13 on Coreference Resolution on CoNLL 2012
no code implementations • 11 Nov 2019 • Gang Chen, Dingcheng Li, Ran Xu
Given the selected samples, we then propose adaptive multi-step TD, which generalizes TD($\lambda$) but adaptively switches on/off the backups from future returns of different steps.
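A minimal sketch of how such an adaptive multi-step target could be formed: n-step returns are mixed with TD($\lambda$)-style geometric weights, and a gate switches longer backups on or off. The gating criterion (a one-step TD-error threshold) and all names here are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def adaptive_multistep_td_target(rewards, values, gamma=0.99, lam=0.9,
                                 gate_threshold=1.0):
    """Illustrative adaptive multi-step TD target for one trajectory segment.

    rewards: r_t, ..., r_{t+n-1}; values: V(s_t), ..., V(s_{t+n}).
    As in TD(lambda), n-step returns are mixed with geometric weights, but a
    gate can switch off backups from longer horizons (here, when the one-step
    TD error along the way is too large -- an assumed criterion).
    """
    n = len(rewards)
    targets, weights = [], []
    g = 0.0
    for k in range(n):
        g += (gamma ** k) * rewards[k]
        n_step_return = g + (gamma ** (k + 1)) * values[k + 1]
        td_error = abs(rewards[k] + gamma * values[k + 1] - values[k])
        gate = 1.0 if td_error < gate_threshold else 0.0  # backup on/off
        targets.append(n_step_return)
        weights.append(gate * (lam ** k))
    weights = np.array(weights)
    if weights.sum() == 0:
        return values[0]  # fall back to the bootstrap value
    return float(np.dot(weights / weights.sum(), targets))
```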
no code implementations • 12 Mar 2020 • Haiyan Yin, Dingcheng Li, Xu Li, Ping Li
To this end, we introduce a cooperative training paradigm in which a language model is trained cooperatively with the generator and is used to efficiently shape the generator's data distribution, guarding against mode collapse.
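As a hedged illustration, one way a cooperatively trained language model could shape the generator's distribution is by adding a KL term that pulls the generator's next-token distribution toward the language model's. The function name, weighting, and KL form below are assumptions, not the paper's exact objective.

```python
import torch.nn.functional as F

def cooperative_generator_loss(gen_logits, lm_logits, adv_loss, alpha=0.1):
    """Illustrative generator loss combining adversarial training with a
    cooperatively trained language model (weighting and form are assumed).

    gen_logits, lm_logits: [batch, seq_len, vocab] next-token logits from the
    generator and the language model on the same generated sequences. The KL
    term pulls the generator's token distribution toward the language model's,
    one way to regularize against mode collapse.
    """
    gen_logp = F.log_softmax(gen_logits, dim=-1)
    lm_p = F.softmax(lm_logits, dim=-1)
    kl = F.kl_div(gen_logp, lm_p, reduction="batchmean")
    return adv_loss + alpha * kl
```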
1 code implementation • 20 Apr 2020 • Shaogang Ren, Dingcheng Li, Zhixin Zhou, Ping Li
The rise of deep models and generative models provides effective approaches for modeling high-dimensional distributions.
no code implementations • EMNLP 2021 • Kaize Ding, Dingcheng Li, Alexander Hanbo Li, Xing Fan, Chenlei Guo, Yang Liu, Huan Liu
In this work, we go beyond the existing paradigms and propose a novel approach to generate high-quality paraphrases with weak supervision data.
no code implementations • Findings (EMNLP) 2021 • Dingcheng Li, Hongliang Fei, Shaogang Ren, Ping Li
Recently, disentanglement based on generative adversarial networks or variational autoencoders has significantly advanced performance across diverse applications in the CV and NLP domains.
no code implementations • EMNLP 2021 • Zhuoyi Wang, Saurabh Gupta, Jie Hao, Xing Fan, Dingcheng Li, Alexander Hanbo Li, Chenlei Guo
Rephrase detection is used to identify such rephrases and has long been treated as a task with pairwise input, which does not fully utilize the contextual information (e.g., users' implicit feedback).
no code implementations • 6 Jul 2022 • Shaogang Ren, Belhal Karimi, Dingcheng Li, Ping Li
VFGs learn representations of high-dimensional data via a message-passing scheme, integrating flow-based functions through variational inference.
no code implementations • Findings (NAACL) 2022 • Yue Zhang, Hongliang Fei, Dingcheng Li, Ping Li
Recently, prompt learning has received significant attention, where downstream tasks are reformulated as a mask-filling task with the help of a textual prompt.
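A minimal sketch of this reformulation for sentiment classification with Hugging Face Transformers; the prompt template and verbalizer words are illustrative choices, not those of the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Classify by filling a [MASK] slot in a textual prompt and comparing
# the scores of verbalizer tokens (template and words are assumptions).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "The movie was surprisingly touching."
prompt = f"{text} Overall, it was {tokenizer.mask_token}."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
verbalizer = {"positive": "great", "negative": "terrible"}
scores = {label: logits[0, mask_pos, tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))
```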
no code implementations • NAACL 2022 • Haiyan Yin, Dingcheng Li, Ping Li
In this paper, we propose a new weakly supervised paraphrase generation approach that extends a recent approach leveraging reinforcement learning for effective model training with data selection.
no code implementations • NAACL 2022 • Dingcheng Li, Zheng Chen, Eunah Cho, Jie Hao, Xiaohu Liu, Fan Xing, Chenlei Guo, Yang Liu
Seq2seq language generation models that are trained offline on multiple domains in a sequential fashion often suffer from catastrophic forgetting.
no code implementations • 19 Oct 2022 • Yue Zhang, Hongliang Fei, Dingcheng Li, Tan Yu, Ping Li
In particular, we focus on few-shot image recognition tasks on pretrained vision-language models (PVLMs) and develop a method of prompting through prototype (PTP), where we define $K$ image prototypes and $K$ prompt prototypes.
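A hedged sketch of the prototype idea: the query image's similarity to the $K$ image prototypes softly weights the $K$ learnable prompt prototypes into a query-conditioned prompt. Class names, dimensions, and the similarity/weighting scheme below are assumptions based on the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypePrompting(nn.Module):
    """Illustrative sketch of prompting through prototype (details assumed).

    K image prototypes live in the vision encoder's feature space; a query
    image is softly assigned to them, and the same weights combine K prompt
    prototypes (continuous prompt embeddings) into a per-query prompt.
    """
    def __init__(self, k=4, feat_dim=512, prompt_len=16, embed_dim=512):
        super().__init__()
        self.image_prototypes = nn.Parameter(torch.randn(k, feat_dim))
        self.prompt_prototypes = nn.Parameter(torch.randn(k, prompt_len, embed_dim))

    def forward(self, image_feat):                       # [batch, feat_dim]
        sims = F.normalize(image_feat, dim=-1) @ F.normalize(
            self.image_prototypes, dim=-1).t()           # [batch, K]
        weights = sims.softmax(dim=-1)                    # soft prototype assignment
        # Weighted combination of prompt prototypes -> query-conditioned prompt
        return torch.einsum("bk,kld->bld", weights, self.prompt_prototypes)

# usage: prompt = PrototypePrompting()(image_features)  # then fed to the text encoder
```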
no code implementations • 23 Jan 2023 • Jianwen Xie, Yaxuan Zhu, Yifei Xu, Dingcheng Li, Ping Li
We study a normalizing flow in the latent space of a top-down generator model, in which the normalizing flow model plays the role of the informative prior model of the generator.
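As a rough illustration of using a flow as the latent prior, the toy module below scores and samples latent codes with a single affine flow layer standing in for the actual normalizing flow; the architecture and names are assumptions.

```python
import math
import torch
import torch.nn as nn

class AffineFlowPrior(nn.Module):
    """Toy flow prior over the generator's latent code z. One affine layer
    stands in for the full normalizing flow (architecture assumed)."""
    def __init__(self, dim):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def log_prob(self, z):
        # Invert z = eps * exp(log_scale) + shift, then change of variables:
        # log p(z) = log N(eps; 0, I) - sum(log_scale)
        eps = (z - self.shift) * torch.exp(-self.log_scale)
        log_base = -0.5 * (eps ** 2).sum(-1) - 0.5 * z.size(-1) * math.log(2 * math.pi)
        return log_base - self.log_scale.sum()

    def sample(self, n):
        eps = torch.randn(n, self.shift.numel())
        return eps * torch.exp(self.log_scale) + self.shift  # z fed to the generator
```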
no code implementations • 21 Sep 2023 • Shaogang Ren, Dingcheng Li, Ping Li
To improve word representation learning, we propose a probabilistic prior which can be seamlessly integrated with word embedding models.
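One simple reading of such integration is a MAP-style regularizer: the prior's log-density is added to the embedding model's loss. The Gaussian form below is only an illustration of this pattern; the paper's actual prior may differ.

```python
import torch.nn.functional as F

def skipgram_map_loss(center_vec, context_vec, neg_vecs, prior_sigma=1.0):
    """Skip-gram negative-sampling loss plus a Gaussian prior on embeddings.

    The Gaussian prior is an assumed stand-in: its log-density simply adds a
    MAP regularization term to the usual embedding objective.
    """
    pos = F.logsigmoid(center_vec @ context_vec)          # positive pair score
    neg = F.logsigmoid(-neg_vecs @ center_vec).sum()      # negative samples
    nll = -(pos + neg)
    log_prior = -0.5 * (center_vec ** 2).sum() / prior_sigma ** 2
    return nll - log_prior
```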