Search Results for author: Dingcheng Li

Found 17 papers, 2 papers with code

Learning to Selectively Learn for Weakly Supervised Paraphrase Generation with Model-based Reinforcement Learning

no code implementations NAACL 2022 Haiyan Yin, Dingcheng Li, Ping Li

In this paper, we propose a new weakly supervised paraphrase generation approach that builds on recent work leveraging reinforcement learning for effective model training with data selection.

Model-based Reinforcement Learning Paraphrase Generation

Contextual Rephrase Detection for Reducing Friction in Dialogue Systems

no code implementations EMNLP 2021 Zhuoyi Wang, Saurabh Gupta, Jie Hao, Xing Fan, Dingcheng Li, Alexander Hanbo Li, Chenlei Guo

Rephrase detection identifies user rephrases and has long been treated as a task with pairwise input, which does not fully utilize contextual information (e.g., users' implicit feedback).

Friction

A Deep Decomposable Model for Disentangling Syntax and Semantics in Sentence Representation

no code implementations Findings (EMNLP) 2021 Dingcheng Li, Hongliang Fei, Shaogang Ren, Ping Li

Recently, disentanglement based on a generative adversarial network or a variational autoencoder has significantly advanced the performance of diverse applications in CV and NLP domains.

Disentanglement Generative Adversarial Network +3

PromptGen: Automatically Generate Prompts using Generative Models

no code implementations Findings (NAACL) 2022 Yue Zhang, Hongliang Fei, Dingcheng Li, Ping Li

Recently, prompt learning has received significant attention, where the downstream tasks are reformulated to the mask-filling task with the help of a textual prompt.

Knowledge Probing Sentence
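The mask-filling reformulation that this abstract mentions can be illustrated with a toy sketch. This is not the PromptGen method itself: the template, the verbalizer words, and the stand-in mask scores below are all hypothetical, and a real system would query a masked language model at the `[MASK]` position.

```python
# Toy illustration of prompt learning as mask-filling: a sentiment input is
# wrapped in a cloze template, and candidate fill-in words are mapped back to
# task labels by a verbalizer. All names here are illustrative assumptions.

def build_prompt(sentence: str, template: str = "{sentence} It was [MASK].") -> str:
    """Wrap an input sentence in a cloze-style prompt template."""
    return template.format(sentence=sentence)

# Verbalizer: maps candidate fill-in words back to task labels.
VERBALIZER = {"great": "positive", "terrible": "negative"}

def predict(sentence: str, mask_scores: dict) -> str:
    """Pick the label whose verbalizer word scores highest at [MASK].

    `mask_scores` stands in for a masked language model's probabilities
    over the mask position; a real pipeline would query an MLM here.
    """
    best_word = max(VERBALIZER, key=lambda w: mask_scores.get(w, 0.0))
    return VERBALIZER[best_word]

prompt = build_prompt("The movie was a delight.")
label = predict("The movie was a delight.", {"great": 0.92, "terrible": 0.03})
```

Prompt-generation methods in this vein search or generate the template automatically rather than hand-writing it as above.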

Word Embedding with Neural Probabilistic Prior

no code implementations 21 Sep 2023 Shaogang Ren, Dingcheng Li, Ping Li

To improve word representation learning, we propose a probabilistic prior which can be seamlessly integrated with word embedding models.

Representation Learning

A Tale of Two Latent Flows: Learning Latent Space Normalizing Flow with Short-run Langevin Flow for Approximate Inference

no code implementations 23 Jan 2023 Jianwen Xie, Yaxuan Zhu, Yifei Xu, Dingcheng Li, Ping Li

We study a normalizing flow in the latent space of a top-down generator model, in which the normalizing flow model plays the role of the informative prior model of the generator.

Anomaly Detection Image Inpainting +1

Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models

no code implementations 19 Oct 2022 Yue Zhang, Hongliang Fei, Dingcheng Li, Tan Yu, Ping Li

In particular, we focus on few-shot image recognition tasks on pretrained vision-language models (PVLMs) and develop a method of prompting through prototype (PTP), where we define $K$ image prototypes and $K$ prompt prototypes.

Few-Shot Learning
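The idea of pairing $K$ image prototypes with $K$ prompt prototypes can be sketched as a simple nearest-prototype routing step. This is a hypothetical illustration, not the paper's implementation: the prototypes, feature dimension, and prompt strings are stand-ins.

```python
import numpy as np

# Hedged sketch of prototype-based prompt routing in the spirit of PTP:
# each of K image prototypes is paired with a prompt prototype; a query
# image feature is matched to its nearest image prototype (cosine
# similarity), and that prototype's prompt is used. All values below are
# illustrative assumptions.

K, d = 3, 8
image_prototypes = np.eye(K, d)                        # K toy image prototypes
prompt_prototypes = [f"prompt_{k}" for k in range(K)]  # paired prompt prototypes

def route_prompt(query: np.ndarray) -> str:
    """Return the prompt paired with the nearest image prototype."""
    q = query / np.linalg.norm(query)
    protos = image_prototypes / np.linalg.norm(image_prototypes, axis=1, keepdims=True)
    return prompt_prototypes[int(np.argmax(protos @ q))]

# A query feature close to image prototype 1 is routed to its prompt.
query = image_prototypes[1] + 0.05 * np.ones(d)
```

In the few-shot setting, the prototypes themselves would be learned from the support images rather than fixed as here.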

Variational Flow Graphical Model

no code implementations 6 Jul 2022 Shaogang Ren, Belhal Karimi, Dingcheng Li, Ping Li

VFGs learn representations of high-dimensional data via a message-passing scheme, integrating flow-based functions through variational inference.

Representation Learning Variational Inference

Learning to Selectively Learn for Weakly-supervised Paraphrase Generation

no code implementations EMNLP 2021 Kaize Ding, Dingcheng Li, Alexander Hanbo Li, Xing Fan, Chenlei Guo, Yang Liu, Huan Liu

In this work, we go beyond the existing paradigms and propose a novel approach to generate high-quality paraphrases with weak supervision data.

Language Modelling Meta-Learning +2

Estimate the Implicit Likelihoods of GANs with Application to Anomaly Detection

1 code implementation 20 Apr 2020 Shaogang Ren, Dingcheng Li, Zhixin Zhou, Ping Li

The rise of deep models and generative models provides approaches to modeling high-dimensional distributions.

Anomaly Detection

Meta-CoTGAN: A Meta Cooperative Training Paradigm for Improving Adversarial Text Generation

no code implementations 12 Mar 2020 Haiyan Yin, Dingcheng Li, Xu Li, Ping Li

To this end, we introduce a cooperative training paradigm in which a language model is trained cooperatively with the generator; the language model is then used to efficiently shape the generator's data distribution and guard against mode collapse.

Adversarial Text Language Modelling +2

Context-aware Active Multi-Step Reinforcement Learning

no code implementations 11 Nov 2019 Gang Chen, Dingcheng Li, Ran Xu

Given the selected samples, we then propose an adaptive multi-step TD method, which generalizes TD($\lambda$) but adaptively switches on/off the backups from future returns of different steps.

Active Learning Decision Making +2
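The adaptive multi-step TD idea above can be sketched as an n-step TD target with a per-horizon on/off switch. This is an illustrative reading, not the paper's exact method: the rewards, value estimates, discount, and the `use_step` mask are all hypothetical inputs, and TD($\lambda$) would instead weight every horizon by $\lambda^{n-1}$.

```python
# Hedged sketch of an adaptive multi-step TD target. `rewards[i]` is the
# reward at step i, `values[t]` stands in for V(s_t), and `use_step[n]`
# plays the role of the adaptive switch for the n-step backup.

def n_step_td_target(rewards, values, gamma, n):
    """Standard n-step TD target from time 0: n discounted rewards plus
    the discounted bootstrap value at step n."""
    target = sum(gamma**i * r for i, r in enumerate(rewards[:n]))
    return target + gamma**n * values[n]

def adaptive_td_target(rewards, values, gamma, use_step):
    """Average the n-step targets over horizons whose backups are on."""
    targets = [n_step_td_target(rewards, values, gamma, n)
               for n in range(1, len(rewards) + 1) if use_step[n]]
    return sum(targets) / len(targets)

rewards = [1.0, 0.0, 1.0]
values = [0.0, 0.5, 0.2, 0.4]   # values[n] bootstraps the n-step target
target = adaptive_td_target(rewards, values, gamma=0.9,
                            use_step={1: True, 2: False, 3: True})
```

With the 2-step backup switched off, only the 1-step and 3-step targets contribute; the paper's contribution lies in learning such switching decisions from context rather than fixing them by hand.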

Integration of Knowledge Graph Embedding Into Topic Modeling with Hierarchical Dirichlet Process

no code implementations NAACL 2019 Dingcheng Li, Siamak Zamani, Jingyuan Zhang, Ping Li

Leveraging domain knowledge is an effective strategy for enhancing the quality of inferred low-dimensional representations of documents by topic models.

Document Classification General Classification +3
