Search Results for author: Jiangnan Li

Found 14 papers, 10 papers with code

TAKE: Topic-shift Aware Knowledge sElection for Dialogue Generation

1 code implementation • COLING 2022 • Chenxu Yang, Zheng Lin, Jiangnan Li, Fandong Meng, Weiping Wang, Lanrui Wang, Jie Zhou

The knowledge selector generally constructs a query based on the dialogue context and selects the most appropriate knowledge to help response generation.

Dialogue Generation • Knowledge Distillation +1
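The snippet above describes the standard query-based knowledge selection setup this paper builds on. Below is a minimal, illustrative sketch of that setup (not the TAKE model itself): a query is built from the dialogue context and each candidate knowledge sentence is scored against it. The bag-of-words encoder and the example candidates are placeholder assumptions.

```python
# Minimal sketch of query-based knowledge selection (illustrative only, not TAKE):
# encode the dialogue context as a query, score each candidate knowledge sentence,
# and keep the best match to condition response generation on.
from collections import Counter
import math

def encode(text):
    # Placeholder encoder: bag-of-words counts stand in for a learned encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_knowledge(dialogue_context, candidates):
    query = encode(" ".join(dialogue_context))  # query built from the dialogue context
    return max(candidates, key=lambda k: cosine(query, encode(k)))

context = ["I just watched a documentary about Mars.",
           "Do you think people will ever live there?"]
knowledge = ["Mars has a thin atmosphere made mostly of carbon dioxide.",
             "The Great Wall of China is visible from low orbit."]
print(select_knowledge(context, knowledge))
```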

Target Really Matters: Target-aware Contrastive Learning and Consistency Regularization for Few-shot Stance Detection

1 code implementation • COLING 2022 • Rui Liu, Zheng Lin, Huishan Ji, Jiangnan Li, Peng Fu, Weiping Wang

Despite the significant progress on this task, it is extremely time-consuming and budget-unfriendly to collect sufficient high-quality labeled data for every new target under fully-supervised learning, whereas unlabeled data can be collected far more easily.

Contrastive Learning • Stance Detection

Question-Interlocutor Scope Realized Graph Modeling over Key Utterances for Dialogue Reading Comprehension

no code implementations • 26 Oct 2022 • Jiangnan Li, Mo Yu, Fandong Meng, Zheng Lin, Peng Fu, Weiping Wang, Jie Zhou

Although these tasks are effective, there are still pressing problems: (1) randomly masking speakers regardless of the question cannot map the speaker mentioned in the question to the corresponding speaker in the dialogue, and it ignores the speaker-centric nature of utterances.

Reading Comprehension
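The snippet above argues that random speaker masking loses the link between the speaker named in the question and that speaker's utterances. As a rough illustration of the alternative direction (not the paper's graph model), the sketch below tags each speaker with one consistent placeholder so the question and the dialogue refer to the same token; the placeholder scheme is an assumption for demonstration.

```python
# Illustrative sketch (not the paper's model): align the speaker mentioned in a
# question with the same speaker in the dialogue by assigning one consistent
# placeholder per speaker instead of masking speakers at random.
def align_speakers(dialogue, question):
    speaker_ids = {}

    def tag(name):
        # The same speaker always receives the same placeholder token.
        return speaker_ids.setdefault(name, f"[SPK{len(speaker_ids) + 1}]")

    tagged_dialogue = [(tag(spk), utt) for spk, utt in dialogue]
    tagged_question = question
    for name, token in speaker_ids.items():
        tagged_question = tagged_question.replace(name, token)
    return tagged_dialogue, tagged_question

dialogue = [("Monica", "I booked the flights."), ("Ross", "Great, thanks!")]
question = "What did Monica do?"
print(align_speakers(dialogue, question))
```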

Empathetic Dialogue Generation via Sensitive Emotion Recognition and Sensible Knowledge Selection

1 code implementation • 21 Oct 2022 • Lanrui Wang, Jiangnan Li, Zheng Lin, Fandong Meng, Chenxu Yang, Weiping Wang, Jie Zhou

We use a fine-grained encoding strategy that is more sensitive to the emotion dynamics (emotion flow) in the conversation to predict the emotion-intent characteristic of the response.

Dialogue Generation • Emotion Recognition +2
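To make the "emotion flow" idea from the snippet concrete, here is a hedged sketch: each utterance in the context receives its own fine-grained emotion label, and the resulting sequence (rather than a single dialogue-level label) drives the prediction. The keyword lexicon and the last-emotion heuristic are placeholders, not the paper's method.

```python
# Rough sketch of fine-grained emotion-flow encoding (illustrative only):
# label each utterance in the context, then predict from the whole sequence.
EMOTION_LEXICON = {
    "lost": "sad", "miss": "sad", "great": "joyful",
    "thanks": "grateful", "angry": "angry", "worried": "anxious",
}

def utterance_emotion(utterance):
    for word, emotion in EMOTION_LEXICON.items():
        if word in utterance.lower():
            return emotion
    return "neutral"

def emotion_flow(context):
    return [utterance_emotion(u) for u in context]

def predict_response_emotion(flow):
    # Placeholder: a learned model would map the flow to an emotion-intent
    # label; here we simply echo the most recent non-neutral emotion.
    return next((e for e in reversed(flow) if e != "neutral"), "neutral")

context = ["I lost my dog last week.", "I really miss him."]
flow = emotion_flow(context)
print(flow, "->", predict_response_emotion(flow))
```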

A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

1 code implementation • 11 Oct 2022 • Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting the performance.

Natural Language Understanding
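The snippet above refers to replacing dense PLMs with sparse subnetworks. As a minimal illustration of that idea, the sketch below applies magnitude pruning to a weight matrix; the paper studies sparse *and robust* subnetworks, which involves considerably more than this.

```python
# Minimal sketch of obtaining a sparse subnetwork via magnitude pruning
# (illustrative only; not the paper's procedure).
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    # Zero out the smallest-magnitude weights, keeping (1 - sparsity) of them.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
dense = rng.normal(size=(4, 4))
sparse, mask = magnitude_prune(dense, sparsity=0.75)
print(f"kept {int(mask.sum())} of {mask.size} weights")
```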

A Hierarchical Transformer with Speaker Modeling for Emotion Recognition in Conversation

1 code implementation • 29 Dec 2020 • Jiangnan Li, Zheng Lin, Peng Fu, Qingyi Si, Weiping Wang

It can be regarded as a personalized and interactive emotion recognition task, which should consider not only the semantic information of the text but also the influence of the speakers.

Emotion Recognition in Conversation
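To illustrate the speaker-aware modeling the snippet motivates, here is a hedged sketch that fuses an utterance's semantic representation with a speaker embedding. The random encoders and additive fusion are placeholder assumptions, not the paper's hierarchical Transformer.

```python
# Illustrative sketch (not the paper's architecture): combine semantic and
# speaker features so the model can account for who is speaking.
import numpy as np

DIM = 8
rng = np.random.default_rng(0)
speaker_embeddings = {}  # one learned vector per speaker in a real model

def speaker_embedding(speaker):
    return speaker_embeddings.setdefault(speaker, rng.normal(size=DIM))

def encode_utterance(text):
    # Placeholder for a sentence encoder (e.g., a Transformer).
    local = np.random.default_rng(abs(hash(text)) % (2**32))
    return local.normal(size=DIM)

def utterance_representation(speaker, text):
    # Additive fusion of semantic and speaker features; a real model learns this.
    return encode_utterance(text) + speaker_embedding(speaker)

rep = utterance_representation("Rachel", "I can't believe you did that!")
print(rep.shape)
```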

Learning Class-Transductive Intent Representations for Zero-shot Intent Detection

1 code implementation • 3 Dec 2020 • Qingyi Si, Yuanxin Liu, Peng Fu, Zheng Lin, Jiangnan Li, Weiping Wang

A critical problem behind these limitations is that the representations of unseen intents cannot be learned in the training stage.

Intent Detection • Multi-Task Learning +1
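The snippet points to the core zero-shot difficulty: unseen intents have no training examples from which to learn representations. A common workaround, sketched below purely for illustration (not the paper's class-transductive method), derives an unseen intent's representation from its label text so utterances can still be matched against it; the bag-of-words encoder is a placeholder.

```python
# Sketch of zero-shot intent matching via label-text representations
# (illustrative only): unseen intents are represented by their label names.
from collections import Counter
import math

def encode(text):
    return Counter(text.lower().replace("_", " ").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_intent(utterance, intent_labels):
    query = encode(utterance)
    return max(intent_labels, key=lambda label: cosine(query, encode(label)))

print(zero_shot_intent("please book a flight to Paris",
                       ["book_flight", "play_music", "check_weather"]))
```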

Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks

no code implementations • 16 Oct 2020 • Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun

In this work, we study the vulnerabilities of DL-based energy theft detection through adversarial attacks, including single-step attacks and iterative attacks.
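The snippet names the two attack families studied: single-step and iterative attacks. The hedged sketch below shows both against a toy logistic detector; the detector weights, the feature vector, and the perturbation budget are placeholder assumptions, not the paper's AMI models.

```python
# Illustrative single-step (FGSM-style) and iterative attacks on a toy detector.
import numpy as np

w = np.array([0.8, -0.5, 1.2])   # toy detector weights (placeholder)
b = -0.1

def score(x):
    # Higher score -> more likely flagged as theft (toy logistic detector).
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def single_step_attack(x, eps=0.1):
    # One step along the sign of the gradient that lowers the detection score.
    grad = score(x) * (1 - score(x)) * w   # d(score)/dx for the toy model
    return x - eps * np.sign(grad)

def iterative_attack(x, eps=0.1, steps=10):
    step = eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        grad = score(x_adv) * (1 - score(x_adv)) * w
        x_adv = x_adv - step * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the budget
    return x_adv

x = np.array([1.0, 0.2, 0.7])   # toy meter-reading features
print(score(x), score(single_step_attack(x)), score(iterative_attack(x)))
```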

ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems

no code implementations • 12 Mar 2020 • Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun, Kevin Tomsovic, Hairong Qi

We study the potential vulnerabilities of ML applied in CPSs by proposing Constrained Adversarial Machine Learning (ConAML), which generates adversarial examples that satisfy the intrinsic constraints of the physical systems.

BIG-bench Machine Learning
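The snippet above states the defining requirement of ConAML: adversarial examples must still satisfy the physical system's intrinsic constraints. As a minimal, hedged illustration of that requirement (not the ConAML algorithm), the sketch below perturbs toy measurements and then projects them back onto an assumed equality constraint that the total must be preserved.

```python
# Illustrative constraint-preserving perturbation: perturb, then project so a
# simple physical constraint (total of the measurements) still holds.
import numpy as np

def project_to_constraint(x_adv, x):
    # Enforce sum(x_adv) == sum(x) by spreading the violation evenly.
    violation = (x_adv.sum() - x.sum()) / x_adv.size
    return x_adv - violation

rng = np.random.default_rng(0)
x = np.array([10.0, 12.0, 9.5, 11.0])            # toy sensor measurements
x_adv = x + rng.normal(scale=0.5, size=x.shape)  # unconstrained perturbation
x_adv = project_to_constraint(x_adv, x)

print(x.sum(), x_adv.sum())   # totals match after projection
```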

SmartBullets: A Cloud-Assisted Bullet Screen Filter based on Deep Learning

1 code implementation • 15 May 2019 • Haoran Niu, Jiangnan Li, Yu Zhao

Although bullet-screen video websites provide filter functions based on regular expressions, bad bullets can still easily pass the filter by making small modifications.
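The weakness described above is easy to demonstrate: a regular-expression blacklist catches the exact spelling of a bad word but misses a trivially modified one. The word list below is a placeholder for illustration.

```python
# Small example of a regex blacklist filter and the evasion described above.
import re

BLACKLIST = [r"stupid", r"idiot"]   # placeholder word list
pattern = re.compile("|".join(BLACKLIST), re.IGNORECASE)

def regex_filter(bullet):
    return not pattern.search(bullet)   # True means the bullet is allowed

print(regex_filter("you are stupid"))    # False: blocked by the blacklist
print(regex_filter("you are stu.pid"))   # True: a small edit evades the regex
```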
