Search Results for author: Yangyang Kang

Found 24 papers, 11 papers with code

Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling

1 code implementation • 14 Dec 2020 • Yicheng Zou, Lujun Zhao, Yangyang Kang, Jun Lin, Minlong Peng, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, Xiaozhong Liu

In a customer service system, dialogue summarization can boost service efficiency by automatically creating summaries for long spoken dialogues in which customers and agents try to address issues about specific topics.

Goal-Oriented Prompt Attack and Safety Evaluation for LLMs

1 code implementation • 21 Sep 2023 • Chengyuan Liu, Fubang Zhao, Lizhi Qing, Yangyang Kang, Changlong Sun, Kun Kuang, Fei Wu

There are several black-box attack methods, such as Prompt Attack, which can change the behaviour of LLMs and induce them to generate unexpected answers with harmful content.
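
The paper's own attack templates are not reproduced in this snippet, but a black-box safety evaluation of this kind can be framed as measuring how often adversarial prompts bypass a model's refusals. Below is a minimal, hypothetical harness: `query_model` stands in for any LLM API call, and the keyword-based refusal check is a crude illustrative heuristic, not the paper's evaluation criterion.

```python
# Minimal sketch of a black-box prompt-attack evaluation harness.
# `query_model` is a hypothetical stand-in for an LLM API call; the
# refusal keywords are illustrative, not the paper's actual criteria.

REFUSAL_MARKERS = ["i can't", "i cannot", "i'm sorry", "as an ai"]

def is_refusal(response: str) -> bool:
    """Crude check: does the model decline the request?"""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def attack_success_rate(attack_prompts, query_model) -> float:
    """Fraction of adversarial prompts the model answers instead of refusing."""
    successes = sum(0 if is_refusal(query_model(p)) else 1 for p in attack_prompts)
    return successes / max(len(attack_prompts), 1)
```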

H2CGL: Modeling Dynamics of Citation Network for Impact Prediction

1 code implementation • 16 Apr 2023 • Guoxiu He, Zhikai Xue, Zhuoren Jiang, Yangyang Kang, Star Zhao, Wei Lu

Then, a novel graph neural network, the Hierarchical and Heterogeneous Contrastive Graph Learning Model (H2CGL), is proposed to incorporate the heterogeneity and dynamics of the citation network.

Tasks: Contrastive Learning, Graph Learning
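
H2CGL's exact objective is not given in this snippet, but contrastive graph learning generally trains paired views of the same node or graph to agree while pushing apart other samples. A minimal InfoNCE sketch in PyTorch, assuming two precomputed embedding views of shape (batch, dim):

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE loss between two views' embeddings (batch x dim).
    Positive pairs are matching rows; all other rows act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                          # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device) # row i matches column i
    return F.cross_entropy(logits, labels)
```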

A Role-Selected Sharing Network for Joint Machine-Human Chatting Handoff and Service Satisfaction Analysis

1 code implementation • EMNLP 2021 • Jiawei Liu, Kaisong Song, Yangyang Kang, Guoxiu He, Zhuoren Jiang, Changlong Sun, Wei Lu, Xiaozhong Liu

Chatbots are increasingly thriving in different domains; however, because of unexpected discourse complexity and training-data sparseness, users' potential distrust of them remains a serious concern.

Tasks: Chatbot, Multi-Task Learning
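
The role-selection mechanism itself is not shown in this snippet; the sketch below only illustrates the underlying multi-task idea of a shared encoder feeding separate heads for handoff prediction and satisfaction analysis. All dimensions and the GRU encoder are illustrative choices, not the paper's architecture.

```python
import torch.nn as nn

class SharedMultiTaskNet(nn.Module):
    """Toy hard-parameter-sharing network: one shared encoder feeding a
    handoff head and a satisfaction head. Dimensions are illustrative."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.handoff_head = nn.Linear(hidden, 2)        # transfer to human: yes/no
        self.satisfaction_head = nn.Linear(hidden, 3)   # neg / neutral / pos

    def forward(self, token_ids):
        _, h = self.encoder(self.embed(token_ids))      # final hidden state
        h = h.squeeze(0)                                # (batch, hidden)
        return self.handoff_head(h), self.satisfaction_head(h)
```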

Read Beyond the Lines: Understanding the Implied Textual Meaning via a Skim and Intensive Reading Model

no code implementations • 3 Jan 2020 • Guoxiu He, Zhe Gao, Zhuoren Jiang, Yangyang Kang, Changlong Sun, Xiaozhong Liu, Wei Lu

The nonliteral meaning of a text is hard for machine models to understand due to its high context-sensitivity and heavy use of figurative language.

Tasks: Reading Comprehension, Sentence

Camouflaged Chinese Spam Content Detection with Semi-supervised Generative Active Learning

no code implementations • ACL 2020 • Zhuoren Jiang, Zhe Gao, Yu Duan, Yangyang Kang, Changlong Sun, Qiong Zhang, Xiaozhong Liu

We propose a Semi-supervIsed GeNerative Active Learning (SIGNAL) model to address the imbalance, efficiency, and text-camouflage problems of the Chinese text spam detection task.

Tasks: Active Learning, Chinese Spam Detection (+2 more)
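
SIGNAL's generative component is not reproduced here, but its active-learning side rests on a standard query step: rank unlabeled samples by model uncertainty and send the most uncertain ones for labeling. A minimal sketch, assuming a classifier with a scikit-learn-style `predict_proba`:

```python
import numpy as np

def least_confidence_query(model, unlabeled_texts, k=100):
    """Pick the k unlabeled samples the classifier is least confident about.
    This shows only the active-learning query step, not SIGNAL itself;
    `model.predict_proba` follows the scikit-learn convention."""
    probs = model.predict_proba(unlabeled_texts)      # (n, n_classes)
    confidence = probs.max(axis=1)                    # top-class probability
    return np.argsort(confidence)[:k]                 # least confident first
```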

Semi-Supervised Active Learning for Semi-Supervised Models: Exploit Adversarial Examples With Graph-Based Virtual Labels

no code implementations • ICCV 2021 • Jiannan Guo, Haochen Shi, Yangyang Kang, Kun Kuang, Siliang Tang, Zhuoren Jiang, Changlong Sun, Fei Wu, Yueting Zhuang

Although current mainstream methods have begun to combine SSL and AL (SSL-AL) to exploit the diverse expressions of unlabeled samples, these methods' fully supervised task models are still trained only with labeled data.

Tasks: Active Learning
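
The paper's graph-based virtual labels are not implemented in this snippet, but exploiting adversarial examples in semi-supervised learning commonly builds on virtual adversarial training (VAT): find the small input perturbation that most changes the model's prediction and penalize that change. A one-step VAT sketch in PyTorch, with `xi` and `eps` as illustrative hyperparameters:

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0):
    """One-step virtual adversarial training loss: estimate the input
    perturbation that most changes the prediction, then penalize the change."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)                 # clean prediction
    d = torch.randn_like(x)
    d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
    d.requires_grad_(True)
    p_hat = F.log_softmax(model(x + d), dim=1)
    adv_dist = F.kl_div(p_hat, p, reduction="batchmean")
    grad = torch.autograd.grad(adv_dist, d)[0]         # direction of max change
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(x)
    p_adv = F.log_softmax(model(x + r_adv), dim=1)
    return F.kl_div(p_adv, p, reduction="batchmean")
```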

Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of Semi-Supervised Learning and Active Learning

no code implementations • 7 Jun 2022 • Jiannan Guo, Yangyang Kang, Yu Duan, Xiaozhong Liu, Siliang Tang, Wenqiao Zhang, Kun Kuang, Changlong Sun, Fei Wu

Motivated by the industry practice of labeling data, we propose an innovative Inconsistency-based virtual aDvErsarial Active Learning (IDEAL) algorithm to further investigate SSL-AL's potential superiority and achieve mutual enhancement of AL and SSL, i.e., SSL propagates label information to unlabeled samples and provides smoothed embeddings for AL, while AL excludes samples with inconsistent predictions and considerable uncertainty for SSL.

Tasks: Active Learning
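
IDEAL's full algorithm is more involved, but the core selection signal described above, querying or excluding samples with inconsistent predictions, can be sketched generically: score each unlabeled sample by how much its prediction disagrees between a clean and a perturbed view. The perturbation scheme and the symmetric-KL disagreement measure below are illustrative assumptions, not the paper's exact criterion:

```python
import torch
import torch.nn.functional as F

def inconsistency_scores(model, x_clean, x_perturbed):
    """Rank unlabeled samples by prediction disagreement between a clean
    view and a perturbed view; high-disagreement samples are candidates
    for annotation. A generic sketch, not IDEAL itself."""
    with torch.no_grad():
        p = F.softmax(model(x_clean), dim=1)
        q = F.softmax(model(x_perturbed), dim=1)
    # symmetric KL divergence as the disagreement measure
    kl_pq = (p * (p.clamp_min(1e-8).log() - q.clamp_min(1e-8).log())).sum(1)
    kl_qp = (q * (q.clamp_min(1e-8).log() - p.clamp_min(1e-8).log())).sum(1)
    return kl_pq + kl_qp  # sort descending and label the top-k
```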

PPN: Parallel Pointer-based Network for Key Information Extraction with Complex Layouts

no code implementations • 20 Jul 2023 • Kaiwen Wei, Jie Yao, Jingyuan Zhang, Yangyang Kang, Fubang Zhao, Yating Zhang, Changlong Sun, Xin Jin, Xin Zhang

Firstly, the layouts of existing datasets are relatively fixed and limited in the number of semantic entity categories, creating a significant gap between these datasets and complex real-world scenarios.

Tasks: Key Information Extraction

Community-Based Hierarchical Positive-Unlabeled (PU) Model Fusion for Chronic Disease Prediction

1 code implementation • 6 Sep 2023 • Yang Wu, Xurui Li, Xuhong Zhang, Yangyang Kang, Changlong Sun, Xiaozhong Liu

Positive-Unlabeled (PU) learning addresses binary classification problems in which abundant unlabeled data accompany only a small number of positive instances, a setting that can be applied to the chronic disease screening problem.

Tasks: Binary Classification, Data Augmentation (+3 more)
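
As a concrete instance of the PU setting described above, the non-negative PU risk estimator (in the spirit of Kiryo et al.'s nnPU, which this paper may or may not build on) rewrites the negative-class risk using only positive and unlabeled data, given an assumed class prior:

```python
import torch

def nnpu_risk(scores_pos, scores_unl, prior,
              loss=torch.nn.functional.softplus):
    """Non-negative PU risk sketch. `prior` is the assumed positive-class
    proportion pi_p; `scores_*` are the classifier's raw scores.
    loss(-s) penalizes predicting negative; loss(s) penalizes positive."""
    risk_pos = prior * loss(-scores_pos).mean()
    # negative-class risk estimated from unlabeled data, clamped at zero
    risk_neg = loss(scores_unl).mean() - prior * loss(scores_pos).mean()
    return risk_pos + torch.clamp(risk_neg, min=0.0)
```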

Memory-Augmented LLM Personalization with Short- and Long-Term Memory Coordination

no code implementations • 21 Sep 2023 • Kai Zhang, Fubang Zhao, Yangyang Kang, Xiaozhong Liu

However, we contend that a memory module alone is inadequate to capture a user's preferences, and fully training an LLM can be excessively costly.
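
The paper's coordination mechanism is not detailed in this snippet; the toy sketch below merely illustrates the short-/long-term split: a bounded window of recent turns plus a store of facts promoted once they recur. The promotion-by-repetition rule is an illustrative heuristic, not the proposed method.

```python
from collections import deque

class CoordinatedMemory:
    """Toy short-/long-term memory store for LLM personalization."""
    def __init__(self, short_capacity=10, promote_after=2):
        self.short = deque(maxlen=short_capacity)  # recent turns only
        self.long = {}                             # fact -> times seen
        self.promote_after = promote_after

    def observe(self, fact: str):
        self.short.append(fact)
        self.long[fact] = self.long.get(fact, 0) + 1

    def context(self) -> str:
        """Assemble a prompt prefix from stable facts plus recent turns."""
        stable = [f for f, n in self.long.items() if n >= self.promote_after]
        return ("Known preferences: " + "; ".join(stable) +
                "\nRecent turns: " + "; ".join(self.short))
```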

Disentangling the Potential Impacts of Papers into Diffusion, Conformity, and Contribution Values

no code implementations • 15 Nov 2023 • Zhikai Xue, Guoxiu He, Zhuoren Jiang, Sichen Gu, Yangyang Kang, Star Zhao, Wei Lu

In this study, we propose a novel graph neural network to Disentangle the Potential impacts of Papers into Diffusion, Conformity, and Contribution values (called DPPDCC).
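
DPPDCC's architecture and disentanglement objectives go well beyond this, but the readout structure implied by the snippet, one prediction head per disentangled value over a shared paper embedding, can be sketched as follows (dimensions illustrative):

```python
import torch.nn as nn

class DisentangledHeads(nn.Module):
    """Illustrative readout: three separate heads over a shared paper
    embedding, one per disentangled value. The real DPPDCC model and its
    disentanglement losses are more involved."""
    def __init__(self, dim=128):
        super().__init__()
        self.diffusion = nn.Linear(dim, 1)
        self.conformity = nn.Linear(dim, 1)
        self.contribution = nn.Linear(dim, 1)

    def forward(self, h):
        return self.diffusion(h), self.conformity(h), self.contribution(h)
```

In a setup like this, the overall predicted impact would be read off as some combination of the three head outputs, with auxiliary losses keeping the heads disentangled.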

Empowering Dual-Level Graph Self-Supervised Pretraining with Motif Discovery

1 code implementation • 19 Dec 2023 • Pengwei Yan, Kaisong Song, Zhuoren Jiang, Yangyang Kang, Tianqianjin Lin, Changlong Sun, Xiaozhong Liu

While self-supervised graph pretraining techniques have shown promising results in various domains, their application still faces the challenges of limited topology learning, dependence on human knowledge, and inadequate multi-level interaction.

Tasks: Representation Learning, Transfer Learning
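
The paper's motif-discovery module is not reproduced here, but the general idea of motif-based self-supervision can be illustrated with the simplest motif: use each node's triangle count as a free pretext label that a GNN is pretrained to predict. A minimal sketch using networkx:

```python
import networkx as nx

def motif_pretext_labels(graph: nx.Graph) -> dict:
    """Per-node triangle counts as a self-supervised pretext target: a model
    can be pretrained to predict each node's local motif statistics without
    human labels. Triangles stand in for generic motifs here."""
    return nx.triangles(graph)  # {node: number of triangles through it}
```

For example, `motif_pretext_labels(nx.karate_club_graph())` yields a node-to-count dict that can serve as regression targets during pretraining.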

Evolving Knowledge Distillation with Large Language Models and Active Learning

no code implementations • 11 Mar 2024 • Chengyuan Liu, Yangyang Kang, Fubang Zhao, Kun Kuang, Zhuoren Jiang, Changlong Sun, Fei Wu

In this paper, we propose EvoKD: Evolving Knowledge Distillation, which leverages the concept of active learning to interactively enhance the process of data generation using large language models, simultaneously improving the task capabilities of the small domain model (the student model).

Tasks: Active Learning, Knowledge Distillation (+5 more)
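
EvoKD's prompts and feedback format are not given in this snippet, so the loop below is only a structural sketch of the described interaction: an LLM (via the hypothetical `generate_fn`) produces labeled data, the student trains on it, and the student's mistakes shape the next generation request. The `fit`/`predict` student interface follows the scikit-learn convention and is an assumption.

```python
def evolving_distillation(generate_fn, student, rounds=5, batch=32):
    """Structural sketch of an LLM-in-the-loop distillation cycle.
    `generate_fn(feedback) -> [(text, label), ...]` is a hypothetical
    wrapper around an LLM; `student` follows sklearn's fit/predict API."""
    feedback = "Generate diverse labeled examples for the task."
    for _ in range(rounds):
        samples = generate_fn(feedback)[:batch]
        student.fit([t for t, _ in samples], [y for _, y in samples])
        # the student's errors become the feedback for the next round
        wrong = [(t, y) for t, y in samples if student.predict([t])[0] != y]
        if wrong:
            feedback = ("The student failed on: " +
                        "; ".join(t for t, _ in wrong[:5]))
    return student
```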
