Search Results for author: Lingzhi Wang

Found 17 papers, 5 papers with code

Selective Forgetting: Advancing Machine Unlearning Techniques and Evaluation in Language Models

no code implementations • 8 Feb 2024 • Lingzhi Wang, Xingshan Zeng, Jinsong Guo, Kam-Fai Wong, Georg Gottlob

The aim of this study is to investigate Machine Unlearning (MU), a burgeoning field focused on addressing concerns related to neural models inadvertently retaining personal or sensitive data.

Computational Efficiency Language Modelling +1

IndiVec: An Exploration of Leveraging Large Language Models for Media Bias Detection with Fine-Grained Bias Indicators

no code implementations • 1 Feb 2024 • Luyang Lin, Lingzhi Wang, Xiaoyan Zhao, Jing Li, Kam-Fai Wong

IndiVec begins by constructing a fine-grained media bias database, leveraging the robust instruction-following capabilities of large language models and vector database techniques.

Bias Detection Instruction Following

A Survey of the Evolution of Language Model-Based Dialogue Systems

no code implementations • 28 Nov 2023 • Hongru Wang, Lingzhi Wang, Yiming Du, Liang Chen, Jingyan Zhou, YuFei Wang, Kam-Fai Wong

This survey delves into the historical trajectory of dialogue systems, elucidating their intricate relationship with advances in language models by categorizing this evolution into four distinct stages, each marked by a pivotal LM breakthrough: 1) early stage, characterized by statistical LMs, resulting in rule-based or machine-learning-driven dialogue systems; 2) independent development of task-oriented dialogue (TOD) and open-domain dialogue (ODD) based on neural language models (NLMs; e.g., LSTM and GRU), since NLMs lack intrinsic knowledge in their parameters; 3) fusion between different types of dialogue systems with the advent of pre-trained language models (PLMs), starting from the fusion of the four sub-tasks within TOD, and then of TOD with ODD; and 4) the current LLM-based dialogue systems, wherein LLMs can conduct TOD and ODD seamlessly.

Language Modelling

TPE: Towards Better Compositional Reasoning over Conceptual Tools with Multi-persona Collaboration

no code implementations • 28 Sep 2023 • Hongru Wang, Huimin Wang, Lingzhi Wang, Minda Hu, Rui Wang, Boyang Xue, Hongyuan Lu, Fei Mi, Kam-Fai Wong

Large language models (LLMs) have demonstrated exceptional performance in planning the use of various functional tools, such as calculators and retrievers, particularly in question-answering tasks.

Question Answering Response Generation

KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with Inverse Transformation

no code implementations • 26 Sep 2023 • Haotian Li, Lingzhi Wang, Yuliang Wei, Richard Yi Da Xu, Bailing Wang

Knowledge graph completion is a task that revolves around filling in missing triples based on the information available in a knowledge graph.

Knowledge Graph Completion Link Prediction +1
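As an illustration of the triple-completion task described above (not KERMIT's own method), a TransE-style sketch scores a candidate triple by how closely the head embedding plus the relation embedding lands on the tail embedding. The entities, relation, and vectors below are toy values chosen by hand; real systems learn these embeddings from data.

```python
import numpy as np

# Toy embeddings; real systems learn these (illustrative values only).
ent = {"Paris": np.array([1.0, 0.0]),
       "France": np.array([1.0, 1.0]),
       "Berlin": np.array([0.0, 0.0]),
       "Germany": np.array([0.0, 1.0])}
rel = {"capital_of": np.array([0.0, 1.0])}

def transe_score(h, r, t):
    """TransE-style plausibility: a smaller ||h + r - t|| means the
    triple (head, relation, tail) is more likely to hold."""
    return np.linalg.norm(ent[h] + rel[r] - ent[t])

def predict_tail(h, r):
    """Fill in the missing tail of (h, r, ?) by ranking all entities."""
    return min((e for e in ent if e != h), key=lambda e: transe_score(h, r, e))
```

Here `predict_tail("Paris", "capital_of")` ranks all entities by score and returns the best-scoring tail, which is exactly the link-prediction setting the tags above refer to.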

Delta-LoRA: Fine-Tuning High-Rank Parameters with the Delta of Low-Rank Matrices

no code implementations • 5 Sep 2023 • Bojia Zi, Xianbiao Qi, Lingzhi Wang, Jianan Wang, Kam-Fai Wong, Lei Zhang

In this paper, we present Delta-LoRA, a novel parameter-efficient approach to fine-tuning large language models (LLMs).
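The title states the core idea: the pretrained weight matrix, frozen in vanilla LoRA, is itself nudged by the change (delta) in the low-rank product between optimization steps, so the high-rank parameters are tuned without storing their gradients. A minimal NumPy sketch of this update rule, with illustrative shapes and hyperparameters (the actual method runs inside a training loop with autograd):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical dimensions: full size d, low rank r

W = rng.normal(size=(d, d))          # pretrained weight (frozen in vanilla LoRA)
A = rng.normal(size=(d, r)) * 0.01   # low-rank adapter factors
B = np.zeros((r, d))

def delta_lora_step(W, A, B, grad_A, grad_B, lr=1e-2, lam=1.0):
    """One sketched Delta-LoRA step: update the low-rank factors A and B
    by gradient descent, then shift the full-rank W by the *delta* of
    the low-rank product A @ B, so W is tuned with no extra gradients."""
    old_AB = A @ B
    A = A - lr * grad_A
    B = B - lr * grad_B
    W = W + lam * (A @ B - old_AB)   # delta of the low-rank matrices
    return W, A, B
```

With zero gradients the delta vanishes and W is left unchanged, which makes the update easy to sanity-check.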

KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment

1 code implementation • 11 May 2023 • Lingzhi Wang, Tong Chen, Wei Yuan, Xingshan Zeng, Kam-Fai Wong, Hongzhi Yin

Recent legislation of the "right to be forgotten" has led to interest in machine unlearning, where learned models are endowed with the ability to forget information about specific training instances as if they had never existed in the training set.

Machine Unlearning Response Generation

Strategize Before Teaching: A Conversational Tutoring System with Pedagogy Self-Distillation

no code implementations • 27 Feb 2023 • Lingzhi Wang, Mrinmaya Sachan, Xingshan Zeng, Kam-Fai Wong

Conversational tutoring systems (CTSs) aim to help students master educational material with natural language interaction in the form of a dialog.

Response Generation

Opportunities and Challenges in Neural Dialog Tutoring

1 code implementation • 24 Jan 2023 • Jakub Macina, Nico Daheim, Lingzhi Wang, Tanmay Sinha, Manu Kapur, Iryna Gurevych, Mrinmaya Sachan

Designing dialog tutors has been challenging as it involves modeling the diverse and complex pedagogical strategies employed by human tutors.

Improving Conversational Recommender System via Contextual and Time-Aware Modeling with Less Domain-Specific Knowledge

no code implementations • 23 Sep 2022 • Lingzhi Wang, Shafiq Joty, Wei Gao, Xingshan Zeng, Kam-Fai Wong

In addition to conducting experiments on a popular dataset (ReDial), we also include a multi-domain dataset (OpenDialKG) to show the effectiveness of our model.

Recommendation Systems

Salt and pepper noise removal method based on stationary Framelet transform with non-convex sparsity regularization

no code implementations • 18 Oct 2021 • Yingpin Chen, Yuming Huang, Lingzhi Wang, Huiying Huang, Jianhua Song, Chaoqun Yu, Yanping Xu

For example, noise location information is often ignored, and the sparsity of salt-and-pepper noise is often described by the L1 norm, which cannot characterize the sparse variables accurately.

Salt-And-Pepper Noise Removal
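Salt-and-pepper noise itself is simple to simulate: a small fraction of pixels is driven to the intensity extremes, so the corruption is sparse, which is exactly what an L1 (or non-convex) sparsity term in the removal model is meant to capture. A minimal NumPy sketch, with an illustrative function name and defaults:

```python
import numpy as np

def add_salt_pepper(img, density=0.1, rng=None):
    """Corrupt a uint8 image with salt-and-pepper noise: roughly a
    `density` fraction of pixels is forced to 0 (pepper) or 255 (salt),
    split evenly; all other pixels are left untouched."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0                           # pepper
    noisy[(mask >= density / 2) & (mask < density)] = 255   # salt
    return noisy
```

Note that the residual `noisy - img` is nonzero only at the corrupted locations, which is the sparsity that regularization-based removal methods exploit.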

RecInDial: A Unified Framework for Conversational Recommendation with Pretrained Language Models

no code implementations • 14 Oct 2021 • Lingzhi Wang, Huang Hu, Lei Sha, Can Xu, Kam-Fai Wong, Daxin Jiang

Furthermore, we propose to evaluate the CRS models in an end-to-end manner, which can reflect the overall performance of the entire system rather than the performance of individual modules, compared to the separate evaluations of the two modules used in previous work.

Dialogue Generation Language Modelling +1

Quotation Recommendation and Interpretation Based on Transformation from Queries to Quotations

1 code implementation • ACL 2021 • Lingzhi Wang, Xingshan Zeng, Kam-Fai Wong

Quotation recommendation, which helps individuals express themselves better, is receiving growing attention.
