Search Results for author: Guanting Dong

Found 29 papers, 17 papers with code

Spatial Hierarchy Aware Residual Pyramid Network for Time-of-Flight Depth Denoising

1 code implementation • ECCV 2020 • Guanting Dong, Yueyi Zhang, Zhiwei Xiong

In this paper, we propose a Spatial Hierarchy Aware Residual Pyramid Network, called SHARP-Net, to remove depth noise by fully exploiting the geometric information of the scene at different scales.

Denoising
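
The full SHARP-Net architecture is in the linked code; as a rough illustration of the residual-pyramid idea (coarse-to-fine residual corrections across scales), here is a minimal PyTorch sketch. Channel widths, pyramid depth, and module layout are assumptions, not the paper's design.

    # Minimal residual-pyramid sketch (hypothetical; not the SHARP-Net design).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualPyramidDenoiser(nn.Module):
        def __init__(self, levels=3, ch=16):
            super().__init__()
            self.levels = levels
            # One small conv block per scale predicts a residual correction.
            self.blocks = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(ch, 1, 3, padding=1))
                for _ in range(levels))

        def forward(self, noisy_depth):
            # Downsample the noisy depth into a pyramid.
            pyramid = [noisy_depth]
            for _ in range(self.levels - 1):
                pyramid.append(F.avg_pool2d(pyramid[-1], 2))
            # Refine coarse-to-fine: upsample, then add a predicted residual.
            estimate = pyramid[-1]
            for level in reversed(range(self.levels)):
                if estimate.shape[-2:] != pyramid[level].shape[-2:]:
                    estimate = F.interpolate(estimate, size=pyramid[level].shape[-2:],
                                             mode="bilinear", align_corners=False)
                feat = torch.cat([pyramid[level], estimate], dim=1)
                estimate = estimate + self.blocks[level](feat)
            return estimate

    denoised = ResidualPyramidDenoiser()(torch.randn(1, 1, 64, 64))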

Noise-BERT: A Unified Perturbation-Robust Framework with Noise Alignment Pre-training for Noisy Slot Filling Task

no code implementations • 22 Feb 2024 • Jinxu Zhao, Guanting Dong, Yueyan Qiu, Tingfeng Hui, Xiaoshuai Song, Daichi Guo, Weiran Xu

In this study, we address the challenges posed by input perturbations in slot filling by proposing Noise-BERT, a unified Perturbation-Robust Framework with Noise Alignment Pre-training.

Adversarial Attack • Contrastive Learning • +5

PreAct: Predicting Future in ReAct Enhances Agent's Planning Ability

1 code implementation • 18 Feb 2024 • Dayuan Fu, Jianzhao Huang, Siyuan Lu, Guanting Dong, Yejie Wang, Keqing He, Weiran Xu

Addressing the discrepancies between predictions and actual outcomes often aids individuals in expanding their thought processes and engaging in reflection, thereby facilitating reasoning in the correct direction.

Language Modelling • Large Language Model
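
A minimal sketch of a PreAct-style step, assuming a generic llm(prompt) completion function: the agent first predicts plausible outcomes of its next action, then conditions its ReAct-style plan on those predictions. The prompt wording here is illustrative, not the paper's.

    # PreAct-style step sketch; `llm` is a placeholder for any completion API.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in a chat/completion model here")

    def preact_step(task: str, history: str) -> tuple[str, str]:
        # 1) Predict plausible futures before acting.
        prediction = llm(
            f"Task: {task}\nHistory: {history}\n"
            "Predict several plausible outcomes of the next action.")
        # 2) Condition the ReAct-style thought/action on those predictions.
        plan = llm(
            f"Task: {task}\nHistory: {history}\n"
            f"Predicted outcomes: {prediction}\n"
            "Give the next thought and action, taking the predictions into account.")
        return prediction, plan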

Knowledge Editing on Black-box Large Language Models

1 code implementation • 13 Feb 2024 • Xiaoshuai Song, Zhengyang Wang, Keqing He, Guanting Dong, Yutao Mou, Jinxu Zhao, Weiran Xu

Knowledge editing (KE) aims to efficiently and precisely modify the behavior of large language models (LLMs) to update specific knowledge without negatively influencing other knowledge.

knowledge editing
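
Since the model is black-box, edits must live outside its weights. A minimal sketch of one such strategy, prompt-prepended edits held in an external store; the store, matching rule, and blackbox_llm stub are assumptions, not the paper's method:

    # Prompt-prepended editing sketch for an API-only model (illustrative only).
    def blackbox_llm(prompt: str) -> str:
        raise NotImplementedError("any API-only model")

    EDITS = {  # hypothetical external edit store: trigger phrase -> updated fact
        "capital of foo": "Updated fact: the capital of Foo is now Bar.",
    }

    def edited_answer(query: str) -> str:
        facts = [fact for key, fact in EDITS.items() if key in query.lower()]
        prefix = "Follow these updated facts:\n" + "\n".join(facts) + "\n" if facts else ""
        return blackbox_llm(prefix + query)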

OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models

1 code implementation • 25 Oct 2023 • Mingfeng Xue, Dayiheng Liu, Kexin Yang, Guanting Dong, Wenqiang Lei, Zheng Yuan, Chang Zhou, Jingren Zhou

Furthermore, we assemble three test sets for comprehensive evaluation: an occu-test set covering 25 occupational categories, an estate set focusing on real estate, and an occu-quora set containing real-world questions from Quora.

Large Language Models Meet Open-World Intent Discovery and Recognition: An Evaluation of ChatGPT

1 code implementation • 16 Oct 2023 • Xiaoshuai Song, Keqing He, Pei Wang, Guanting Dong, Yutao Mou, Jingang Wang, Yunsen Xian, Xunliang Cai, Weiran Xu

The tasks of out-of-domain (OOD) intent discovery and generalized intent discovery (GID) aim to extend a closed intent classifier to open-world intent sets, which is crucial to task-oriented dialogue (TOD) systems.

In-Context Learning • Intent Discovery

DemoSG: Demonstration-enhanced Schema-guided Generation for Low-resource Event Extraction

no code implementations • 16 Oct 2023 • Gang Zhao, Xiaocheng Gong, Xinjie Yang, Guanting Dong, Shudong Lu, Si Li

Most current Event Extraction (EE) methods focus on the high-resource scenario, which requires a large amount of annotated data and can hardly be applied to low-resource domains.

Domain Adaptation • Event Extraction • +2

Type-aware Decoding via Explicitly Aggregating Event Information for Document-level Event Extraction

no code implementations • 16 Oct 2023 • Gang Zhao, Yidong Shi, Shudong Lu, Xinjie Yang, Guanting Dong, Jian Xu, Xiaocheng Gong, Si Li

Although previous methods attempt to address these challenges, they overlook the interference of event-unrelated sentences during event detection and neglect the mutual interference of different event roles during argument extraction.

Document-level Event Extraction • Event Detection • +1

Semantic Parsing by Large Language Models for Intricate Updating Strategies of Zero-Shot Dialogue State Tracking

1 code implementation • 16 Oct 2023 • Yuxiang Wu, Guanting Dong, Weiran Xu

Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring and annotating task-oriented dialogues, which can be time-consuming and costly.

Dialogue State Tracking • In-Context Learning • +3
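
A minimal sketch of the state-updating side of such an approach: once an LLM parses the turn into update operations, applying them to the dialogue state is mechanical. The operation names and tuple format here are illustrative assumptions.

    # Applying parsed update operations to a dialogue state (names illustrative).
    def apply_updates(state: dict, ops: list) -> dict:
        state = dict(state)  # keep the previous turn's state unmodified
        for op, slot, value in ops:
            if op == "update":       # add or overwrite a slot value
                state[slot] = value
            elif op == "delete":     # the user retracted a constraint
                state.pop(slot, None)
        return state

    state = apply_updates(
        {"hotel-area": "north"},
        [("update", "hotel-stars", "4"), ("delete", "hotel-area", None)])
    # -> {"hotel-stars": "4"}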

ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models

1 code implementation • 13 Oct 2023 • Haoran Luo, Haihong E, Zichen Tang, Shiyao Peng, Yikai Guo, Wentai Zhang, Chenghao Ma, Guanting Dong, Meina Song, Wei Lin

Knowledge Base Question Answering (KBQA) aims to derive answers to natural language questions over large-scale knowledge bases (KBs), which are generally divided into two research components: knowledge retrieval and semantic parsing.

Knowledge Base Question Answering • Knowledge Graphs • +2
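
A minimal sketch of the generate-then-retrieve flow under stated assumptions: a fine-tuned LLM first drafts a logical form with surface names, and retrieval then grounds those names to KB labels. The toy KB and difflib-based matcher stand in for the paper's retriever.

    # Generate-then-retrieve sketch: draft first, ground surface names second.
    import difflib

    KB_ENTITIES = ["Barack_Obama", "Michelle_Obama", "Honolulu"]  # toy KB labels

    def ground(surface_name: str) -> str:
        # Replace a drafted surface name with the closest KB label.
        match = difflib.get_close_matches(surface_name, KB_ENTITIES, n=1)
        return match[0] if match else surface_name

    draft = ("place_of_birth", "barack obama")   # (relation, entity) from the LLM
    grounded = (draft[0], ground(draft[1]))      # -> ("place_of_birth", "Barack_Obama")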

Revisit Input Perturbation Problems for LLMs: A Unified Robustness Evaluation Framework for Noisy Slot Filling Task

1 code implementation • 10 Oct 2023 • Guanting Dong, Jinxu Zhao, Tingfeng Hui, Daichi Guo, Wenlong Wan, Boqi Feng, Yueyan Qiu, Zhuoma Gongque, Keqing He, Zechen Wang, Weiran Xu

To address these challenges, we propose a unified robustness evaluation framework based on the slot-filling task to systematically evaluate the dialogue understanding capability of LLMs in diverse input perturbation scenarios.

Data Augmentation • Dialogue Understanding • +3
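
As a sketch of the mechanics only (the paper covers a broader set of perturbation types), one character-level perturbation channel might look like this; the rate and typo model are assumptions:

    # One perturbation channel: random character-level typos (rate is assumed).
    import random

    def char_typo(utterance: str, rate: float = 0.05, seed: int = 0) -> str:
        rng = random.Random(seed)
        chars = list(utterance)
        for i, c in enumerate(chars):
            if c.isalpha() and rng.random() < rate:
                chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
        return "".join(chars)

    clean = "book a flight from boston to denver"
    noisy = char_typo(clean)  # run the slot filler on both and compare slot F1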

Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization

1 code implementation • 9 Oct 2023 • Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, Chang Zhou

In this paper, we conduct an investigation for such data augmentation in math reasoning and are intended to answer: (1) What strategies of data augmentation are more effective; (2) What is the scaling relationship between the amount of augmented data and model performance; and (3) Can data augmentation incentivize generalization to out-of-domain mathematical reasoning tasks?

Ranked #50 on Math Word Problem Solving on MATH (using extra training data)

Arithmetic Reasoning • Data Augmentation • +3
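
A minimal sketch of the two augmentation axes being studied, query augmentation (paraphrases) and response augmentation (extra sampled solutions kept only when the final answer matches); llm and the answer check are placeholder assumptions:

    # Query augmentation (paraphrases) x response augmentation (extra samples),
    # filtered by a crude final-answer check; `llm` is a placeholder.
    def llm(prompt: str, n: int = 1) -> list:
        raise NotImplementedError("return n sampled completions")

    def augment(question: str, gold_answer: str, n_queries=2, n_responses=4):
        pairs = []
        for q in [question] + llm(f"Paraphrase this problem: {question}", n=n_queries):
            for r in llm(f"Solve step by step: {q}", n=n_responses):
                if r.strip().endswith(gold_answer):  # keep only correct responses
                    pairs.append((q, r))
        return pairs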

How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition

2 code implementations • 9 Oct 2023 • Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, Jingren Zhou

We propose four intriguing research questions to explore the association between model performance and various factors including data amount, composition ratio, model size and SFT strategies.

Code Generation • Instruction Following • +2
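
A minimal sketch of the experimental variable itself, building an SFT mixture at a controlled composition ratio; dataset names and sizes are toy assumptions:

    # Building an SFT mixture at a fixed composition ratio (toy datasets).
    import random

    def mix(datasets: dict, ratios: dict, total: int, seed: int = 0) -> list:
        rng = random.Random(seed)
        out = []
        for name, r in ratios.items():
            k = min(int(total * r), len(datasets[name]))
            out += rng.sample(datasets[name], k)
        rng.shuffle(out)
        return out

    datasets = {"math": [f"math-{i}" for i in range(1000)],
                "code": [f"code-{i}" for i in range(1000)],
                "general": [f"gen-{i}" for i in range(2000)]}
    mixture = mix(datasets, {"math": 0.25, "code": 0.25, "general": 0.5}, total=1000)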

Towards Robust and Generalizable Training: An Empirical Study of Noisy Slot Filling for Input Perturbations

no code implementations • 5 Oct 2023 • Jiachi Liu, LiWen Wang, Guanting Dong, Xiaoshuai Song, Zechen Wang, Zhengyang Wang, Shanglin Lei, Jinzheng Zhao, Keqing He, Bo Xiao, Weiran Xu

The proposed dataset contains five types of human-annotated noise, all of which occur in real-world dialogue scenarios, and we incorporate extensive robust-training methods for slot filling into the proposed framework.

slot-filling • Slot Filling

InstructERC: Reforming Emotion Recognition in Conversation with a Retrieval Multi-task LLMs Framework

1 code implementation • 21 Sep 2023 • Shanglin Lei, Guanting Dong, XiaoPing Wang, Keheng Wang, Sirui Wang

The field of emotion recognition in conversation (ERC) has focused on separating sentence feature encoding from context modeling, leaving generative paradigms based on unified designs underexplored.

Emotion Recognition in Conversation • Retrieval • +4

Bridging the KB-Text Gap: Leveraging Structured Knowledge-aware Pre-training for KBQA

1 code implementation • 28 Aug 2023 • Guanting Dong, Rumei Li, Sirui Wang, Yupeng Zhang, Yunsen Xian, Weiran Xu

Knowledge Base Question Answering (KBQA) aims to answer natural language questions with factual information such as entities and relations in KBs.

Knowledge Base Question Answering • Retrieval

Scaling Relationship on Learning Mathematical Reasoning with Large Language Models

1 code implementation • 3 Aug 2023 • Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, Jingren Zhou

We find that, with augmented samples containing more distinct reasoning paths, RFT improves mathematical reasoning performance more for LLMs.

Ranked #100 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning • GSM8K • +1
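
Rejection sampling fine-tuning (RFT) keeps sampled solutions with correct answers and favors distinct reasoning paths. A minimal sketch, assuming a stub sampler and a crude equation-based signature for distinctness (the paper's actual filtering may differ):

    # RFT-style selection sketch: keep correct solutions, deduplicate by an
    # equation signature so retained reasoning paths are distinct.
    import re

    def sample_solutions(question: str, k: int) -> list:
        raise NotImplementedError("sample k chain-of-thought solutions")

    def distinct_correct(question: str, answer: str, k: int = 16) -> list:
        kept, seen = [], set()
        for sol in sample_solutions(question, k):
            if not sol.strip().endswith(answer):
                continue                                   # reject wrong answers
            sig = tuple(re.findall(r"\d+[^=\n]*=\s*\d+", sol))  # crude path signature
            if sig not in seen:
                seen.add(sig)
                kept.append(sol)                           # a new distinct path
        return kept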

Generative Zero-Shot Prompt Learning for Cross-Domain Slot Filling with Inverse Prompting

1 code implementation • 6 Jul 2023 • Xuefeng Li, LiWen Wang, Guanting Dong, Keqing He, Jinzheng Zhao, Hao Lei, Jiachi Liu, Weiran Xu

Zero-shot cross-domain slot filling aims to transfer knowledge from the labeled source domain to the unlabeled target domain.

slot-filling • Slot Filling

Revisit Out-Of-Vocabulary Problem for Slot Filling: A Unified Contrastive Framework with Multi-level Data Augmentations

no code implementations • 27 Feb 2023 • Daichi Guo, Guanting Dong, Dayuan Fu, Yuxiang Wu, Chen Zeng, Tingfeng Hui, LiWen Wang, Xuefeng Li, Zechen Wang, Keqing He, Xinyue Cui, Weiran Xu

In real dialogue scenarios, existing slot filling models, which tend to memorize entity patterns, show significantly reduced generalization when facing Out-of-Vocabulary (OOV) problems.

Contrastive Learning • slot-filling • +1

A Prototypical Semantic Decoupling Method via Joint Contrastive Learning for Few-Shot Named Entity Recognition

no code implementations • 27 Feb 2023 • Guanting Dong, Zechen Wang, LiWen Wang, Daichi Guo, Dayuan Fu, Yuxiang Wu, Chen Zeng, Xuefeng Li, Tingfeng Hui, Keqing He, Xinyue Cui, QiXiang Gao, Weiran Xu

Specifically, we decouple class-specific prototypes and contextual semantic prototypes by two masking strategies to lead the model to focus on two different semantic information for inference.

Contrastive Learning • few-shot-ner • +4
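
A minimal sketch of nearest-prototype inference with two decoupled prototype sets, one built from entity tokens and one from masked context; the encoder, the fusion weight alpha, and the distance fusion are assumptions rather than the paper's exact formulation:

    # Nearest-prototype inference with two decoupled prototype sets.
    import torch

    def prototypes(emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Mean embedding per class, ordered by sorted class id: (C, d).
        return torch.stack([emb[labels == c].mean(0) for c in labels.unique()])

    def predict(query, entity_protos, context_protos, alpha=0.5):
        # Fuse distances to class-specific and contextual prototypes.
        d_ent = torch.cdist(query, entity_protos)
        d_ctx = torch.cdist(query, context_protos)
        return (alpha * d_ent + (1 - alpha) * d_ctx).argmin(dim=-1)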

Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems

1 code implementation • 17 Oct 2022 • Weihao Zeng, Keqing He, Zechen Wang, Dayuan Fu, Guanting Dong, Ruotong Geng, Pei Wang, Jingang Wang, Chaobo Sun, Wei Wu, Weiran Xu

Recent advances in neural approaches greatly improve task-oriented dialogue (TOD) systems which assist users to accomplish their goals.

A Robust Contrastive Alignment Method For Multi-Domain Text Classification

no code implementations • 26 Apr 2022 • Xuefeng Li, Hao Lei, LiWen Wang, Guanting Dong, Jinzheng Zhao, Jiachi Liu, Weiran Xu, Chunyun Zhang

In this paper, we propose a robust contrastive alignment method to align text classification features of various domains in the same feature space by supervised contrastive learning.

Contrastive Learning • text-classification • +1
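
The building block such a method relies on is a supervised contrastive loss (Khosla et al., 2020). A minimal PyTorch sketch of that loss follows; the paper's domain-alignment specifics are not reproduced here:

    # Supervised contrastive loss (SupCon), per Khosla et al., 2020.
    import torch
    import torch.nn.functional as F

    def supcon_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
        # features: (N, d) embeddings; labels: (N,) class ids.
        features = F.normalize(features, dim=1)
        sim = features @ features.t() / tau               # pairwise similarities
        self_mask = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        # Log-softmax over all other samples (self excluded from denominator).
        log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, -1e9), 1, keepdim=True)
        has_pos = pos.any(dim=1)                          # anchors with >=1 positive
        loss = -(log_prob * pos)[has_pos].sum(1) / pos[has_pos].sum(1)
        return loss.mean()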

Exploiting Rigidity Constraints for LiDAR Scene Flow Estimation

no code implementations • CVPR 2022 • Guanting Dong, Yueyi Zhang, HanLin Li, Xiaoyan Sun, Zhiwei Xiong

Previous LiDAR scene flow estimation methods, especially recurrent neural networks, usually suffer from structure distortion in challenging cases, such as sparse reflection and motion occlusions.

Autonomous Driving • Scene Flow Estimation
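
One standard building block for enforcing rigidity is the best-fit rigid transform between corresponding points (the Kabsch algorithm), which a scene-flow method can use to project per-point flow onto a rigid motion. A minimal NumPy sketch, not the paper's full pipeline:

    # Best-fit rigid transform between corresponding points (Kabsch algorithm).
    import numpy as np

    def kabsch(src: np.ndarray, dst: np.ndarray):
        # src, dst: (N, 3) corresponding points; returns R, t with R @ p + t ~ q.
        c_src, c_dst = src.mean(0), dst.mean(0)
        H = (src - c_src).T @ (dst - c_dst)        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, c_dst - R @ c_src

    # Rigid flow for a segment's points p_i: flow_i = (R @ p_i + t) - p_i.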
