Search Results for author: Yongrui Chen

Found 19 papers, 10 papers with code

Magic Mushroom: A Customizable Benchmark for Fine-grained Analysis of Retrieval Noise Erosion in RAG Systems

no code implementations • 4 Jun 2025 • Yuxin Zhang, Yan Wang, Yongrui Chen, Shenyu Zhang, Xinbang Dai, Sheng Bi, Guilin Qi

Building on this, we introduce Magic Mushroom, a benchmark for replicating "magic mushroom" noise: contexts that appear relevant on the surface but covertly mislead RAG systems.

Denoising • Hallucination • +3
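To make the benchmark's goal concrete, the sketch below shows one way such noise erosion could be measured: dilute the gold retrieved context with superficially relevant distractor passages at configurable ratios and track how end-to-end answer accuracy degrades. This is an illustrative harness only; `answer_fn`, the data layout, and the noise ratios are assumptions, not Magic Mushroom's actual interface.

```python
# Hypothetical harness: measure how answer accuracy erodes as misleading
# passages are mixed into the retrieved context. All names are placeholders.
import random

def build_context(gold, distractors, noise_ratio, k=5, seed=0):
    """Assemble k passages, replacing a noise_ratio fraction with distractors."""
    rng = random.Random(seed)
    n_noise = round(k * noise_ratio)
    passages = gold[: k - n_noise] + rng.sample(distractors, n_noise)
    rng.shuffle(passages)
    return passages

def evaluate(answer_fn, dataset, distractors, noise_ratios=(0.0, 0.2, 0.4, 0.8)):
    """Report exact-match accuracy of a RAG pipeline at each noise level."""
    results = {}
    for ratio in noise_ratios:
        correct = 0
        for ex in dataset:  # ex: {"question", "gold_passages", "answer"}
            ctx = build_context(ex["gold_passages"], distractors, ratio)
            pred = answer_fn(ex["question"], ctx)
            correct += int(pred.strip().lower() == ex["answer"].strip().lower())
        results[ratio] = correct / len(dataset)
    return results
```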

Large Language Models Meet Knowledge Graphs for Question Answering: Synthesis and Opportunities

1 code implementation • 26 May 2025 • Chuangtao Ma, Yongrui Chen, Tianxing Wu, Arijit Khan, Haofen Wang

We systematically survey state-of-the-art advances in synthesizing LLMs and KGs for QA, and compare and analyze these approaches in terms of their strengths, limitations, and KG requirements.

Knowledge Graphs • Natural Language Understanding • +2

Pandora: A Code-Driven Large Language Model Agent for Unified Reasoning Across Diverse Structured Knowledge

no code implementations • 17 Apr 2025 • Yongrui Chen, Junhao He, Linbo Fu, Shenyu Zhang, Rihui Jin, Xinbang Dai, Jiaqi Li, Dehai Min, Nan Hu, Yuxin Zhang, Guilin Qi, Yi Huang, Tongtong Wu

Unified Structured Knowledge Reasoning (USKR) aims to answer natural language questions (NLQs) by using structured sources such as tables, databases, and knowledge graphs in a unified way.

Knowledge Graphs • Language Modeling • +2

Harnessing Diverse Perspectives: A Multi-Agent Framework for Enhanced Error Detection in Knowledge Graphs

1 code implementation • 27 Jan 2025 • Yu Li, Yi Huang, Guilin Qi, Junlan Feng, Nan Hu, Songlin Zhai, Haohan Xue, Yongrui Chen, Ruoyan Shen, Tongtong Wu

In specific industrial scenarios, our framework can facilitate training specialized agents on domain-specific knowledge graphs for error detection, highlighting its potential value in industrial applications.

Decision Making • Knowledge Graphs

Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models

no code implementations • 21 May 2024 • Jiaqi Li, Qianshan Wei, Chuanyi Zhang, Guilin Qi, Miaozeng Du, Yongrui Chen, Sheng Bi, Fan Liu

Alongside our method, we establish MMUBench, a new benchmark for machine unlearning (MU) in MLLMs, and introduce a collection of metrics for its evaluation.

Machine Unlearning

HeGTa: Leveraging Heterogeneous Graph-enhanced Large Language Models for Few-shot Complex Table Understanding

no code implementations • 28 Mar 2024 • Rihui Jin, Yu Li, Guilin Qi, Nan Hu, Yuan-Fang Li, Jiaoyan Chen, Jianan Wang, Yongrui Chen, Dehai Min, Sheng Bi

Table understanding (TU) has achieved promising advancements, but it faces the challenges of scarce manually labeled tables and complex table structures. To address these challenges, we propose HGT, a framework with a heterogeneous graph (HG)-enhanced large language model (LLM) for tackling few-shot TU tasks. It leverages the LLM by aligning table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning, and handles complex tables via a multi-task pre-training scheme involving three novel multi-granularity self-supervised HG pre-training objectives. We empirically demonstrate the effectiveness of HGT, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.

Language Modeling • Language Modelling • +1
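The sketch below illustrates the general idea of casting a table as a heterogeneous graph with typed nodes and edges, which is the kind of structure a HG-enhanced encoder consumes. The node and edge schema here is an assumption chosen for illustration, not the schema used by HGT/HeGTa.

```python
# Illustrative only: one simple way to turn a table into a heterogeneous graph
# with typed nodes (column, row, cell) and typed edges, using plain Python
# structures to stay library-agnostic.
from dataclasses import dataclass, field

@dataclass
class HeteroGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> {"type": ..., "text": ...}
    edges: list = field(default_factory=list)   # (src_id, edge_type, dst_id)

def table_to_hetero_graph(header, rows):
    g = HeteroGraph()
    for j, col_name in enumerate(header):
        g.nodes[f"col:{j}"] = {"type": "column", "text": col_name}
    for i, row in enumerate(rows):
        g.nodes[f"row:{i}"] = {"type": "row", "text": f"row {i}"}
        for j, value in enumerate(row):
            cell_id = f"cell:{i}:{j}"
            g.nodes[cell_id] = {"type": "cell", "text": str(value)}
            g.edges.append((cell_id, "in_row", f"row:{i}"))
            g.edges.append((cell_id, "in_column", f"col:{j}"))
    return g

g = table_to_hetero_graph(["city", "population"], [["Nanjing", 9_423_000]])
print(len(g.nodes), len(g.edges))  # 5 nodes (2 columns, 1 row, 2 cells), 4 edges
```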

MATEval: A Multi-Agent Discussion Framework for Advancing Open-Ended Text Evaluation

1 code implementation • 28 Mar 2024 • Yu Li, Shenyu Zhang, Rui Wu, Xiutian Huang, Yongrui Chen, Wenhao Xu, Guilin Qi, Dehai Min

Experimental results show that our framework outperforms existing open-ended text evaluation methods and achieves the highest correlation with human evaluation, confirming its effectiveness in addressing the uncertainty and instability of evaluating LLM-generated text.

DEE: Dual-stage Explainable Evaluation Method for Text Generation

no code implementations • 18 Mar 2024 • Shenyu Zhang, Yu Li, Rui Wu, Xiutian Huang, Yongrui Chen, Wenhao Xu, Guilin Qi

Automatic methods for evaluating machine-generated texts hold significant importance due to the expanding applications of generative systems.

Diagnostic • Hallucination • +1

MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge Editing

no code implementations • 18 Feb 2024 • Jiaqi Li, Miaozeng Du, Chuanyi Zhang, Yongrui Chen, Nan Hu, Guilin Qi, Haiyun Jiang, Siyuan Cheng, Bozhong Tian

Multimodal knowledge editing represents a critical advancement in enhancing the capabilities of Multimodal Large Language Models (MLLMs).

knowledge editing

DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping

1 code implementation • 11 Sep 2023 • Yongrui Chen, Haiyun Jiang, Xinting Huang, Shuming Shi, Guilin Qi

In particular, compared to the best-performing baseline, the LLM trained using our generated dataset exhibits a 10% relative improvement in performance on AlpacaEval, despite utilizing only 1/5 of its training data.

Hallucination • Instruction Following

Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family

2 code implementations • 14 Mar 2023 • Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi

ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge.

Knowledge Base Question Answering • Language Modeling • +4

Learn from Yesterday: A Semi-Supervised Continual Learning Method for Supervision-Limited Text-to-SQL Task Streams

1 code implementation • 21 Nov 2022 • Yongrui Chen, Xinnan Guo, Tongtong Wu, Guilin Qi, Yang Li, Yang Dong

The first solution, Vanilla, performs self-training: it augments the supervised training data with pseudo-labeled instances predicted for the current task, and replaces full-volume retraining with episodic memory replay to balance training efficiency against performance on previous tasks.

Continual Learning • Text to SQL • +1
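The sketch below outlines the loop the Vanilla solution describes, under assumed interfaces: pseudo-label confident unlabeled examples of the current task, train on them together with a small episodic replay buffer instead of the full history, then refresh the buffer. `model.predict`, `model.train_on`, and the confidence filter are placeholders, not the paper's API.

```python
# Minimal sketch of self-training plus episodic memory replay for a task stream.
import random

def train_task(model, labeled, unlabeled, memory, mem_size=50, threshold=0.9, seed=0):
    # 1. Self-training: pseudo-label confident unlabeled NLQs of the current task.
    pseudo = []
    for x in unlabeled:
        sql, conf = model.predict(x)           # assumed to return (prediction, confidence)
        if conf >= threshold:
            pseudo.append((x, sql))
    # 2. Episodic memory replay: mix a small buffer of past-task examples into the
    #    update instead of retraining on all previous data.
    batch = labeled + pseudo + list(memory)
    model.train_on(batch)                      # assumed single training call
    # 3. Refresh the replay buffer with a sample of the current task's labeled data.
    rng = random.Random(seed)
    memory.extend(rng.sample(labeled, min(mem_size, len(labeled))))
    return model, memory
```

The trade-off steered by `mem_size` and `threshold` is the one the abstract names: training efficiency versus retained performance on earlier tasks.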

Edge-Cloud Cooperation for DNN Inference via Reinforcement Learning and Supervised Learning

no code implementations • 11 Oct 2022 • Tinghao Zhang, Zhijun Li, Yongrui Chen, Kwok-Yan Lam, Jun Zhao

A reinforcement learning (RL)-based DNN compression approach is used to generate, from the heavyweight model, a lightweight model suitable for the edge.

image-classification • Image Classification • +4

Leveraging Table Content for Zero-shot Text-to-SQL with Meta-Learning

1 code implementation • 12 Sep 2021 • Yongrui Chen, Xinnan Guo, Chaojie Wang, Jian Qiu, Guilin Qi, Meng Wang, Huiying Li

Compared to the larger pre-trained model and the tabular-specific pre-trained model, our approach is still competitive.

Meta-Learning • Text to SQL • +1

Edge-Cloud Collaborated Object Detection via Difficult-Case Discriminator

no code implementations • 29 Aug 2021 • Zhiqiang Cao, Zhijun Li, Pan Heng, Yongrui Chen, Daqi Xie, Jie Liu

To address this challenge, we propose a small-big model framework that deploys a big model in the cloud and a small model on the edge devices.

Object • object-detection • +1
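The sketch below shows the small-big collaboration pattern in its simplest form: run the small detector on the edge and escalate only the cases a difficult-case discriminator flags to the big cloud model. All three callables are placeholder interfaces, not the paper's implementation.

```python
# Hypothetical edge-cloud collaboration: cheap on-device detection by default,
# cloud offloading only for cases the discriminator judges difficult.

def detect(image, run_small_model, is_difficult, send_to_cloud):
    """Return (detections, offloaded): prefer the edge model for easy cases."""
    edge_dets = run_small_model(image)      # fast, on-device inference
    if is_difficult(image, edge_dets):      # e.g. low confidence or crowded scene
        return send_to_cloud(image), True   # accurate but slower big model
    return edge_dets, False

def detect_batch(images, run_small_model, is_difficult, send_to_cloud):
    """Run the pipeline over a batch and report the offloading rate."""
    results, offloaded = [], 0
    for img in images:
        dets, sent = detect(img, run_small_model, is_difficult, send_to_cloud)
        results.append(dets)
        offloaded += int(sent)
    return results, offloaded / max(len(images), 1)
```

The design intent is that the discriminator keeps most traffic on-device, so cloud bandwidth and latency are spent only on the difficult cases.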
