Search Results for author: Fanghua Ye

Found 25 papers, 16 papers with code

The Lighthouse of Language: Enhancing LLM Agents via Critique-Guided Improvement

no code implementations 20 Mar 2025 Ruihan Yang, Fanghua Ye, Jian Li, Siyu Yuan, Yikai Zhang, Zhaopeng Tu, Xiaolong Li, Deqing Yang

In this work, we introduce Critique-Guided Improvement (CGI), a novel two-player framework, comprising an actor model that explores an environment and a critic model that generates detailed natural language feedback.
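The excerpt describes a two-player loop in which an actor acts and a critic replies with natural language feedback. The sketch below illustrates that loop generically; `actor_step`, `critic_feedback`, and `environment_step` are hypothetical placeholders (in practice the actor and critic would be LLM calls), not the paper's implementation.

```python
# A minimal, generic sketch of an actor/critic feedback loop in the spirit of
# critique-guided improvement. All three helpers are hypothetical stand-ins.

def actor_step(context: str) -> str:
    """Placeholder actor: propose the next action given the trajectory so far."""
    return f"action proposed for: {context[-40:]}"

def environment_step(action: str) -> str:
    """Placeholder environment: return an observation for the action."""
    return f"observation after {action}"

def critic_feedback(context: str, action: str, observation: str) -> str:
    """Placeholder critic: return natural-language feedback on the last action."""
    return f"Feedback: action '{action}' led to '{observation}'; consider refining it."

def run_episode(task: str, max_steps: int = 3) -> str:
    context = f"Task: {task}"
    for _ in range(max_steps):
        action = actor_step(context)                                # actor explores
        observation = environment_step(action)                      # environment responds
        feedback = critic_feedback(context, action, observation)    # critic critiques
        # The critique is fed back into the actor's context for the next attempt.
        context += f"\n{action}\n{observation}\n{feedback}"
    return context

if __name__ == "__main__":
    print(run_episode("book a flight"))
```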

ParallelComp: Parallel Long-Context Compressor for Length Extrapolation

no code implementations 20 Feb 2025 Jing Xiong, Jianghan Shen, Chuanyang Zheng, Zhongwei Wan, Chenyang Zhao, Chiwun Yang, Fanghua Ye, Hongxia Yang, Lingpeng Kong, Ngai Wong

To mitigate the attention sink issue, we propose an attention calibration strategy that reduces biases, ensuring more stable long-range attention.

PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning

no code implementations 28 Nov 2024 Shenghui Li, Edith C.-H. Ngai, Fanghua Ye, Thiemo Voigt

This paper introduces a novel security threat to FedPEFT, termed PEFT-as-an-Attack (PaaA), which exposes how PEFT can be exploited as an attack vector to circumvent PLMs' safety alignment and generate harmful content in response to malicious prompts.

Federated Learning parameter-efficient fine-tuning +2

UncertaintyRAG: Span-Level Uncertainty Enhanced Long-Context Modeling for Retrieval-Augmented Generation

no code implementations 3 Oct 2024 Zixuan Li, Jing Xiong, Fanghua Ye, Chuanyang Zheng, Xun Wu, Jianqiao Lu, Zhongwei Wan, Xiaodan Liang, Chengming Li, Zhenan Sun, Lingpeng Kong, Ngai Wong

We present UncertaintyRAG, a novel approach for long-context Retrieval-Augmented Generation (RAG) that utilizes Signal-to-Noise Ratio (SNR)-based span uncertainty to estimate similarity between text chunks.

Chunking Language Modeling +3
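The excerpt mentions Signal-to-Noise Ratio (SNR)-based span uncertainty as the core signal. Below is a minimal sketch of one plausible reading, scoring a span by the mean of its token log-probabilities over their standard deviation; the function name and formula are assumptions, not the paper's exact definition, and how such scores feed into chunk similarity is not shown.

```python
import numpy as np

def span_snr_uncertainty(token_logprobs: np.ndarray) -> float:
    """SNR-style score for a span: |mean token log-probability| divided by its
    standard deviation. A higher ratio suggests a more stable (less uncertain)
    span. This is an assumed proxy, not the paper's exact formula."""
    mu = token_logprobs.mean()
    sigma = token_logprobs.std() + 1e-8          # avoid division by zero
    return float(abs(mu) / sigma)

# Toy usage: two chunks, each represented by per-token log-probs from some LM.
chunk_a = np.array([-0.8, -0.9, -0.85, -0.95])   # consistent -> high SNR
chunk_b = np.array([-0.2, -3.5, -0.4, -2.9])     # erratic    -> low SNR
print(span_snr_uncertainty(chunk_a), span_snr_uncertainty(chunk_b))
```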

Unveiling In-Context Learning: A Coordinate System to Understand Its Working Mechanism

1 code implementation 24 Jul 2024 Anhao Zhao, Fanghua Ye, Jinlan Fu, Xiaoyu Shen

Recent research presents two conflicting views on ICL: One emphasizes the impact of similar examples in the demonstrations, stressing the need for label correctness and more shots.

In-Context Learning

Synergizing Foundation Models and Federated Learning: A Survey

no code implementations 18 Jun 2024 Shenghui Li, Fanghua Ye, Meng Fang, Jiaxu Zhao, Yun-Hin Chan, Edith C.-H. Ngai, Thiemo Voigt

The recent development of Foundation Models (FMs), represented by large language models, vision transformers, and multimodal models, has been making a significant impact on both academia and industry.

Federated Learning Survey

Anchor-based Large Language Models

1 code implementation 12 Feb 2024 Jianhui Pang, Fanghua Ye, Derek Fai Wong, Xin He, Wanshun Chen, Longyue Wang

Large language models (LLMs) predominantly employ decoder-only transformer architectures, necessitating the retention of keys/values information for historical tokens to provide contextual information and avoid redundant computation.

Computational Efficiency Decoder +1
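The excerpt points at the cost that motivates the method: decoder-only models must keep keys/values for all historical tokens. The sketch below shows plain single-head KV caching, i.e., the baseline memory behaviour that anchor-based compression targets, not the anchor-based method itself.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class KVCache:
    """Plain per-layer key/value cache for a single attention head.
    Memory grows linearly with the number of historical tokens, which is
    the overhead that anchor-based compression aims to reduce."""
    def __init__(self, d: int):
        self.keys = np.empty((0, d))
        self.values = np.empty((0, d))

    def step(self, q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
        # Append the new token's key/value, then attend over the full history.
        self.keys = np.vstack([self.keys, k[None, :]])
        self.values = np.vstack([self.values, v[None, :]])
        scores = self.keys @ q / np.sqrt(q.shape[-1])
        return softmax(scores) @ self.values

d = 8
cache = KVCache(d)
rng = np.random.default_rng(0)
for _ in range(5):                      # five decoding steps
    q, k, v = rng.normal(size=(3, d))
    out = cache.step(q, k, v)
print(cache.keys.shape)                 # (5, 8): one cached K/V per historical token
```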

Benchmarking LLMs via Uncertainty Quantification

1 code implementation 23 Jan 2024 Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, Zhaopeng Tu

The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods.

Benchmarking Uncertainty Quantification
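The excerpt proposes evaluating LLMs with uncertainty quantification. As a generic illustration only (not necessarily the metric used in the paper), the sketch below computes predictive entropy over a model's answer-option probabilities.

```python
import numpy as np

def predictive_entropy(option_probs: np.ndarray) -> float:
    """Shannon entropy of a model's distribution over answer options;
    higher entropy means a more uncertain prediction."""
    p = option_probs / option_probs.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

# Toy usage: normalised scores for options A-D on two questions.
confident = np.array([0.90, 0.05, 0.03, 0.02])
unsure    = np.array([0.30, 0.28, 0.22, 0.20])
print(predictive_entropy(confident), predictive_entropy(unsure))
```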

Salute the Classic: Revisiting Challenges of Machine Translation in the Age of Large Language Models

1 code implementation 16 Jan 2024 Jianhui Pang, Fanghua Ye, Longyue Wang, Dian Yu, Derek F. Wong, Shuming Shi, Zhaopeng Tu

This study revisits these challenges, offering insights into their ongoing relevance in the context of advanced Large Language Models (LLMs): domain mismatch, amount of parallel data, rare word prediction, translation of long sentences, attention model as word alignment, and sub-optimal beam search.

Machine Translation NMT +2

Training-free Zero-shot Composed Image Retrieval with Local Concept Reranking

no code implementations 14 Dec 2023 Shitong Sun, Fanghua Ye, Shaogang Gong

Composed image retrieval attempts to retrieve an image of interest from gallery images through a composed query of a reference image and its corresponding modified text.

Image Retrieval Reranking +4
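The excerpt defines the composed query as a reference image plus modification text. A common training-free baseline, sketched below with placeholder embeddings, fuses the two embeddings by a weighted sum and ranks the gallery by cosine similarity; this is an assumed baseline for illustration, not the paper's local concept reranking.

```python
import numpy as np

def compose_and_rank(img_emb, txt_emb, gallery, alpha=0.5):
    """Training-free composed query: fuse the reference-image embedding and the
    modification-text embedding by a weighted sum, then rank gallery images by
    cosine similarity. A generic baseline, not the paper's reranking method."""
    q = alpha * img_emb + (1 - alpha) * txt_emb
    q = q / np.linalg.norm(q)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ q))

rng = np.random.default_rng(0)
img, txt = rng.normal(size=(2, 512))             # stand-ins for CLIP-style embeddings
gallery = rng.normal(size=(100, 512))
print(compose_and_rank(img, txt, gallery)[:5])   # indices of the top-5 gallery images
```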

Enhancing Conversational Search: Large Language Model-Aided Informative Query Rewriting

1 code implementation 15 Oct 2023 Fanghua Ye, Meng Fang, Shenghui Li, Emine Yilmaz

Furthermore, we propose distilling the rewriting capabilities of LLMs into smaller models to reduce rewriting latency.

Conversational Search Language Modeling +3

Modeling User Satisfaction Dynamics in Dialogue via Hawkes Process

1 code implementation 21 May 2023 Fanghua Ye, Zhiyuan Hu, Emine Yilmaz

It assumes that the performance of a dialogue system can be measured by user satisfaction and uses an estimator to simulate users.
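The title indicates that satisfaction dynamics are modelled with a Hawkes process. The sketch below evaluates the standard exponential-kernel Hawkes intensity lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i)); the parameter values are illustrative and the paper's exact parameterisation may differ.

```python
import numpy as np

def hawkes_intensity(t: float, event_times: np.ndarray,
                     mu: float = 0.2, alpha: float = 0.8, beta: float = 1.0) -> float:
    """Standard exponential-kernel Hawkes intensity:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Past events (e.g., satisfaction-related dialogue turns) temporarily raise the rate."""
    past = event_times[event_times < t]
    return mu + float(alpha * np.exp(-beta * (t - past)).sum())

events = np.array([1.0, 1.5, 4.0])      # toy event times
print([round(hawkes_intensity(t, events), 3) for t in (0.5, 2.0, 4.5)])
```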

Dynamic Schema Graph Fusion Network for Multi-Domain Dialogue State Tracking

no code implementations ACL 2022 Yue Feng, Aldo Lipani, Fanghua Ye, Qiang Zhang, Emine Yilmaz

Existing approaches that have considered such relations generally fall short in: (1) fusing prior slot-domain membership relations and dialogue-aware dynamic slot relations explicitly, and (2) generalizing to unseen domains.

Decoder Dialogue State Tracking +2

ASSIST: Towards Label Noise-Robust Dialogue State Tracking

1 code implementation Findings (ACL) 2022 Fanghua Ye, Yue Feng, Emine Yilmaz

In this paper, instead of improving the annotation quality further, we propose a general framework, named ASSIST (lAbel noiSe-robuSt dIalogue State Tracking), to train DST models robustly from noisy labels.

Dialogue State Tracking

Slot Self-Attentive Dialogue State Tracking

1 code implementation 22 Jan 2021 Fanghua Ye, Jarana Manotumruksa, Qiang Zhang, Shenghui Li, Emine Yilmaz

Then a stacked slot self-attention is applied on these features to learn the correlations among slots.

Dialogue State Tracking Task-Oriented Dialogue Systems
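The excerpt applies a stacked slot self-attention over slot features to capture correlations among slots. The sketch below shows a single scaled dot-product self-attention layer across slot representations; the single layer, single head, and random weights are simplifications of the stacked module described in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_self_attention(slot_feats: np.ndarray, wq, wk, wv) -> np.ndarray:
    """One self-attention layer over slot features of shape (num_slots, d):
    every slot attends to every other slot, so correlations between slots
    (e.g., 'hotel-area' and 'restaurant-area') can be captured."""
    q, k, v = slot_feats @ wq, slot_feats @ wk, slot_feats @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v

rng = np.random.default_rng(0)
num_slots, d = 6, 16
feats = rng.normal(size=(num_slots, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
print(slot_self_attention(feats, wq, wk, wv).shape)   # (6, 16)
```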

Auto-weighted Robust Federated Learning with Corrupted Data Sources

1 code implementation 14 Jan 2021 Shenghui Li, Edith Ngai, Fanghua Ye, Thiemo Voigt

In this paper, we address this challenge by proposing Auto-weighted Robust Federated Learning (ARFL), a novel approach that jointly learns the global model and the weights of local updates to provide robustness against corrupted data sources.

Federated Learning Privacy Preserving
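The excerpt says ARFL jointly learns the global model and the weights of local updates. The sketch below shows a much simpler heuristic in the same spirit, down-weighting clients with high empirical loss during aggregation; the softmax-over-negative-losses weighting is an assumption for illustration, not the paper's joint objective.

```python
import numpy as np

def reweighted_aggregate(client_updates: np.ndarray, client_losses: np.ndarray,
                         temperature: float = 1.0) -> np.ndarray:
    """Aggregate client model updates with weights that shrink as a client's
    empirical loss grows (softmax over negative losses). Down-weighting
    high-loss clients is one simple way to limit the influence of corrupted
    data sources; ARFL's actual objective is a joint optimisation and may differ."""
    weights = np.exp(-client_losses / temperature)
    weights = weights / weights.sum()
    return (weights[:, None] * client_updates).sum(axis=0)

updates = np.array([[0.1, 0.2], [0.12, 0.18], [5.0, -4.0]])   # third client corrupted
losses = np.array([0.3, 0.35, 4.2])
print(reweighted_aggregate(updates, losses))
```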

Outlier-Resilient Web Service QoS Prediction

1 code implementation 1 Jun 2020 Fanghua Ye, Zhiwei Lin, Chuan Chen, Zibin Zheng, Hong Huang

The proliferation of Web services makes it difficult for users to select the most appropriate one among numerous functionally identical or similar service candidates.

Prediction

Deep Autoencoder-like Nonnegative Matrix Factorization for Community Detection

2 code implementations CIKM 2018 Fanghua Ye, Chuan Chen, Zibin Zheng

Considering the complicated and diversified topology structures of real-world networks, it is highly possible that the mapping between the original network and the community membership space contains rather complex hierarchical information, which cannot be interpreted by classic shallow NMF-based approaches.

Decoder Local Community Detection +3
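The excerpt argues that the mapping from a network to its community membership space is hierarchical and beyond shallow NMF. A crude way to see the hierarchical idea, sketched below, is to factorise the adjacency matrix and then factorise the resulting representation again with off-the-shelf NMF; the paper's model instead trains an autoencoder-like deep factorisation jointly, which this sketch does not do.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
A = rng.random((30, 30))                 # stand-in for a nonnegative adjacency matrix
A = (A + A.T) / 2                        # make it symmetric

# Level 1: A ~ U1 @ V1, with V1 an 8-dimensional representation of the 30 nodes.
level1 = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
U1 = level1.fit_transform(A)
V1 = level1.components_                  # shape (8, 30)

# Level 2: V1 ~ U2 @ V2, so overall A ~ U1 @ U2 @ V2 (a two-level hierarchy).
level2 = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
U2 = level2.fit_transform(V1)
V2 = level2.components_                  # shape (3, 30): soft memberships over 3 communities

print(V2.shape, V2.argmax(axis=0)[:10])  # hard community assignments for the first 10 nodes
```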
