no code implementations • ACL 2022 • Yue Feng, Aldo Lipani, Fanghua Ye, Qiang Zhang, Emine Yilmaz
Existing approaches that have considered such relations generally fall short in: (1) fusing prior slot-domain membership relations and dialogue-aware dynamic slot relations explicitly, and (2) generalizing to unseen domains.
Tasks: Dialogue State Tracking • Multi-domain Dialogue State Tracking (+1)
1 code implementation • 23 Oct 2023 • Zihan Zhang, Meng Fang, Fanghua Ye, Ling Chen, Mohammad-Reza Namazi-Rad
Dialogue state tracking (DST) plays an important role in task-oriented dialogue systems.
no code implementations • 14 Dec 2023 • Shitong Sun, Fanghua Ye, Shaogang Gong
Composed image retrieval aims to retrieve an image of interest from a gallery via a composed query consisting of a reference image and its corresponding modification text.
no code implementations • 12 Feb 2024 • Jianhui Pang, Fanghua Ye, Derek F. Wong, Longyue Wang
Large language models (LLMs) predominantly employ decoder-only transformer architectures, which necessitate retaining the key/value states of historical tokens to provide contextual information and avoid redundant computation.
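The key/value retention described above can be sketched as a minimal single-head attention loop with an explicit KV cache. This is an illustrative simplification (all names, shapes, and the random projections are assumptions, not from the paper): at each decoding step only the new token is projected, while past keys/values are reused from the cache.

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for one query over all cached keys/values.
    scores = q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

class KVCache:
    """Stores keys/values of historical tokens so each decoding step
    attends over them without recomputing past projections."""

    def __init__(self, d_model, seed=0):
        rng = np.random.default_rng(seed)
        self.Wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        self.Wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        self.Wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        self.K = np.empty((0, d_model))
        self.V = np.empty((0, d_model))

    def step(self, x):
        # Project only the new token; append its key/value to the cache.
        q = x @ self.Wq
        self.K = np.vstack([self.K, x @ self.Wk])
        self.V = np.vstack([self.V, x @ self.Wv])
        return attention(q, self.K, self.V)
```

The cache grows linearly with the number of generated tokens, which is exactly the memory cost the paper's setting is concerned with.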
1 code implementation • 21 May 2023 • Fanghua Ye, Zhiyuan Hu, Emine Yilmaz
The proposed evaluation framework assumes that the performance of a dialogue system can be measured by user satisfaction and uses an estimator to simulate users.
1 code implementation • 14 Jan 2021 • Shenghui Li, Edith Ngai, Fanghua Ye, Thiemo Voigt
In this paper, we address this challenge by proposing Auto-weighted Robust Federated Learning (ARFL), a novel approach that jointly learns the global model and the weights of local updates to provide robustness against corrupted data sources.
1 code implementation • 15 Oct 2023 • Fanghua Ye, Meng Fang, Shenghui Li, Emine Yilmaz
Furthermore, we propose distilling the rewriting capabilities of LLMs into smaller models to reduce rewriting latency.
1 code implementation • 1 Jun 2020 • Fanghua Ye, Zhiwei Lin, Chuan Chen, Zibin Zheng, Hong Huang
The proliferation of Web services makes it difficult for users to select the most appropriate one among numerous functionally identical or similar service candidates.
1 code implementation • 22 Oct 2022 • Fanghua Ye, Xi Wang, Jie Huang, Shenghui Li, Samuel Stern, Emine Yilmaz
Experimental results demonstrate that all three schemes can achieve competitive performance.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Fanghua Ye, Jarana Manotumruksa, Emine Yilmaz
Semantic hashing is a powerful paradigm for representing texts as compact binary hash codes.
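As a rough illustration of the paradigm, the sketch below uses random hyperplanes as a stand-in for the learned hashing model (an assumption for illustration only): the sign of each projection yields one bit, and retrieval reduces to Hamming distance between compact binary codes.

```python
import numpy as np

def hash_codes(X, n_bits=16, seed=0):
    # Random-hyperplane projection as a simple stand-in for a learned
    # semantic hashing model: the sign of each projection gives one bit.
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def hamming_search(query_code, codes):
    # Retrieval over binary codes: rank by Hamming distance (bit mismatches).
    dists = (codes != query_code).sum(axis=1)
    return np.argsort(dists)
```

Semantically close inputs produce similar bit patterns, so nearest neighbors in Hamming space approximate nearest neighbors in the original feature space at a fraction of the storage and comparison cost.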
1 code implementation • 16 Jan 2024 • Jianhui Pang, Fanghua Ye, Longyue Wang, Dian Yu, Derek F. Wong, Shuming Shi, Zhaopeng Tu
This study revisits these challenges, offering insights into their ongoing relevance in the context of advanced Large Language Models (LLMs): domain mismatch, amount of parallel data, rare word prediction, translation of long sentences, attention model as word alignment, and sub-optimal beam search.
1 code implementation • Findings (ACL) 2022 • Fanghua Ye, Yue Feng, Emine Yilmaz
In this paper, instead of improving the annotation quality further, we propose a general framework, named ASSIST (lAbel noiSe-robuSt dIalogue State Tracking), to train DST models robustly from noisy labels.
1 code implementation • 22 Jan 2021 • Fanghua Ye, Jarana Manotumruksa, Qiang Zhang, Shenghui Li, Emine Yilmaz
Then a stacked slot self-attention is applied on these features to learn the correlations among slots.
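A minimal sketch of what stacking self-attention over slot features could look like (shapes, residual connection, and parameter names are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_self_attention(S, Wq, Wk, Wv):
    # S: (num_slots, d) slot features. Each slot attends over every slot,
    # so its new representation mixes features of correlated slots.
    Q, K, V = S @ Wq, S @ Wk, S @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V + S  # residual keeps the original slot signal

def stacked_slot_self_attention(S, layer_params):
    # "Stacked" = apply the layer repeatedly, one (Wq, Wk, Wv) per layer.
    for Wq, Wk, Wv in layer_params:
        S = slot_self_attention(S, Wq, Wk, Wv)
    return S
```

Each layer's attention matrix can be read as a soft slot-correlation graph, which is what lets the model pick up dependencies between slots.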
1 code implementation • SIGDIAL (ACL) 2022 • Fanghua Ye, Jarana Manotumruksa, Emine Yilmaz
The annotations in the training set remain unchanged (same as MultiWOZ 2.1) to elicit robust and noise-resilient model training.
1 code implementation • 23 Jan 2024 • Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, Zhaopeng Tu
The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods.
2 code implementations • CIKM 2018 • Fanghua Ye, Chuan Chen, Zibin Zheng
Considering the complicated and diversified topology structures of real-world networks, it is highly possible that the mapping between the original network and the community membership space contains rather complex hierarchical information, which cannot be interpreted by classic shallow NMF-based approaches.
Ranked #1 on Node Classification on Wiki
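To make the shallow-vs-hierarchical contrast concrete, here is a greedy layer-wise sketch of deep NMF built on top of standard multiplicative-update NMF. This is an illustrative simplification under assumed layer sizes and update rules, not the paper's algorithm: instead of a single mapping A ≈ WH, each intermediate coefficient matrix is factorized again, yielding A ≈ W1 W2 … H.

```python
import numpy as np

def nmf(A, k, iters=300, seed=0):
    # Shallow baseline: A ≈ W H via standard multiplicative updates.
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], k)) + 0.1
    H = rng.random((k, A.shape[1])) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

def deep_nmf(A, layer_sizes, iters=300, seed=0):
    # Greedy layer-wise stacking: factorize A, then factorize the
    # resulting H again, so the mapping to the final (community)
    # membership space passes through a hierarchy of factors.
    Ws, H = [], A
    for k in layer_sizes:
        W, H = nmf(H, k, iters=iters, seed=seed)
        Ws.append(W)
    return Ws, H
```

With `layer_sizes=[6, 3]`, the last H plays the role of the community membership matrix, while the stacked W factors carry the hierarchical structure a single shallow factorization cannot express.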