no code implementations • ACL 2022 • Zhuoran Li, Chunming Hu, Xiaohui Guo, Junfan Chen, Wenyi Qin, Richong Zhang
In this study, building on the knowledge distillation framework and multi-task learning, we introduce a similarity metric model as an auxiliary task to improve cross-lingual NER performance on the target domain.
no code implementations • COLING 2022 • Ling Ge, Chunming Hu, Guanghui Ma, Junshuang Wu, Junfan Chen, Jihong Liu, Hong Zhang, Wenyi Qin, Richong Zhang
Enhancing the interpretability of text classification models can help increase the reliability of these models in real-world applications.
no code implementations • 27 Jan 2025 • Zhiyuan Fu, Junfan Chen, Hongyu Sun, Ting Yang, Ruidong Li, Yuqing Zhang
Using large language model (LLM) integration platforms without transparency about which LLM is being invoked can lead to potential security risks.
no code implementations • 13 Sep 2024 • Yang Li, Dengyu Zhang, Junfan Chen, Ying Wen, Qingrui Zhang, Shaoshuai Mou, Wei Pan
In this paper, we extend the scope of ZSC research to the multi-drone cooperative pursuit scenario, exploring how to construct a drone agent capable of coordinating with multiple unseen partners to capture multiple evaders.
1 code implementation • 23 Jul 2024 • Yani Huang, Xuefeng Zhang, Richong Zhang, Junfan Chen, Jaein Kim
Multi-Modal Entity Alignment aims to discover identical entities across heterogeneous knowledge graphs.
no code implementations • 19 Jun 2024 • Zhuoran Li, Chunming Hu, Junfan Chen, Zhijun Chen, Xiaohui Guo, Richong Zhang
Specifically, we first design a difficulty measurer to measure the impact of replacing each word in a sentence based on the word relevance score.
1 code implementation • 16 May 2023 • Junfan Chen, Richong Zhang, Zheyan Luo, Chunming Hu, Yongyi Mao
Data augmentation is widely used in text classification, especially in the low-resource regime where a few examples for each class are available during training.
1 code implementation • 16 May 2023 • Junfan Chen, Richong Zhang, Yongyi Mao, Jie Xu
Few-shot text classification has recently been advanced by the meta-learning paradigm, which aims to identify target classes using knowledge transferred from source classes through sets of small tasks called episodes.
1 code implementation • 16 Apr 2022 • Mingchen Li, Junfan Chen, Samuel Mensah, Nikolaos Aletras, Xiulong Yang, Yang Ye
Thus, in this paper, we propose a Hierarchical N-Gram framework for Zero-Shot Link Prediction (HNZSLP), which considers the dependencies among character n-grams of the relation surface name for ZSLP.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Junfan Chen, Richong Zhang, Yongyi Mao, Jie Xu
Existing DST models either ignore temporal feature dependencies across dialogue turns or fail to explicitly model temporal state dependencies in a dialogue.
1 code implementation • EMNLP 2020 • Junfan Chen, Richong Zhang, Yongyi Mao, Jie Xu
In this study, we argue that the incorporation of these dependencies is crucial for the design of MDST and propose Parallel Interactive Networks (PIN) to model these dependencies.
1 code implementation • IJCNLP 2019 • Junfan Chen, Richong Zhang, Yongyi Mao, Hongyu Guo, Jie Xu
Distant supervision for relation extraction enables one to effectively acquire structured relations from very large text corpora with less human effort.