no code implementations • COLING 2022 • Yu Xia, Wenbin Jiang, Yajuan Lyu, Sujian Li
Existing works are based on end-to-end neural models that do not explicitly model intermediate states and thus lack interpretability for the parsing process.
no code implementations • Findings (ACL) 2022 • Yu Xia, Quan Wang, Yajuan Lyu, Yong Zhu, Wenhao Wu, Sujian Li, Dai Dai
However, the existing method depends on the relevance between tasks and is prone to inter-type confusion. In this paper, we propose a novel two-stage framework, Learn-and-Review (L&R), for continual NER under the type-incremental setting to alleviate these issues. Specifically, in the learning stage, we distill the old knowledge from a teacher to a student on the current dataset.
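The abstract names teacher-to-student distillation but does not spell out the loss. As a rough sketch under the common soft-label formulation (not necessarily this paper's exact objective), distillation minimizes the KL divergence between temperature-softened teacher and student output distributions:

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over temperature-scaled logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The student is penalized wherever its softened distribution
    diverges from the (frozen) teacher's.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when student and teacher agree exactly, and positive otherwise; the temperature softens the teacher's distribution so that information in non-argmax classes is also transferred.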
no code implementations • 11 Mar 2024 • Yu Xia, Fang Kong, Tong Yu, Liya Guo, Ryan A. Rossi, Sungchul Kim, Shuai Li
In this paper, we propose a time-increasing bandit algorithm, TI-UCB, which effectively predicts the increase in model performance due to finetuning and efficiently balances exploration and exploitation in model selection.
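The abstract only names TI-UCB; its increase-prediction rule is not reproduced here. As an illustrative sketch of the underlying exploration-exploitation mechanism only (standard UCB1 rather than the paper's algorithm; the reward functions and constants below are made up for the example), model selection can be framed as a multi-armed bandit where each arm is a candidate model:

```python
import math
import random

def ucb1_select(counts, means, t, c=2.0):
    """Pick the arm (candidate model) maximizing mean reward plus an
    exploration bonus that shrinks as the arm is sampled more often."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # sample each arm at least once
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(c * math.log(t) / counts[i]))

def run_bandit(reward_fns, horizon, seed=0):
    """Run UCB1 for `horizon` rounds over the given reward functions."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts, means = [0] * k, [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        i = ucb1_select(counts, means, t)
        r = reward_fns[i](rng)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update
        total += r
    return counts, total
```

Over enough rounds the better-performing arm accumulates most of the pulls; TI-UCB's contribution, per the abstract, is handling the case where an arm's reward increases over time as the model is finetuned, which plain UCB1 does not model.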
1 code implementation • 21 Dec 2023 • Yu Xia, Ali Arian, Sriram Narayanamoorthy, Joshua Mabry
Significant research effort has been devoted in recent years to developing personalized pricing, promotions, and product recommendation algorithms that can leverage rich customer data to learn and earn.
no code implementations • 3 Sep 2023 • Yuanyuan Guo, Yu Xia, Rui Wang, Rongcheng Duan, Lu Li, Jiangmeng Li
Unlike homogeneous graphs, heterogeneous graphs have diverse node and edge types, so specialized graph contrastive learning methods are required.
1 code implementation • 7 Jun 2023 • Shudi Hou, Yu Xia, Muhao Chen, Sujian Li
Traditional text classification typically categorizes texts into pre-defined coarse-grained classes, so the resulting models cannot handle real-world scenarios where finer-grained categories emerge periodically and must be served accurately.
no code implementations • 26 Apr 2023 • Shuai Li, Zhao Song, Yu Xia, Tong Yu, Tianyi Zhou
Large language models (LLMs) are known for their exceptional performance in natural language processing, making them highly effective in many everyday and even professional tasks.
1 code implementation • 20 Mar 2023 • Hongbo Wang, Weimin Xiong, YiFan Song, Dawei Zhu, Yu Xia, Sujian Li
Joint entity and relation extraction (JERE) is one of the most important tasks in information extraction.
1 code implementation • 25 Oct 2022 • Aaron Mueller, Yu Xia, Tal Linzen
However, much of this analysis has focused on monolingual models, and analyses of multilingual models have employed correlational methods that are confounded by the choice of probing tasks.
no code implementations • NAACL 2022 • Xiangyang Li, Xiang Long, Yu Xia, Sujian Li
Text style transfer (TST) without parallel data has achieved some practical success.
1 code implementation • 7 Jan 2021 • Xiangyang Li, Yu Xia, Xiang Long, Zheng Li, Sujian Li
In this paper, we describe our system for the AAAI 2021 shared task of COVID-19 Fake News Detection in English, where we achieved 3rd place with a weighted F1 score of 0.9859 on the test set.
Ranked #1 on Fake News Detection on Grover-Mega
no code implementations • 18 Dec 2019 • Jiawei Long, Yu Xia
With ongoing developments and innovations in single-cell RNA sequencing methods, advancements in sequencing performance could enable significant discoveries and open new possibilities for biological and medical investigation.
no code implementations • 28 Aug 2018 • Guodong Xu, Yu Xia, Hui Ji
Data clustering is a fundamental problem with a wide range of applications.
no code implementations • 9 Aug 2014 • Konstantin Voevodski, Maria-Florina Balcan, Heiko Roglin, Shang-Hua Teng, Yu Xia
Given a point set S and an unknown metric d on S, we study the problem of efficiently partitioning S into k clusters while querying few distances between the points.
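One classical query-efficient approach to this kind of problem (shown purely for illustration; it is not claimed to be this paper's algorithm) is farthest-first traversal, which selects k centers using O(nk) distance queries and 2-approximates the k-center objective. The sketch below counts queries explicitly to make the budget visible:

```python
def greedy_k_centers(points, k, dist):
    """Farthest-first traversal: repeatedly add the point farthest from
    the current centers. Uses O(n*k) queries to the distance oracle."""
    queries = [0]

    def d(a, b):
        queries[0] += 1  # every oracle call is counted
        return dist(a, b)

    centers = [points[0]]
    # distance from each point to its nearest chosen center so far
    nearest = [d(p, centers[0]) for p in points]
    while len(centers) < k:
        idx = max(range(len(points)), key=lambda i: nearest[i])
        centers.append(points[idx])
        for i, p in enumerate(points):
            nearest[i] = min(nearest[i], d(p, centers[-1]))
    return centers, queries[0]
```

For n points this makes exactly n queries per selected center, far fewer than the O(n^2) needed to materialize the full distance matrix; the paper's setting, per the abstract, pushes the query budget further still.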
no code implementations • CVPR 2014 • Xin Geng, Yu Xia
Accurate ground truth pose is essential to the training of most existing head pose estimation algorithms.