1 code implementation • 13 Aug 2024 • Yuxiang Zheng, Shichao Sun, Lin Qiu, Dongyu Ru, Cheng Jiayang, Xuefeng Li, Jifan Lin, Binjie Wang, Yun Luo, Renjie Pan, Yang Xu, Qingkai Min, Zizhao Zhang, Yiwen Wang, Wenjie Li, PengFei Liu
The rapid growth of scientific literature poses significant challenges for researchers who strive to stay current with the latest advancements in their fields and to explore new areas.
2 code implementations • 23 May 2024 • Xiangkun Hu, Dongyu Ru, Lin Qiu, Qipeng Guo, Tianhang Zhang, Yang Xu, Yun Luo, PengFei Liu, Yue Zhang, Zheng Zhang
In RefChecker, an extractor generates claim-triplets from a response, which are then evaluated by a checker against a reference.
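The extract-then-check idea can be illustrated with a minimal sketch. This is not the actual RefChecker API; the function names are hypothetical, and the naive substring check stands in for the LLM-based checker described in the paper.

```python
# Hypothetical sketch of claim-triplet checking (not the RefChecker API).

def check_triplet(triplet, reference):
    """Naively mark a (subject, predicate, object) claim-triplet as
    supported if all three parts appear in the reference text.
    A real checker would use an LLM or NLI model instead."""
    return all(part.lower() in reference.lower() for part in triplet)

def check_response(triplets, reference):
    # Label each claim-triplet extracted from a model response.
    return ["Entailment" if check_triplet(t, reference) else "Neutral"
            for t in triplets]

reference = "Marie Curie won the Nobel Prize in Physics in 1903."
triplets = [("Marie Curie", "won", "Nobel Prize"),
            ("Marie Curie", "born in", "Warsaw")]
print(check_response(triplets, reference))  # ['Entailment', 'Neutral']
```

Checking at the triplet level, rather than on whole responses, is what lets this style of checker localize which individual claim is unsupported.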
1 code implementation • 18 Apr 2024 • Fang Guo, Wenyu Li, Honglei Zhuang, Yun Luo, Yafu Li, Qi Zhu, Le Yan, Yue Zhang
The most recent pointwise Large Language Model (LLM) rankers have achieved remarkable ranking results.
1 code implementation • 21 Feb 2024 • Jianhao Yan, Yun Luo, Yue Zhang
The application scope of large language models (LLMs) is increasingly expanding.
1 code implementation • 9 Oct 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Fang Guo, Qinglin Qi, Jie Zhou, Yue Zhang
Active learning (AL), which aims to construct an effective training set by iteratively curating the most informative unlabeled data for annotation, has been widely used in low-resource tasks.
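One iteration of that curation loop can be sketched as uncertainty sampling: score each unlabeled example by the current model's confidence and send the least confident ones for annotation. This is a generic illustration, not the selection strategy proposed in the paper; `toy_proba` is a hypothetical stand-in for a trained classifier.

```python
def least_confident(proba):
    # Uncertainty = 1 - probability of the most likely class.
    return 1.0 - max(proba)

def active_learning_round(unlabeled, predict_proba, budget):
    """Select the `budget` most informative (least confident) unlabeled
    examples for annotation in one active-learning iteration."""
    scored = sorted(unlabeled,
                    key=lambda x: least_confident(predict_proba(x)),
                    reverse=True)
    return scored[:budget]

# Toy model: confident on even numbers, uncertain on odd ones.
def toy_proba(x):
    return [0.95, 0.05] if x % 2 == 0 else [0.55, 0.45]

pool = [1, 2, 3, 4, 5, 6]
print(active_learning_round(pool, toy_proba, budget=2))  # [1, 3]
```

After the selected examples are labeled, the model is retrained and the loop repeats until the annotation budget is exhausted.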
1 code implementation • 8 Oct 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Jie Zhou, Yue Zhang
However, we observe that merely concatenating sentences in a contextual window does not fully utilize contextual information and can sometimes lead to excessive attention on less informative sentences.
1 code implementation • 17 Aug 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang
Catastrophic forgetting (CF) is a phenomenon that occurs in machine learning when a model forgets previously learned information while acquiring new knowledge.
1 code implementation • 17 Jul 2023 • Zihan Liu, Jiaqi Wang, Yun Luo, Shuang Zhao, Wenbin Li, Stan Z. Li
In recent years, driven by the significant development and market potential of peptides, there has been an explosion of research applying deep learning to the prediction of various peptide properties.
no code implementations • 20 May 2023 • Yun Luo, Xiaotian Lin, Zhen Yang, Fandong Meng, Jie Zhou, Yue Zhang
Adapting the decision boundary to new representations is seldom considered, so in this paper we propose a Supervised Contrastive learning framework with an adaptive classification criterion for Continual Learning (SCCL). In our method, a contrastive loss is used to directly learn representations for different tasks, and a limited number of data samples are saved as the classification criterion.
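Using saved samples as the classification criterion can be sketched as nearest-exemplar classification: instead of a fixed linear head, a query is assigned the label of its most similar saved exemplar, so the decision boundary moves with the learned representations. This is a minimal illustration of the idea, not the SCCL implementation; all names here are hypothetical.

```python
import numpy as np

def classify_by_exemplars(query, exemplars, labels):
    """Assign the label of the most similar saved exemplar (cosine
    similarity). Because the criterion is the exemplars themselves,
    the decision boundary adapts as representations change."""
    E = np.asarray(exemplars, dtype=float)
    q = np.asarray(query, dtype=float)
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q) + 1e-8)
    return labels[int(np.argmax(sims))]

exemplars = [[1.0, 0.0], [0.0, 1.0]]   # one saved embedding per class
labels = ["task-A", "task-B"]
print(classify_by_exemplars([0.9, 0.2], exemplars, labels))  # task-A
```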
no code implementations • 10 May 2023 • Yun Luo, Zhen Yang, Xuefeng Bai, Fandong Meng, Jie Zhou, Yue Zhang
Intuitively, the representation forgetting can influence the general knowledge stored in pre-trained language models (LMs), but the concrete effect is still unclear.
1 code implementation • 29 Mar 2023 • Zihan Liu, Yun Luo, Lirong Wu, Zicheng Liu, Stan Z. Li
It has become cognitive inertia to employ the cross-entropy loss function in classification-related tasks.
1 code implementation • 8 Feb 2023 • Yun Luo, Zihan Liu, Stan Z. Li, Yue Zhang
(Dis)agreement detection aims to identify the authors' attitudes or positions (agree, disagree, neutral) towards a specific text.
no code implementations • 26 Aug 2022 • Zihan Liu, Ge Wang, Yun Luo, Stan Z. Li
To address this issue, we propose a novel surrogate model with multi-level propagation that preserves the node dissimilarity information.
1 code implementation • COLING 2022 • Yun Luo, Fang Guo, Zihan Liu, Yue Zhang
Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain using the model trained on the source domain to cope with the scarcity of labeled data.
1 code implementation • COLING 2022 • Yun Luo, Zihan Liu, Yuefeng Shi, Stan Z. Li, Yue Zhang
Meanwhile, ablation studies demonstrate the significance of each module in our model.
1 code implementation • 7 Aug 2022 • Zihan Liu, Yun Luo, Lirong Wu, Siyuan Li, Zicheng Liu, Stan Z. Li
These errors arise from rough gradient usage due to the discreteness of the graph structure and from the unreliability in the meta-gradient on the graph structure.
no code implementations • 14 Apr 2022 • Yun Luo, Hongjie Cai, Linyi Yang, Yanxia Qin, Rui Xia, Yue Zhang
Since previous studies on open-domain targeted sentiment analysis are limited to sentence-level data and a narrow range of domains, we propose a novel dataset consisting of 6,013 human-labeled examples that extends the data domains to topics of interest at the document level.
no code implementations • 20 Oct 2021 • Zihan Liu, Yun Luo, Zelin Zang, Stan Z. Li
Gray-box graph attacks aim at disrupting the performance of the victim model by using inconspicuous attacks with limited knowledge of the victim model.
no code implementations • 29 Sep 2021 • Yun Luo, Gengchen Wei, Bao-liang Lu
Usually, DA methods yield more promising results than DG methods, but they require additional computational resources each time a new subject arrives.
no code implementations • 4 Jun 2020 • Yun Luo, Li-Zhen Zhu, Zi-Yu Wan, Bao-liang Lu
Then, we augment the original training datasets with different amounts of generated realistic EEG data.
1 code implementation • 17 Jan 2020 • Yang Liu, Anbu Huang, Yun Luo, He Huang, Youzhi Liu, YuanYuan Chen, Lican Feng, Tianjian Chen, Han Yu, Qiang Yang
Federated learning (FL) is a promising approach to resolve this challenge.
2 code implementations • 14 Oct 2019 • Jiahuan Luo, Xueyang Wu, Yun Luo, Anbu Huang, Yun-Feng Huang, Yang Liu, Qiang Yang
Federated learning is a new machine learning paradigm which allows data parties to build machine learning models collaboratively while keeping their data secure and private.
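The core aggregation step behind this paradigm can be sketched as FedAvg-style weight averaging: clients train locally on their private data and share only model parameters, which the server averages weighted by local dataset size. This is a generic illustration of federated averaging, not the specific method of the paper above; the function name is hypothetical.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter across clients,
    weighted by local dataset size. Raw data never leaves the clients;
    only the parameters are communicated."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Two clients with 2-parameter models and 10 vs. 30 local examples.
print(federated_average([[1.0, 2.0], [3.0, 4.0]], [10, 30]))  # [2.5, 3.5]
```

In a full system this averaging step alternates with rounds of local training, and secure aggregation or encryption is typically layered on top to keep even the shared parameters private.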