no code implementations • 22 Feb 2024 • Xianming Li, Zongxi Li, Jing Li, Haoran Xie, Qing Li
The experimental results demonstrate the effectiveness of our proposed model in dynamically supporting different embedding sizes and Transformer layers, making it highly adaptable to various scenarios.
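The abstract does not spell out the mechanism, but one way a single model can serve multiple embedding sizes is Matryoshka-style truncation: keep the first k dimensions of the full embedding and re-normalize. The sketch below is illustrative only (the function name and plain-Python style are my own, not the paper's code):

```python
import math

def truncate_embedding(embedding, target_dim):
    """Truncate a full embedding to target_dim and L2-renormalize.

    Illustrative sketch of serving one learned embedding at several
    sizes; not the paper's actual implementation.
    """
    if not 0 < target_dim <= len(embedding):
        raise ValueError("target_dim must be in (0, len(embedding)]")
    head = embedding[:target_dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

full = [0.3, -0.4, 0.5, 0.1, -0.2, 0.6, 0.0, 0.2]
small = truncate_embedding(full, 4)  # 4-dim view of the same embedding
```

A downstream system can then pick the dimensionality that fits its latency or storage budget without retraining the encoder.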
no code implementations • 28 Nov 2023 • Xinhong Chen, Zongxi Li, YaoWei Wang, Haoran Xie, JianPing Wang, Qing Li
To highlight the context in such special causal relationships, we propose a new task: determining whether an input emotion-cause pair has a valid causal relationship under a given context, and extracting the specific context clauses that participate in that causal relationship.
2 code implementations • 2 Oct 2023 • Zongxi Li, Xianming Li, Yuzhang Liu, Haoran Xie, Jing Li, Fu-lee Wang, Qing Li, Xiaoqin Zhong
We evaluate this approach with Label Supervised LLaMA (LS-LLaMA), built on LLaMA-2-7B, a relatively small-scale LLM that can be fine-tuned on a single GeForce RTX 4090 GPU.
Ranked #1 on Named Entity Recognition (NER) on CoNLL03 (F1 (micro) metric)
1 code implementation • 12 Jun 2023 • Xianming Li, Zongxi Li, Xiaotian Luo, Haoran Xie, Xing Lee, Yingbin Zhao, Fu Lee Wang, Qing Li
Revisiting the self-attention mechanism and the recurrent structure, this paper proposes a novel long-document encoding model, Recurrent Attention Network (RAN), to enable the recurrent operation of self-attention.
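A hedged sketch of the recurrent-attention idea (not the paper's actual RAN implementation; names, window size, and the mean-pooled memory are my own simplifications): split a long sequence into windows, let each window attend over its own tokens plus a small memory carried from earlier windows, and update the memory after each step.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, keys, values):
    """Single-query scaled dot-product attention (toy, no projections)."""
    scale = math.sqrt(len(query))
    weights = softmax([dot(query, k) / scale for k in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

def recurrent_encode(tokens, window=4, memory_size=1):
    """Encode tokens window by window, carrying a pooled summary of each
    window forward so later windows can see earlier context."""
    memory = []   # summary vectors from earlier windows
    outputs = []
    for start in range(0, len(tokens), window):
        chunk = tokens[start:start + window]
        context = memory + chunk  # keys/values: memory plus current window
        chunk_out = [attend(t, context, context) for t in chunk]
        outputs.extend(chunk_out)
        dim = len(chunk_out[0])
        pooled = [sum(v[i] for v in chunk_out) / len(chunk_out) for i in range(dim)]
        memory = (memory + [pooled])[-memory_size:]
    return outputs

toy = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
       [0.5, -0.5], [0.2, 0.8], [0.9, 0.1]]
encoded = recurrent_encode(toy, window=3)
```

Because each window only attends over `window + memory_size` positions, the cost grows linearly with document length rather than quadratically, which is the motivation for combining recurrence with self-attention.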
no code implementations • 22 Feb 2020 • Xianming Li, Zongxi Li, Yingbin Zhao, Haoran Xie, Qing Li
The dominant text classification studies focus on training classifiers using textual instances only or on introducing external knowledge (e.g., hand-crafted features and domain expert knowledge).