no code implementations • SemEval (NAACL) 2022 • Zhihao Ruan, Xiaolong Hou, Lianxin Jiang
This paper describes our system for SemEval-2022 Task 09: R2VQ - Competence-based Multimodal Question Answering.
no code implementations • 1 Nov 2022 • Dou Hu, Xiaolong Hou, Xiyang Du, Mengyuan Zhou, Lianxin Jiang, Yang Mo, Xiaofeng Shi
Pre-trained language models have achieved promising performance on general benchmarks, but often underperform when transferred to a specific domain.
1 code implementation • 4 Mar 2022 • Dou Hu, Xiaolong Hou, Lingwei Wei, Lianxin Jiang, Yang Mo
For multimodal emotion recognition in conversation (ERC), it is vital to model conversational context and to fuse information across modalities.
Ranked #23 on Emotion Recognition in Conversation on IEMOCAP
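The entry above does not spell out how the modalities are combined; a common fusion approach for multimodal ERC is a learned gate that weights each modality's features before summing them. The sketch below is a minimal, hypothetical illustration of such gated fusion (the weights `W`, `b` and the sigmoid gating are assumptions, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_fusion(text, audio, visual, W, b):
    """Fuse per-utterance modality features with a sigmoid gate per modality.

    text/audio/visual: (d,) feature vectors for one utterance.
    W: (3, d) gate weights, b: (3,) gate biases (hypothetical parameters).
    Returns the fused (d,) vector and the (3,) gate values.
    """
    feats = np.stack([text, audio, visual])                          # (3, d)
    gates = 1.0 / (1.0 + np.exp(-(np.sum(W * feats, axis=1) + b)))   # (3,)
    fused = (gates[:, None] * feats).sum(axis=0)                     # (d,)
    return fused, gates

d = 8
text, audio, visual = rng.normal(size=(3, d))
W, b = rng.normal(size=(3, d)) * 0.1, np.zeros(3)
fused, gates = gated_fusion(text, audio, visual, W, b)
```

In practice the gate parameters would be trained jointly with the rest of the model, letting it down-weight a noisy modality (e.g. low-quality audio) per utterance.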
no code implementations • SEMEVAL 2021 • Gang Rao, Maochang Li, Xiaolong Hou, Lianxin Jiang, Yang Mo, Jianping Shen
In this paper we propose a contextual-attention-based model with two-stage fine-tuning built on RoBERTa.
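The abstract snippet names contextual attention but gives no formula; the core operation is typically scaled dot-product attention of the current utterance over its preceding context. The following is a minimal numpy sketch under that assumption (the function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def contextual_attention(query, context):
    """Scaled dot-product attention of one utterance over its context.

    query:   (d,) embedding of the current utterance.
    context: (n, d) embeddings of the preceding utterances.
    Returns the attended context vector (d,) and attention weights (n,).
    """
    scores = context @ query / np.sqrt(query.shape[0])  # (n,)
    weights = np.exp(scores - scores.max())             # stable softmax
    weights = weights / weights.sum()
    return weights @ context, weights

q = np.ones(4)
ctx = np.ones((3, 4))
vec, w = contextual_attention(q, ctx)
```

The two-stage fine-tuning mentioned in the abstract (its exact schedule is not specified here) would wrap a component like this: a first fine-tuning pass on related data, then a second on the target task data.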
no code implementations • SEMEVAL 2021 • Xiaolong Hou, Junsong Ren, Gang Rao, Lianxin Lian, Zhihao Ruan, Yang Mo, Jianping Shen
The objective of subtask 2 of SemEval-2021 Task 6 is to identify the techniques used, together with the span(s) of text covered by each technique.
no code implementations • SEMEVAL 2020 • Chenyang Guo, Xiaolong Hou, Junsong Ren, Lianxin Jiang, Yang Mo, Haiqin Yang, Jianping Shen
This paper describes the model we applied to SemEval-2020 Task 10.