no code implementations • CCL 2022 • Ruihua Qi, Jia Wei, Zhen Shao, Xu Guo, Heng Chen
This paper addresses two problems in domain sentiment lexicon construction: the relative scarcity of annotated data and the insufficiency of sentiment-semantic representations. It computes joint weights from domain differences across multiple data sources, fuses prior sentiment knowledge with Fasttext word-vector representation learning, and maps sentiment-semantic knowledge into a new word-vector space, automatically building, from unlabeled data, domain sentiment lexicons suited to big-data, multi-domain, and multilingual settings. Comparative experiments on public Chinese and English multi-domain datasets show that, compared with sentiment-lexicon and pretrained word-vector baselines, the proposed multi-source knowledge-fusion representation learning method clearly improves classification accuracy and remains robust across algorithms, languages, domains, and datasets. Ablation experiments further verify the contribution of each module to sentiment classification performance.
no code implementations • 15 Jun 2023 • Zilin Du, Yunxin Li, Xu Guo, Yidan Sun, Boyang Li
Contemporary news reporting increasingly features multimedia content, motivating research on multimedia event extraction.
no code implementations • 6 Nov 2022 • Xu Guo, Han Yu
Recent advances in NLP are brought by a range of large-scale pretrained language models (PLMs).
1 code implementation • 6 Oct 2022 • Xu Guo, Boyang Li, Han Yu
Prompt tuning, or the conditioning of a frozen pretrained language model (PLM) with soft prompts learned from data, has demonstrated impressive performance on a wide range of NLP tasks.
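The core mechanism described here (prepending a small set of learnable soft-prompt vectors to the embedded input of a frozen PLM, so that only the prompt is trained) can be illustrated with a minimal numpy sketch. The dimensions, names, and lookup table below are hypothetical placeholders, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 16    # hypothetical embedding size
PROMPT_LEN = 4    # number of learnable soft-prompt vectors
VOCAB_SIZE = 100  # hypothetical vocabulary size

# The soft prompt is the only set of parameters updated during tuning.
soft_prompt = rng.normal(size=(PROMPT_LEN, EMBED_DIM))

# Stand-in for the PLM's frozen embedding table (never updated).
vocab_table = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))

def build_input(token_ids):
    """Prepend the soft prompt to the frozen token embeddings."""
    token_embeds = vocab_table[np.array(token_ids)]
    return np.concatenate([soft_prompt, token_embeds], axis=0)

x = build_input([5, 17, 42])
print(x.shape)  # (PROMPT_LEN + 3, EMBED_DIM) -> (7, 16)
```

During training, gradients would flow only into `soft_prompt`; the frozen PLM (here stood in for by `vocab_table`) stays fixed.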
no code implementations • 27 Aug 2022 • Jiahui Chen, Xu Guo, Wensheng Gan, Shichen Wan, Philip S. Yu
Compared with traditional utility mining, OSUM can find more practical and meaningful patterns in real-life applications.
no code implementations • 25 Jun 2021 • Tianle Yue, Hang Yang, Zongliang Du, Chang Liu, Khalil I. Elkhodary, Shan Tang, Xu Guo
During offline training, a mapping function is built between high and low resolution representations of a given design domain.
1 code implementation • NAACL 2021 • Xu Guo, Boyang Li, Han Yu, Chunyan Miao
The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality.
no code implementations • 3 Dec 2020 • Xu Guo, Han Yu, Boyang Li, Hao Wang, Pengwei Xing, Siwei Feng, Zaiqing Nie, Chunyan Miao
In this paper, we propose the FedHumor approach for the recognition of humorous content in a personalized manner through Federated Learning (FL).
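FedHumor builds on Federated Learning, in which clients train locally and a server aggregates their model updates without seeing raw data. As generic background only (this is the standard FedAvg aggregation step, not FedHumor's specific personalization scheme), the server-side weighted average can be sketched as:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average each parameter across clients,
    weighted by the number of local training examples per client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(n_params)
    ]

# Two hypothetical clients with flattened parameter vectors:
global_params = fed_avg([[0.0, 0.0], [2.0, 2.0]], client_sizes=[1, 3])
print(global_params)  # [1.5, 1.5]
```

Weighting by `client_sizes` keeps clients with more data from being drowned out by small ones, which matters when humor preferences vary per user.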
1 code implementation • 17 Oct 2020 • Guanghui Zhu, Zhuoer Xu, Xu Guo, Chunfeng Yuan, Yihua Huang
Extensive experiments on classification and regression datasets demonstrate that DIFER can significantly improve the performance of various machine learning algorithms and outperform current state-of-the-art AutoFE methods in terms of both efficiency and performance.
1 code implementation • 27 Feb 2020 • Lilun Du, Xu Guo, Wenguang Sun, Changliang Zou
We develop a new class of distribution-free multiple testing rules for false discovery rate (FDR) control under general dependence.
Methodology • Statistics Theory
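For context on the FDR-control setting this entry addresses, the classic Benjamini-Hochberg step-up procedure (the standard baseline, not the paper's distribution-free rules) can be sketched as:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: reject the k smallest p-values, where
    k = max{ i : p_(i) <= i * alpha / m } over sorted p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * alpha / m:
            k = rank  # step-up: keep the largest passing rank
    rejected = [False] * m
    for idx in order[:k]:
        rejected[idx] = True
    return rejected

print(benjamini_hochberg([0.01, 0.02, 0.9]))  # [True, True, False]
```

BH controls the FDR at level `alpha` under independence (and positive dependence); handling general dependence without distributional assumptions is precisely the harder setting the paper targets.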
no code implementations • 10 Apr 2019 • Bin Liu, Qian Guo, Shucai Li, Benchao Liu, Yuxiao Ren, Yonghao Pang, Xu Guo, Lanbo Liu, Peng Jiang
According to the comprehensive qualitative analysis and quantitative comparison, ERSInvNet with tier feature map, smooth constraints, and depth weighting function together achieve the best performance.