no code implementations • CCL 2022 • Ruihua Qi, Jia Wei, Zhen Shao, Xu Guo, Heng Chen
"This paper aims to address the relative scarcity of annotated data and the insufficient representation of sentiment semantics in the task of domain sentiment lexicon construction. By computing joint weights from domain differences across multi-source data, and fusing prior sentiment knowledge with Fasttext word-vector representation learning, sentiment-semantic knowledge is mapped into a new word-vector space, and domain sentiment lexicons suited to big-data, multi-domain, and multilingual settings are constructed automatically from unlabeled data. Comparative experiments on public Chinese and English multi-domain datasets show that, compared with sentiment-lexicon methods and pretrained word-vector methods, the proposed multi-source knowledge-fusion representation learning method for domain sentiment lexicons achieves clearly higher classification accuracy on the experimental datasets, and exhibits good robustness across multiple algorithms, languages, domains, and datasets. Ablation experiments further verify the contribution of each module of the proposed model to improving sentiment classification."
no code implementations • 8 Oct 2024 • Xu Guo, Zilin Du, Boyang Li, Chunyan Miao
A major limitation of prompt tuning is its dependence on large labeled training datasets.
1 code implementation • 6 Oct 2024 • Yige Xu, Xu Guo, Zhiwei Zeng, Chunyan Miao
Large language models (LLMs) have brought a major breakthrough to the natural language processing (NLP) community, but their high computational demands make handling concurrent customer queries a challenge.
1 code implementation • 4 Jul 2024 • Yongjie Wang, Xiaoqi Qiu, Yu Yue, Xu Guo, Zhiwei Zeng, Yuhong Feng, Zhiqi Shen
Natural language counterfactual generation aims to minimally modify a given text such that the modified text will be classified into a different class.
1 code implementation • 9 Jun 2024 • Xiaoqi Qiu, Yongjie Wang, Xu Guo, Zhiwei Zeng, Yue Yu, Yuhong Feng, Chunyan Miao
Counterfactually Augmented Data (CAD) involves creating new data samples by applying minimal yet sufficient modifications to flip the label of existing data samples to other classes.
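As a toy illustration of the CAD idea only (not the paper's method), the sketch below applies a minimal rule-based negation edit that flips a sentiment label while leaving the rest of the text unchanged; the function and rule are hypothetical:

```python
# Toy sketch of counterfactually augmented data (CAD): a minimal edit
# (inserting a negation) flips the label. The rule-based edit here is a
# hypothetical illustration, not the approach proposed in the paper.

def make_counterfactual(text: str) -> tuple[str, str]:
    """Apply a minimal negation edit and return (new_text, new_label)."""
    if " is " in text:
        # Edit only one token span; everything else stays untouched.
        return text.replace(" is ", " is not ", 1), "negative"
    return text, "unknown"

original = ("The movie is great.", "positive")
counterfactual = make_counterfactual(original[0])
```

Real CAD pipelines rely on human annotators or learned editors rather than a fixed rule, but the contract is the same: a minimal, sufficient modification paired with the flipped label.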
no code implementations • 11 May 2024 • Qing Wu, Xu Guo, Lixuan Chen, Yanyan Liu, Dongming He, Xudong Wang, Xueli Chen, Yifeng Zhang, S. Kevin Zhou, Jingyi Yu, Yuyao Zhang
In this work, we propose Density neural representation (Diner), a novel unsupervised MAR method.
no code implementations • 15 Mar 2024 • Yongjie Wang, Tong Zhang, Xu Guo, Zhiqi Shen
Due to the lack of a rigorous definition of explainable AI (XAI), a plethora of research related to explainability, interpretability, and transparency has been developed to explain and analyze the model from various perspectives.
no code implementations • 7 Mar 2024 • Xu Guo, Yiqiang Chen
The recent surge in research focused on generating synthetic data from large language models (LLMs), especially for scenarios with limited data availability, marks a notable shift in Generative Artificial Intelligence (AI).
no code implementations • 5 Dec 2023 • Zilin Du, Haoxin Li, Xu Guo, Boyang Li
Comparing our method to direct training on synthetic data, we observed a significant improvement of 24.06% F1 with synthetic text and 26.42% F1 with synthetic images.
no code implementations • 15 Jun 2023 • Zilin Du, Yunxin Li, Xu Guo, Yidan Sun, Boyang Li
Contemporary news reporting increasingly features multimedia content, motivating research on multimedia event extraction.
no code implementations • 6 Nov 2022 • Xu Guo, Han Yu
Recent advances in NLP have been driven by a range of large-scale pretrained language models (PLMs).
1 code implementation • 6 Oct 2022 • Xu Guo, Boyang Li, Han Yu
Prompt tuning, or the conditioning of a frozen pretrained language model (PLM) with soft prompts learned from data, has demonstrated impressive performance on a wide range of NLP tasks.
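A minimal numpy sketch of the soft-prompt idea described above: trainable prompt vectors are prepended to the frozen PLM's token embeddings, so that only the prompt matrix would receive gradients. All shapes and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, prompt_len = 100, 16, 5

# Frozen PLM embedding table: kept fixed during prompt tuning.
embedding_table = rng.normal(size=(vocab_size, d_model))

# Soft prompt: the only trainable parameters in prompt tuning.
soft_prompt = rng.normal(size=(prompt_len, d_model))

def embed_with_prompt(token_ids):
    """Prepend the learned prompt vectors to the input token embeddings."""
    token_embeds = embedding_table[token_ids]           # (seq_len, d_model)
    return np.concatenate([soft_prompt, token_embeds])  # (prompt_len + seq_len, d_model)

inputs = embed_with_prompt(np.array([3, 7, 42]))
# The frozen PLM would consume this extended sequence; backpropagation
# updates soft_prompt only, leaving embedding_table untouched.
```

Because only `prompt_len * d_model` parameters are trained, prompt tuning is far cheaper than full fine-tuning, which is also why its sensitivity to training-set size matters.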
no code implementations • 27 Aug 2022 • Jiahui Chen, Xu Guo, Wensheng Gan, Shichen Wan, Philip S. Yu
Compared with traditional utility mining, OSUM can find more practical and meaningful patterns in real-life applications.
no code implementations • 25 Jun 2021 • Tianle Yue, Hang Yang, Zongliang Du, Chang Liu, Khalil I. Elkhodary, Shan Tang, Xu Guo
During offline training, a mapping function is built between high and low resolution representations of a given design domain.
1 code implementation • NAACL 2021 • Xu Guo, Boyang Li, Han Yu, Chunyan Miao
The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality.
no code implementations • 3 Dec 2020 • Xu Guo, Han Yu, Boyang Li, Hao Wang, Pengwei Xing, Siwei Feng, Zaiqing Nie, Chunyan Miao
In this paper, we propose the FedHumor approach for the recognition of humorous content in a personalized manner through Federated Learning (FL).
1 code implementation • 17 Oct 2020 • Guanghui Zhu, Zhuoer Xu, Xu Guo, Chunfeng Yuan, Yihua Huang
Extensive experiments on classification and regression datasets demonstrate that DIFER can significantly improve the performance of various machine learning algorithms and outperform current state-of-the-art AutoFE methods in terms of both efficiency and performance.
1 code implementation • 27 Feb 2020 • Lilun Du, Xu Guo, Wenguang Sun, Changliang Zou
We develop a new class of distribution-free multiple testing rules for false discovery rate (FDR) control under general dependence.
Methodology, Statistics Theory
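For context, the classical Benjamini–Hochberg step-up procedure is the standard baseline for FDR control; the paper proposes a different, distribution-free class of rules, so the sketch below shows only the familiar baseline:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Classic BH step-up procedure (baseline, not the paper's method):
    reject the k smallest p-values, where k is the largest i with
    p_(i) <= alpha * i / m."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
rejections = benjamini_hochberg(pvals, alpha=0.05)
```

BH guarantees FDR control under independence (and PRDS); handling general dependence without distributional assumptions is precisely the harder setting the paper targets.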
no code implementations • 10 Apr 2019 • Bin Liu, Qian Guo, Shucai Li, Benchao Liu, Yuxiao Ren, Yonghao Pang, Xu Guo, Lanbo Liu, Peng Jiang
According to the comprehensive qualitative analysis and quantitative comparison, ERSInvNet with the tier feature map, smoothness constraints, and depth weighting function together achieves the best performance.