no code implementations • EMNLP 2020 • Xin Wu, Yi Cai, Yang Kai, Tao Wang, Qing Li
Meta-embedding learning, which combines complementary information from different word embeddings, has shown superior performance across different Natural Language Processing tasks.
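As a rough illustration of the meta-embedding idea (not the method proposed in this paper), one of the simplest baselines concatenates L2-normalised vectors from several source embedding spaces; the toy `w2v` and `glove` lookups below are hypothetical placeholders for pre-trained tables:

```python
import numpy as np

def concat_meta_embedding(word, source_embeddings, dim_per_source):
    """Concatenate (L2-normalised) vectors for `word` from several source
    embedding lookups; sources missing the word contribute zeros."""
    parts = []
    for vocab in source_embeddings:
        vec = np.asarray(vocab.get(word, np.zeros(dim_per_source)), dtype=float)
        norm = np.linalg.norm(vec)
        parts.append(vec / norm if norm > 0 else vec)
    return np.concatenate(parts)

# Hypothetical toy lookups standing in for pre-trained word2vec / GloVe tables
w2v = {"bank": np.array([0.1, 0.3])}
glove = {"bank": np.array([0.2, -0.1])}
meta = concat_meta_embedding("bank", [w2v, glove], dim_per_source=2)  # 4-dim meta-embedding
```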
1 code implementation • COLING 2022 • Feng Ge, Weizhao Li, Haopeng Ren, Yi Cai
To this end, we present a Chinese sticker-based multimodal dataset for the sentiment analysis task (CSMSA).
no code implementations • NAACL (BioNLP) 2021 • Liwen Xu, Yan Zhang, Lei Hong, Yi Cai, Szui Sung
In this article, we describe our system for the MEDIQA2021 shared tasks.
no code implementations • Findings (ACL) 2022 • Weizhao Li, Junsheng Kong, Ben Liao, Yi Cai
Chatbot models have achieved remarkable progress in recent years but tend to yield contradictory responses.
1 code implementation • 8 Dec 2024 • Jiali Chen, Xusen Hei, Yuqi Xue, Yuancheng Wei, Jiayuan Xie, Yi Cai, Qing Li
Large multimodal models (LMMs) have shown remarkable performance in the visual commonsense reasoning (VCR) task, which aims to answer a multiple-choice question based on visual commonsense within an image.
no code implementations • 25 Nov 2024 • Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder
Recent literature highlights the critical role of neighborhood construction in deriving model-agnostic explanations, with a growing trend toward deploying generative models to improve synthetic instance quality, especially for explaining text classifiers.
no code implementations • 20 Nov 2024 • Huan Huang, Hongliang Zhang, Jide Yuan, Luyao Sun, Yitian Wang, Weidong Mei, Boya Di, Yi Cai, Zhu Han
Based on the derived analysis, the omnidirectional jamming impact of the proposed DIOS-based FPJ implemented with a constant-amplitude IOS depends on neither the quantization number nor the stochastic distribution of the DIOS coefficients, whereas this conclusion does not hold when a variable-amplitude IOS is used.
no code implementations • 18 Oct 2024 • Li Yuan, Yi Cai, Junsheng Huang
This method can effectively address the problem of insufficient information in the few-shot setting by guiding a large language model to generate supplementary background knowledge.
no code implementations • 3 Aug 2024 • Jintao Tan, Xize Cheng, Lingyu Xiong, Lei Zhu, Xiandong Li, Xianjia Wu, Kai Gong, Minglei Li, Yi Cai
Audio-driven talking head generation is a significant and challenging task applicable to various fields such as virtual avatars, film production, and online conferences.
no code implementations • 2 Jul 2024 • Jiexin Wang, Xitong Luo, Liuwen Cao, Hongkui He, Hailin Huang, Jiayuan Xie, Adam Jatowt, Yi Cai
CodeSecEval serves as the foundation for the automatic evaluation of code models in two crucial tasks: code generation and code repair, with a strong emphasis on security.
no code implementations • 4 Jun 2024 • Jiexin Wang, Adam Jatowt, Yi Cai
TAMLM is designed to enhance the understanding of temporal contexts and relations, DD integrates document timestamps as chronological markers, and TSER focuses on the temporal dynamics of "Person" entities, recognizing their inherent temporal significance.
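As a loose sketch of what a temporal-aware masking objective might look like (the paper does not spell out the TAMLM procedure here), the snippet below simply selects temporal expressions with a regular expression and replaces them with a mask token; the pattern and mask token are illustrative assumptions:

```python
import re

# Crude pattern for years, simple dates, month names and deictic time words
TEMPORAL_PATTERN = re.compile(
    r"\b(\d{4}|\d{1,2}/\d{1,2}/\d{2,4}|January|February|March|April|May|June|"
    r"July|August|September|October|November|December|today|yesterday|tomorrow)\b",
    re.IGNORECASE,
)

def mask_temporal_expressions(text, mask_token="[MASK]"):
    """Replace temporal expressions with a mask token so a masked-LM objective
    must recover them from the surrounding (temporal) context."""
    return TEMPORAL_PATTERN.sub(mask_token, text)

print(mask_temporal_expressions("The treaty was signed in 1998, two years after the war."))
# -> "The treaty was signed in [MASK], two years after the war."
```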
no code implementations • 11 Apr 2024 • Huan Huang, Hongliang Zhang, Weidong Mei, Jun Li, Yi Cai, A. Lee Swindlehurst, Zhu Han
Moreover, a theoretical analysis is conducted to quantify the impact of DISCO jamming attacks.
1 code implementation • 11 Mar 2024 • Li Yuan, Yi Cai, Haopeng Ren, Jiexin Wang
LMPM incorporates an external memory structure to learn and store the latent representations of logical patterns, which aids in generating logically consistent conclusions.
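The abstract does not detail LMPM's memory design, but the kind of external-memory read it alludes to can be sketched as attention over a bank of learnable slots; the module below is an assumed, simplified stand-in rather than the authors' architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentPatternMemory(nn.Module):
    """Generic external-memory read: attend over learnable key/value slots and
    mix the retrieved pattern back into the input representation."""
    def __init__(self, num_slots=64, dim=256):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim))
        self.values = nn.Parameter(torch.randn(num_slots, dim))

    def forward(self, query):                                  # query: (batch, dim)
        attn = F.softmax(query @ self.keys.t() / self.keys.size(-1) ** 0.5, dim=-1)
        retrieved = attn @ self.values                         # (batch, dim)
        return query + retrieved                               # residual mix of input and memory
```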
no code implementations • 25 Oct 2023 • Jiexin Wang, Liuwen Cao, Xitong Luo, Zhiping Zhou, Jiayuan Xie, Adam Jatowt, Yi Cai
Moreover, our study identifies weaknesses in existing models' ability to repair vulnerable code, even when provided with vulnerability information.
1 code implementation • 1 Oct 2023 • Huan Huang, Lipeng Dai, Hongliang Zhang, Chongfu Zhang, Zhongxing Tian, Yi Cai, A. Lee Swindlehurst, Zhu Han
In contrast to the extensive research on legitimate IRS-enhanced communications, this article presents an adversarial IRS-based fully-passive jammer (FPJ).
no code implementations • 30 Aug 2023 • Huan Huang, Lipeng Dai, Hongliang Zhang, Zhongxing Tian, Yi Cai, Chongfu Zhang, A. Lee Swindlehurst, Zhu Han
Numerical results are also presented to evaluate the effectiveness of the proposed anti-jamming precoder against the DIRS-based FPJs and the feasibility of the designed data frame used by the legitimate AP to estimate the statistical characteristics.
1 code implementation • 18 Aug 2023 • Yi Cai, Gerhard Wunder
Attribution methods shed light on the explainability of data-driven approaches such as deep learning models by uncovering the most influential features in a to-be-explained decision.
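For readers unfamiliar with attribution methods, a minimal example is gradient-times-input saliency; this is a generic baseline shown only to ground the terminology, not the method studied in this paper:

```python
import torch

def gradient_x_input(model, x, target_class):
    """Simple saliency attribution: gradient of the target-class logit with
    respect to the input, multiplied element-wise by the input itself."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                              # assumes (batch, num_classes) output
    logits[:, target_class].sum().backward()
    return (x.grad * x).detach()                   # same shape as x: one score per feature
```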
no code implementations • 7 Jul 2023 • Huan Huang, Hongliang Zhang, Yi Cai, A. Lee Swindlehurst, Zhu Han
Emerging intelligent reflecting surfaces (IRSs) significantly improve system performance, but they also pose a serious risk to physical-layer security.
no code implementations • 6 May 2023 • Da Ren, Yi Cai, Qing Li
Generative Adversarial Networks (GANs) have been studied in text generation to tackle the exposure bias problem.
1 code implementation • journal 2023 • Bingshan Zhu, Yi Cai, Haopeng Ren
In this paper, we use the commonsense relationship between words as a bridge to connect the words in each document.
no code implementations • 11 Feb 2023 • Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder
The importance of neighborhood construction in local explanation methods has already been highlighted in the literature.
no code implementations • 3 Feb 2023 • Dongsheng Xu, Qingbao Huang, Feng Shuang, Yi Cai
One possible reason is that current studies mainly focus on constructing the plane-level geometric relationship of scene text without depth information.
no code implementations • 1 Feb 2023 • Huan Huang, Ying Zhang, Hongliang Zhang, Yi Cai, A. Lee Swindlehurst, Zhu Han
A theoretical analysis of the proposed DIRS-based FPJ is derived to evaluate the impact of the DIRS-based jamming attacks.
no code implementations • 2 Dec 2022 • XiaoDong Li, Chenxin Zou, Yi Cai, Yuelong Zhu
The winner of the game is thus considered to have better qualities.
1 code implementation • 28 Nov 2022 • Li Yuan, Yi Cai, Jin Wang, Qing Li
This paper is the first to propose jointly performing MNER and MRE as a joint multimodal entity-relation extraction task (JMERE).
1 code implementation • 2023 2022 • Yuqi Bu, Liuwu Li, Jiayuan Xie, Qiong Liu, Yi Cai, Qingbao Huang, Qing Li
Referring expression comprehension (REC) aims to identify and locate a specific object in visual scenes referred to by a natural language expression.
1 code implementation • 7 Sep 2022 • Yi Cai, Arthur Zimek, Gerhard Wunder, Eirini Ntoutsi
Hate speech detection is a common downstream application of natural language processing (NLP) in the real world.
1 code implementation • 16 Jul 2022 • Zixuan Zhou, Xuefei Ning, Yi Cai, Jiashu Han, Yiping Deng, Yuhan Dong, Huazhong Yang, Yu Wang
Specifically, we train the supernet with a large sharing extent (an easier curriculum) at the beginning and gradually decrease the sharing extent of the supernet (a harder curriculum).
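A minimal sketch of such a curriculum, assuming a simple linear annealing of the sharing extent over training (the paper's actual schedule and sharing mechanism may differ):

```python
def sharing_extent(epoch, total_epochs, max_extent=8, min_extent=1):
    """Linearly anneal the weight-sharing extent from easy (large sharing)
    to hard (little sharing) as training progresses."""
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    extent = max_extent - progress * (max_extent - min_extent)
    return max(min_extent, round(extent))

# e.g. with 10 epochs: [8, 7, 6, 6, 5, 4, 3, 3, 2, 1]
schedule = [sharing_extent(e, 10) for e in range(10)]
```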
no code implementations • 27 Apr 2022 • Jiexin Wang, Adam Jatowt, Masatoshi Yoshikawa, Yi Cai
Time is an important aspect of documents and is used in a range of NLP and IR tasks.
no code implementations • 2 Dec 2021 • Junsheng Kong, Weizhao Li, Ben Liao, Jiezhong Qiu, Chang-Yu Hsieh, Yi Cai, Jinhui Zhu, Shengyu Zhang
Then, NES efficiently computes the network embedding from this representative subgraph.
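A rough sketch of the "embed a representative subgraph" idea, using DeepWalk-style random walks plus skip-gram rather than the actual NES algorithm; the node sampling and hyperparameters below are assumptions:

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def subgraph_embedding(graph, sampled_nodes, walk_len=20, walks_per_node=5, dim=64):
    """Restrict the graph to a sampled node set, run short random walks on the
    induced subgraph, and train a skip-gram model on the walks."""
    sub = graph.subgraph(sampled_nodes)
    walks = []
    for node in sub.nodes:
        for _ in range(walks_per_node):
            walk, cur = [str(node)], node
            for _ in range(walk_len - 1):
                nbrs = list(sub.neighbors(cur))
                if not nbrs:
                    break
                cur = random.choice(nbrs)
                walk.append(str(cur))
            walks.append(walk)
    model = Word2Vec(walks, vector_size=dim, window=5, min_count=1, sg=1)
    return {n: model.wv[str(n)] for n in sub.nodes}
```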
1 code implementation • 30 Sep 2021 • Yi Cai, Arthur Zimek, Eirini Ntoutsi
The importance of the neighborhood for training a local surrogate model to approximate the local decision boundary of a black-box classifier has already been highlighted in the literature.
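A minimal sketch of such a local surrogate, assuming a Gaussian neighborhood and a ridge-regression surrogate (a LIME-style baseline, not this paper's contribution):

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box_predict, instance, n_samples=500, scale=0.3):
    """Sample a Gaussian neighborhood around `instance`, query the black box,
    and fit a proximity-weighted linear surrogate whose coefficients serve as
    a local explanation."""
    rng = np.random.default_rng(0)
    neighbors = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    preds = black_box_predict(neighbors)                        # class scores or probabilities
    weights = np.exp(-np.linalg.norm(neighbors - instance, axis=1) ** 2 / scale)
    surrogate = Ridge(alpha=1.0).fit(neighbors, preds, sample_weight=weights)
    return surrogate.coef_                                      # per-feature local importance
```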
no code implementations • 15 Sep 2021 • Junsheng Kong, Weizhao Li, Zeyi Liu, Ben Liao, Jiezhong Qiu, Chang-Yu Hsieh, Yi Cai, Shengyu Zhang
In this work, we show that with merely a small fraction of contexts (Q-contexts) that are typical of the whole corpus (and their mutual information with words), one can construct high-quality word embeddings with negligible errors.
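A simplified illustration of building embeddings from a reduced set of contexts via PPMI factorisation; this is a generic sketch assuming a dense word-by-context count matrix, not the Q-contexts construction itself:

```python
import numpy as np

def embeddings_from_contexts(counts, k=50):
    """Factorise a positive PMI matrix computed against a reduced set of
    context columns; rows index words, columns the sampled contexts."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (p_w * p_c))
    ppmi = np.nan_to_num(np.maximum(pmi, 0.0), nan=0.0, posinf=0.0, neginf=0.0)
    U, S, _ = np.linalg.svd(ppmi, full_matrices=False)
    k = min(k, len(S))
    return U[:, :k] * np.sqrt(S[:k])                 # one k-dim embedding per word (row)
```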
no code implementations • AAAI Workshop AdvML 2022 • Yi Cai, Xuefei Ning, Huazhong Yang, Yu Wang
It provides high scalability because the number of paths within an EIO network grows exponentially with the network depth.
no code implementations • 2 Feb 2021 • Guodong Yin, Yi Cai, Juejian Wu, Zhengyang Duan, Zhenhua Zhu, Yongpan Liu, Yu Wang, Huazhong Yang, Xueqing Li
Compute-in-memory (CiM) is a promising approach to alleviating the memory wall problem for domain-specific applications.
no code implementations • COLING 2020 • Haopeng Ren, Yi Cai, Xiaofeng Chen, Guohua Wang, Qing Li
Relation Classification (RC) plays an important role in natural language processing (NLP).
1 code implementation • COLING 2020 • Changmeng Zheng, Yi Cai, Guanjie Zhang, Qing Li
Entities constitute a major proportion of text summaries and build up their topics.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Junsheng Kong, Zhicheng Zhong, Yi Cai, Xin Wu, Da Ren
Neural response generative models have achieved remarkable progress in recent years but tend to yield irrelevant and uninformative responses.
no code implementations • ACL 2020 • Qingbao Huang, Jielong Wei, Yi Cai, Changmeng Zheng, Junying Chen, Ho-fung Leung, Qing Li
Visual question answering aims to answer natural language questions about a given image.
no code implementations • IJCNLP 2019 • Xingwei Tan, Yi Cai, Changxi Zhu
Aspect-level sentiment classification, a fine-grained sentiment analysis task, has received considerable attention in recent years.
1 code implementation • IJCNLP 2019 • Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, Guandong Xu
We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels.
Ranked #11 on Named Entity Recognition (NER) on GENIA
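A generic sketch of a boundary-then-classify scheme for nested NER (detect entity start/end tokens, then classify the candidate spans they delimit); this is an assumed simplification, not the paper's exact model:

```python
import torch
import torch.nn as nn

class BoundaryAwareNER(nn.Module):
    """Predict start/end boundaries per token, then classify candidate spans."""
    def __init__(self, hidden_dim, num_labels, max_span_len=8):
        super().__init__()
        self.encoder = nn.LSTM(hidden_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        self.start_clf = nn.Linear(hidden_dim, 2)      # is this token an entity start?
        self.end_clf = nn.Linear(hidden_dim, 2)        # is this token an entity end?
        self.span_clf = nn.Linear(2 * hidden_dim, num_labels)
        self.max_span_len = max_span_len

    def forward(self, token_embs):                     # (batch, seq, hidden_dim)
        h, _ = self.encoder(token_embs)
        start_logits, end_logits = self.start_clf(h), self.end_clf(h)
        starts, ends = start_logits.argmax(-1), end_logits.argmax(-1)
        span_logits = []
        for b in range(h.size(0)):
            for i in torch.nonzero(starts[b]).flatten().tolist():
                for j in torch.nonzero(ends[b]).flatten().tolist():
                    if i <= j < i + self.max_span_len:     # spans may nest/overlap
                        span_repr = torch.cat([h[b, i], h[b, j]], dim=-1)
                        span_logits.append((b, i, j, self.span_clf(span_repr)))
        return start_logits, end_logits, span_logits
```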
no code implementations • COLING 2016 • Kai Yang, Yi Cai, Zhenhong Chen, Ho-fung Leung, Raymond Lau
Latent Dirichlet Allocation (LDA) and its variants have been widely used to discover latent topics in textual documents.
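A standard LDA example with scikit-learn, included only to ground the terminology; the paper studies LDA variants beyond this vanilla setup, and the toy corpus below is illustrative:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the stock market fell amid interest rate fears",
    "the team won the match in the final minute",
    "central bank raises interest rates again",
    "players celebrate after winning the championship",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)   # per-document topic mixtures
```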