no code implementations • 15 Dec 2022 • Sijie Mai, Ya Sun, Haifeng Hu
To assist correlation learning, we feed training pairs to the model in order of difficulty via the proposed curriculum learning, which consists of elaborately designed scoring and feeding functions.
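The scoring and feeding functions above can be sketched in a minimal form. This is an assumption-laden toy, not the paper's actual functions: here the score is a per-pair loss (lower = easier) and the feeding function is a linear pacing schedule.

```python
def scoring_fn(losses):
    """Hypothetical scoring function: order training pairs easiest-first,
    using a per-pair loss as the difficulty proxy (an assumption; the
    paper's actual scoring function may differ)."""
    return sorted(range(len(losses)), key=lambda i: losses[i])


def feeding_fn(step, total_steps, n_pairs, start_frac=0.2):
    """Hypothetical linear pacing: how many of the easiest pairs the model
    may see at a given training step."""
    frac = min(1.0, start_frac + (1.0 - start_frac) * step / total_steps)
    return int(frac * n_pairs)


# Toy run: 5 pairs whose difficulty is given by a loss value.
losses = [0.9, 0.1, 0.5, 0.3, 0.7]
order = scoring_fn(losses)            # easiest-first indices
pools = []
for step in (0, 5, 10):
    k = feeding_fn(step, 10, len(order))
    pools.append(order[:k])           # pairs fed to the model at this step
```

The pacing fraction grows from `start_frac` to 1.0 over training, so the model starts on the easiest subset and eventually sees every pair.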
no code implementations • 22 Nov 2022 • Jianfeng Wu, Sijie Mai, Haifeng Hu
In this paper, we introduce Relation-dependent Contrastive Learning (ReCoLe) for inductive relation prediction, which adapts contrastive learning with a novel sampling method based on a clustering algorithm to enhance the role of relations and improve generalization to unseen relations.
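One way such clustering-based sampling can work is to draw negatives from the anchor's own cluster, so they are structurally similar ("hard") but still incorrect. The sketch below is a hypothetical instantiation of that idea, not ReCoLe's actual sampler:

```python
import random


def cluster_negative_sampling(clusters, entity_cluster, anchor, n_neg, rng):
    """Hypothetical sampling step: draw negative entities from the anchor's
    own cluster, yielding hard negatives for the contrastive loss."""
    pool = [e for e in clusters[entity_cluster[anchor]] if e != anchor]
    return rng.sample(pool, min(n_neg, len(pool)))


# Toy clustering of six entities into two clusters.
clusters = {0: ["e1", "e2", "e3"], 1: ["e4", "e5", "e6"]}
entity_cluster = {"e1": 0, "e2": 0, "e3": 0, "e4": 1, "e5": 1, "e6": 1}
negs = cluster_negative_sampling(clusters, entity_cluster, "e1", 2,
                                 random.Random(0))
```

Sampling within the cluster (rather than uniformly over all entities) is what makes the negatives informative for distinguishing fine-grained relations.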
1 code implementation • 31 Oct 2022 • Sijie Mai, Ying Zeng, Haifeng Hu
To this end, we introduce the multimodal information bottleneck (MIB), aiming to learn a powerful and sufficient multimodal representation that is free of redundancy and to filter out noisy information in unimodal representations.
Multimodal Emotion Recognition • Multimodal Sentiment Analysis
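A common variational surrogate for an information-bottleneck objective combines a task loss (keeping the representation sufficient for the label) with a KL term toward a standard-normal prior (compressing away redundancy and noise). The sketch below assumes that standard formulation with a diagonal-Gaussian posterior; the trade-off weight `beta` is a hypothetical hyperparameter, and this is not necessarily MIB's exact objective:

```python
import math


def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, logvar))


def ib_objective(task_loss, mu, logvar, beta=1e-3):
    """Hypothetical variational-IB surrogate: the task term preserves
    predictive information, the KL term compresses the multimodal
    representation toward the prior, filtering unimodal noise."""
    return task_loss + beta * gaussian_kl(mu, logvar)


# A fused representation whose posterior already matches the prior pays no
# compression penalty.
loss = ib_objective(task_loss=0.7, mu=[0.0, 0.0], logvar=[0.0, 0.0])
```

Larger `beta` compresses harder (less redundancy, but risk of discarding task-relevant signal); smaller `beta` keeps the representation closer to an unconstrained fusion.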
1 code implementation • 12 May 2022 • Jiahua Rao, Shuangjia Zheng, Sijie Mai, Yuedong Yang
To address these problems, we propose a novel Communicative Subgraph representation learning for Multi-relational Inductive drug-Gene interactions prediction (CoSMIG), where the predictions of drug-gene relations are made through subgraph patterns, and thus are naturally inductive for unseen drugs/genes without retraining or utilizing external domain features.
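The subgraph-pattern idea above can be illustrated with a minimal extraction step: gather every node within k hops of either endpoint of a drug–gene pair via BFS. This is a generic enclosing-subgraph sketch under that assumption, not CoSMIG's actual extraction procedure:

```python
from collections import deque


def enclosing_subgraph(adj, pair, k):
    """Hypothetical extraction step: collect all nodes within k hops of
    either endpoint of a drug-gene pair (BFS over an adjacency dict).
    Predicting from such local patterns is what makes the approach
    inductive for unseen drugs/genes."""
    seen = set(pair)
    frontier = deque((n, 0) for n in pair)
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue
        for nb in adj.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen


# Toy path graph 0-1-2-3-4; the (1, 3) pair with k=1 pulls in all neighbours.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
sub = enclosing_subgraph(adj, (1, 3), 1)
```

Because the prediction depends only on the extracted local structure, a new drug or gene node can be scored as soon as its neighbourhood is known, without retraining.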
no code implementations • Findings (EMNLP) 2021 • Ying Zeng, Sijie Mai, Haifeng Hu
On the other hand, noisy information hidden in each modality interferes with the learning of correct cross-modal dynamics.
no code implementations • 4 Sep 2021 • Sijie Mai, Ying Zeng, Shuangjia Zheng, Haifeng Hu
Specifically, we simultaneously perform intra-/inter-modal contrastive learning and semi-contrastive learning (that is why we call it hybrid contrastive learning), with which the model can fully explore cross-modal interactions, preserve inter-class relationships and reduce the modality gap.
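The intra-/inter-modal contrastive terms above can each be seen as an instance of a standard InfoNCE loss, applied to same-modality or cross-modality pairs; the semi-contrastive variant is described in the excerpt but not sketched here. The code below is plain InfoNCE under those assumptions, not the paper's exact hybrid loss:

```python
import math


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))


def info_nce(anchor, positive, negatives, tau=0.1):
    """Standard InfoNCE term: pull the positive toward the anchor, push
    the negatives away, with temperature tau."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))


# A well-aligned cross-modal pair (e.g. text and audio embeddings of the
# same utterance) costs far less than a mismatched one.
aligned = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
mismatch = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```

Minimizing the cross-modal instance of this loss is what shrinks the modality gap: embeddings of the same sample from different modalities are driven together while different samples are pushed apart.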
1 code implementation • 17 Aug 2021 • Jianfeng Wu, Sijie Mai, Haifeng Hu
In this paper, we introduce Graph Capsule Aggregation (GraphCAGE) to model unaligned multimodal sequences with a graph-based neural model and a Capsule Network.
no code implementations • 26 Jul 2021 • Shuangjia Zheng, Sijie Mai, Ya Sun, Haifeng Hu, Yuedong Yang
In this way, we find that the model can quickly adapt to few-shot relationships using only a handful of known facts in inductive settings.
1 code implementation • 16 Dec 2020 • Sijie Mai, Shuangjia Zheng, Yuedong Yang, Haifeng Hu
Relation prediction for knowledge graphs aims at predicting missing relationships between entities.
no code implementations • 27 Nov 2020 • Sijie Mai, Songlong Xing, Jiaxuan He, Ying Zeng, Haifeng Hu
Most existing works focus on aligned fusion of the three modalities, mostly at the word level, to accomplish this task, which is impractical in real-world scenarios.
1 code implementation • 18 Nov 2019 • Sijie Mai, Haifeng Hu, Songlong Xing
Visualization of the learned embeddings suggests that the joint embedding space learned by our method is discriminative.
no code implementations • ACL 2019 • Sijie Mai, Haifeng Hu, Songlong Xing
We propose a general strategy named 'divide, conquer and combine' for multimodal fusion.