Search Results for author: Sijie Mai

Found 12 papers, 5 papers with code

Curriculum Learning Meets Weakly Supervised Modality Correlation Learning

no code implementations · 15 Dec 2022 · Sijie Mai, Ya Sun, Haifeng Hu

To assist the correlation learning, we feed training pairs to the model in order of difficulty via the proposed curriculum learning strategy, which consists of elaborately designed scoring and feeding functions.

Multimodal Sentiment Analysis · Self-Supervised Learning
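The abstract names scoring and feeding functions but does not specify them. A minimal sketch of the general idea, assuming a loss-based difficulty scorer and a linear pacing function (both illustrative choices, not the paper's actual design):

```python
import numpy as np

def score_difficulty(losses):
    # Proxy difficulty score: rank training pairs by current loss,
    # easiest (lowest loss) first.
    return np.argsort(losses)

def feeding_fraction(step, total_steps, start=0.2):
    # Linear pacing ("feeding") function: the fraction of the sorted
    # data the model is allowed to see at this training step.
    return min(1.0, start + (1.0 - start) * step / total_steps)

def curriculum_batch(losses, step, total_steps):
    # Expose only the easiest fraction of pairs at the current step.
    order = score_difficulty(losses)
    n = max(1, int(feeding_fraction(step, total_steps) * len(order)))
    return order[:n]

losses = np.array([0.9, 0.1, 0.5, 0.3])
print(curriculum_batch(losses, step=0, total_steps=10))  # easiest 20%: [1]
```

As training progresses, `feeding_fraction` grows to 1.0 and the full dataset, including the hardest pairs, is fed to the model.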

Relation-dependent Contrastive Learning with Cluster Sampling for Inductive Relation Prediction

no code implementations · 22 Nov 2022 · Jianfeng Wu, Sijie Mai, Haifeng Hu

In this paper, we introduce Relation-dependent Contrastive Learning (ReCoLe) for inductive relation prediction, which adapts contrastive learning with a novel sampling method based on a clustering algorithm to enhance the role of relations and improve generalization to unseen relations.

Contrastive Learning · Inductive Relation Prediction +1
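The cluster-based sampling idea can be sketched roughly as follows: assign entity embeddings to clusters, then draw negatives for an anchor from clusters other than its own. This is a generic illustration, not ReCoLe's actual sampler; the centroids and hard assignment below are assumptions.

```python
import numpy as np

def assign_clusters(embeddings, centroids):
    # Hard assignment: each embedding goes to its nearest centroid.
    d = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

def cluster_negative_sampling(embeddings, centroids, anchor_idx, k, rng):
    # Draw k negatives from clusters other than the anchor's, so the
    # negatives are semantically distant under the clustering.
    labels = assign_clusters(embeddings, centroids)
    candidates = np.where(labels != labels[anchor_idx])[0]
    return rng.choice(candidates, size=k, replace=False)

emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
cents = np.array([[0.0, 0.0], [5.0, 5.0]])
rng = np.random.default_rng(0)
print(cluster_negative_sampling(emb, cents, anchor_idx=0, k=2, rng=rng))
```

With anchor 0 (cluster 0), both negatives come from cluster 1, i.e. from the set {2, 3}.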

Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations

1 code implementation · 31 Oct 2022 · Sijie Mai, Ying Zeng, Haifeng Hu

To this end, we introduce the multimodal information bottleneck (MIB), aiming to learn a powerful and sufficient multimodal representation that is free of redundancy and to filter out noisy information in unimodal representations.

Multimodal Emotion Recognition · Multimodal Sentiment Analysis
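The information-bottleneck objective behind this kind of method can be sketched with a variational bound: a prediction term (sufficiency) plus a KL compression term (minimality). This is the standard variational-IB form, not necessarily the exact MIB loss; `beta` and the Gaussian encoder are assumptions.

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ): the compression term
    # that penalises information the representation z keeps about x.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def ib_loss(mu, logvar, pred_nll, beta=1e-3):
    # Variational information bottleneck: fit the label (sufficiency)
    # while compressing the representation (minimality, filters noise).
    return pred_nll + beta * kl_to_standard_normal(mu, logvar).mean()

mu = np.zeros((2, 3))
logvar = np.zeros((2, 3))
print(ib_loss(mu, logvar, pred_nll=1.0))  # KL term is 0 at N(0, I): 1.0
```

When the encoder already matches the standard-normal prior, only the prediction loss remains; larger `beta` trades accuracy for stronger compression.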

Communicative Subgraph Representation Learning for Multi-Relational Inductive Drug-Gene Interaction Prediction

1 code implementation · 12 May 2022 · Jiahua Rao, Shuangjia Zheng, Sijie Mai, Yuedong Yang

To address these problems, we propose a novel Communicative Subgraph representation learning method for Multi-relational Inductive drug-Gene interaction prediction (CoSMIG), where predictions of drug-gene relations are made through subgraph patterns and are thus naturally inductive for unseen drugs/genes without retraining or external domain features.

Gene Interaction Prediction · Representation Learning
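The subgraph-pattern idea can be illustrated with a minimal enclosing-subgraph extractor: the union of the k-hop neighbourhoods of a (drug, gene) pair. This is a generic sketch of why such predictions are inductive, not CoSMIG's actual extraction code.

```python
import numpy as np

def k_hop_neighbors(adj, node, k):
    # Nodes reachable from `node` within k hops of adjacency matrix `adj`.
    frontier, seen = {node}, {node}
    for _ in range(k):
        nxt = set()
        for u in frontier:
            nxt |= set(np.nonzero(adj[u])[0].tolist())
        frontier = nxt - seen
        seen |= nxt
    return seen

def enclosing_subgraph(adj, drug, gene, k=1):
    # Enclosing subgraph of a (drug, gene) pair: predictions computed over
    # this local pattern need no retraining for unseen nodes, since only
    # structure (not node identity) enters the model.
    return sorted(k_hop_neighbors(adj, drug, k) | k_hop_neighbors(adj, gene, k))

# Path graph 0-1-2-3
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(enclosing_subgraph(adj, 0, 3, k=1))  # [0, 1, 2, 3]
```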

Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis

no code implementations · 4 Sep 2021 · Sijie Mai, Ying Zeng, Shuangjia Zheng, Haifeng Hu

Specifically, we simultaneously perform intra-/inter-modal contrastive learning and semi-contrastive learning (hence the name hybrid contrastive learning), allowing the model to fully explore cross-modal interactions, preserve inter-class relationships, and reduce the modality gap.

Contrastive Learning · Multimodal Sentiment Analysis
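The inter-modal contrastive component is typically an InfoNCE-style loss: an anchor embedding from one modality should score its paired embedding from another modality above all other batch samples. A minimal sketch of that standard loss (not the paper's exact hybrid formulation):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    # Cross-modal InfoNCE: anchors and positives are paired row-wise,
    # e.g. text and audio embeddings of the same utterance. Off-diagonal
    # batch entries serve as negatives, shrinking the modality gap.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

a = np.array([[1.0, 0.0], [0.0, 1.0]])
print(info_nce(a, a), info_nce(a, a[::-1]))
```

Correctly paired modalities give a near-zero loss; mismatched pairings are heavily penalised.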

Graph Capsule Aggregation for Unaligned Multimodal Sequences

1 code implementation · 17 Aug 2021 · Jianfeng Wu, Sijie Mai, Haifeng Hu

In this paper, we introduce Graph Capsule Aggregation (GraphCAGE) to model unaligned multimodal sequences with a graph-based neural model and a Capsule Network.

Multimodal Sentiment Analysis
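The Capsule Network side of such a model usually relies on dynamic routing-by-agreement to aggregate node-level capsules into sequence-level ones. A minimal generic routing sketch (the shapes and iteration count are illustrative; this is not GraphCAGE's actual aggregation):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, axis=-1):
    # Capsule non-linearity: keeps direction, squashes length into [0, 1).
    n2 = (s**2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + 1e-9)

def routing_by_agreement(u_hat, iters=3):
    # u_hat: (num_in, num_out, dim) prediction vectors from input capsules
    # (e.g. graph nodes) to output capsules (sequence-level summaries).
    b = np.zeros(u_hat.shape[:2])           # routing logits
    for _ in range(iters):
        c = softmax(b, axis=1)              # coupling coefficients
        s = (c[:, :, None] * u_hat).sum(0)  # weighted sum per output capsule
        v = squash(s)                       # output capsule vectors
        b += (u_hat * v[None]).sum(-1)      # agreement update
    return v
```

Input capsules that agree with an output capsule's direction get routed to it more strongly on each iteration, so aggregation adapts to the content rather than to a fixed alignment.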

Subgraph-aware Few-Shot Inductive Link Prediction via Meta-Learning

no code implementations · 26 Jul 2021 · Shuangjia Zheng, Sijie Mai, Ya Sun, Haifeng Hu, Yuedong Yang

In this way, we find that the model can quickly adapt to few-shot relations using only a handful of known facts in the inductive setting.

Inductive Link Prediction · Knowledge Graphs +2
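The quick adaptation described here is the hallmark of a MAML-style inner loop: a few gradient steps on the handful of support facts for a new relation. A toy sketch with a linear scorer and squared-error proxy loss (illustrative assumptions, not the paper's model):

```python
import numpy as np

def inner_adapt(params, support_x, support_y, lr=0.1, steps=5):
    # Meta-learning inner loop: adapt a linear triple scorer to a
    # few-shot relation using only its support facts.
    w = params.copy()
    for _ in range(steps):
        pred = support_x @ w
        grad = support_x.T @ (pred - support_y) / len(support_y)
        w -= lr * grad
    return w

x = np.array([[1.0, 0.0], [0.0, 1.0]])   # features of 2 support facts
y = np.array([1.0, -1.0])                # their target scores
w0 = np.zeros(2)                         # meta-learned initialisation
w = inner_adapt(w0, x, y)
print(np.mean((x @ w - y) ** 2))         # lower than the initial error
```

In the full meta-learning setup, the outer loop would then update the initialisation `w0` so that this few-step adaptation works across many relations.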

Analyzing Unaligned Multimodal Sequence via Graph Convolution and Graph Pooling Fusion

no code implementations · 27 Nov 2020 · Sijie Mai, Songlong Xing, Jiaxuan He, Ying Zeng, Haifeng Hu

Most existing works focus on aligned fusion of the three modalities, mostly at the word level, to accomplish this task, which is impractical in real-world scenarios.
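The graph convolution and graph pooling fusion named in the title can be sketched generically: a normalised GCN layer over a modality's sequence graph, followed by a pooling readout that collapses node features into one fused vector. The mean-pooling readout is an illustrative stand-in, not the paper's actual pooling fusion.

```python
import numpy as np

def graph_conv(adj, x, w):
    # One GCN layer with symmetric normalisation over a self-looped
    # adjacency matrix: h = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
    a = adj + np.eye(len(adj))
    d = a.sum(1)
    a_norm = a / np.sqrt(np.outer(d, d))
    return np.maximum(a_norm @ x @ w, 0.0)

def graph_pool(x):
    # Readout: collapse node features of one (unaligned) modality's
    # sequence graph into a single vector for fusion.
    return x.mean(axis=0)

adj = np.array([[0.0, 1.0], [1.0, 0.0]])  # 2-node sequence graph
h = graph_conv(adj, np.ones((2, 3)), np.ones((3, 4)))
print(graph_pool(h).shape)  # (4,)
```

Because the graph is built over each modality's own timeline, no word-level alignment between modalities is required before fusion.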
