Search Results for author: Haifeng Hu

Found 21 papers, 8 papers with code

Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering

1 code implementation · EMNLP 2020 · Zujie Liang, Weitao Jiang, Haifeng Hu, Jiaying Zhu

In the task of Visual Question Answering (VQA), most state-of-the-art models tend to learn spurious correlations in the training set and hence perform poorly on out-of-distribution test data.

Contrastive Learning · Question Answering +1

Curriculum Learning Meets Weakly Supervised Modality Correlation Learning

no code implementations · 15 Dec 2022 · Sijie Mai, Ya Sun, Haifeng Hu

To assist the correlation learning, we feed training pairs to the model in order of difficulty via the proposed curriculum learning strategy, which consists of elaborately designed scoring and feeding functions.

Multimodal Sentiment Analysis · Self-Supervised Learning
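The scoring-and-feeding idea in the abstract above can be sketched as follows. This is a minimal, hypothetical illustration of curriculum-style feeding, not the paper's actual functions: `difficulty_score` is a stand-in scorer, and the linear pacing schedule in `feeding_schedule` is an assumed choice.

```python
# Hypothetical curriculum-learning sketch: a scoring function ranks training
# pairs by difficulty, and a feeding function decides how many of the easiest
# pairs the model may see at each training step.

def difficulty_score(pair):
    # Stand-in scorer: treat longer text as harder (illustrative only).
    text, _label = pair
    return len(text)

def feeding_schedule(step, total_steps, n_samples, warmup_frac=0.2):
    # Linear pacing: start with the easiest 20% of pairs, grow toward the full set.
    frac = min(1.0, warmup_frac + (1 - warmup_frac) * step / total_steps)
    return max(1, int(frac * n_samples))

def curriculum_batches(pairs, total_steps):
    # Rank once by difficulty, then release progressively harder pairs.
    ranked = sorted(pairs, key=difficulty_score)
    for step in range(total_steps):
        k = feeding_schedule(step, total_steps, len(ranked))
        yield ranked[:k]  # the model sees only the k easiest pairs this step
```

In practice the scoring function would reflect model- or data-driven difficulty (e.g. per-sample loss) rather than a surface feature like length.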

An open unified deep graph learning framework for discovering drug leads

1 code implementation · 6 Dec 2022 · Yueming Yin, Haifeng Hu, Zhen Yang, Jitao Yang, Chun Ye, JianSheng Wu, Wilson Wen Bin Goh

However, this is non-ideal, as clumsy integration of incompatible models increases research overheads, and may even reduce success rates in drug discovery.

Drug Discovery · Graph Attention +3

Relation-dependent Contrastive Learning with Cluster Sampling for Inductive Relation Prediction

no code implementations · 22 Nov 2022 · Jianfeng Wu, Sijie Mai, Haifeng Hu

In this paper, we introduce Relation-dependent Contrastive Learning (ReCoLe) for inductive relation prediction, which adapts contrastive learning with a novel sampling method based on a clustering algorithm to enhance the role of relations and improve generalization to unseen relations.

Contrastive Learning · Inductive Relation Prediction

Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations

1 code implementation · 31 Oct 2022 · Sijie Mai, Ying Zeng, Haifeng Hu

To this end, we introduce the multimodal information bottleneck (MIB), aiming to learn a powerful and sufficient multimodal representation that is free of redundancy and to filter out noisy information in unimodal representations.

Multimodal Emotion Recognition · Multimodal Sentiment Analysis
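An information-bottleneck objective of the kind described above is commonly approximated with a variational bound: keep task performance high while penalizing how much the representation retains about the input. The numpy sketch below shows this generic variational form; the weight `beta` and the Gaussian-prior KL term are standard VIB assumptions, not necessarily the paper's exact MIB objective.

```python
import numpy as np

# Generic variational information-bottleneck penalty: the compression term
# I(z; x) is upper-bounded by KL( N(mu, sigma^2) || N(0, I) ), where (mu,
# log_var) parameterize the encoder's Gaussian over the representation z.

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL divergence between a diagonal Gaussian and N(0, I),
    # summed over representation dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def vib_loss(task_loss, mu, log_var, beta=1e-3):
    # Total objective: predict well (task_loss) while compressing z (KL term).
    return task_loss + beta * np.mean(kl_to_standard_normal(mu, log_var))
```

Setting `beta = 0` recovers ordinary supervised training; larger `beta` trades accuracy for a more compressed, noise-filtered representation.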

Multimodal Contrastive Learning via Uni-Modal Coding and Cross-Modal Prediction for Multimodal Sentiment Analysis

no code implementations · 26 Oct 2022 · Ronghao Lin, Haifeng Hu

The former resembles encoding a robust uni-modal representation, while the latter resembles integrating interactive information across modalities; both are critical to learning an effective multimodal representation.

Contrastive Learning · Multimodal Sentiment Analysis +1

PSA-Det3D: Pillar Set Abstraction for 3D object Detection

no code implementations · 20 Oct 2022 · Zhicong Huang, Jingwen Zhao, Zhijie Zheng, Dihu Chena, Haifeng Hu

In this paper, we propose pillar set abstraction (PSA) and foreground point compensation (FPC), and design a point-based detection network, PSA-Det3D, to improve detection performance on small objects.

3D Object Detection · object-detection +1

Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis

no code implementations · 4 Sep 2021 · Sijie Mai, Ying Zeng, Shuangjia Zheng, Haifeng Hu

Specifically, we simultaneously perform intra-/inter-modal contrastive learning and semi-contrastive learning (that is why we call it hybrid contrastive learning), with which the model can fully explore cross-modal interactions, preserve inter-class relationships and reduce the modality gap.

Contrastive Learning · Multimodal Sentiment Analysis
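The inter-modal contrastive component described above is typically built on an InfoNCE-style loss: embeddings of the same sample from two modalities are pulled together, while mismatched pairs in the batch are pushed apart. The numpy sketch below is a generic InfoNCE formulation, not the paper's exact hybrid (semi-)contrastive objective; the modality names and temperature are illustrative.

```python
import numpy as np

# Illustrative InfoNCE-style inter-modal contrastive loss: z_a and z_b are
# batches of embeddings from two modalities (e.g. text and audio), where row i
# of each matrix comes from the same underlying sample.

def info_nce(z_a, z_b, temperature=0.1):
    # Normalize so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal: sample i in modality A matches i in B.
    return -np.mean(np.diag(log_probs))
```

The "semi-contrastive" and intra-modal terms in the paper would add further loss components on top of this basic cross-modal alignment.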

Graph Capsule Aggregation for Unaligned Multimodal Sequences

1 code implementation · 17 Aug 2021 · Jianfeng Wu, Sijie Mai, Haifeng Hu

In this paper, we introduce Graph Capsule Aggregation (GraphCAGE) to model unaligned multimodal sequences with graph-based neural model and Capsule Network.

Multimodal Sentiment Analysis

Subgraph-aware Few-Shot Inductive Link Prediction via Meta-Learning

no code implementations · 26 Jul 2021 · Shuangjia Zheng, Sijie Mai, Ya Sun, Haifeng Hu, Yuedong Yang

In this way, we find the model can quickly adapt to few-shot relations using only a handful of known facts in an inductive setting.

Inductive Link Prediction · Knowledge Graphs +1

LPF: A Language-Prior Feedback Objective Function for De-biased Visual Question Answering

1 code implementation · 29 May 2021 · Zujie Liang, Haifeng Hu, Jiaying Zhu

Most existing Visual Question Answering (VQA) systems tend to rely overly on language bias and hence fail to reason from visual clues.

Question Answering · Visual Question Answering

Analyzing Unaligned Multimodal Sequence via Graph Convolution and Graph Pooling Fusion

no code implementations · 27 Nov 2020 · Sijie Mai, Songlong Xing, Jiaxuan He, Ying Zeng, Haifeng Hu

Most existing works focus on aligned fusion of the three modalities, mostly at the word level, to accomplish this task, which is impractical in real-world scenarios.

Universal Multi-Source Domain Adaptation

no code implementations · 5 Nov 2020 · Yueming Yin, Zhen Yang, Haifeng Hu, Xiaofu Wu

A recent study reveals that knowledge can be transferred from one source domain to an unknown target domain, a setting called Universal Domain Adaptation (UDA).

Universal Domain Adaptation · Unsupervised Domain Adaptation

Unveiling Class-Labeling Structure for Universal Domain Adaptation

no code implementations · 10 Oct 2020 · Yueming Yin, Zhen Yang, Xiaofu Wu, Haifeng Hu

As a more practical setting for unsupervised domain adaptation, Universal Domain Adaptation (UDA) is recently introduced, where the target label set is unknown.

Universal Domain Adaptation · Unsupervised Domain Adaptation

Adaptive Interaction Modeling via Graph Operations Search

1 code implementation · CVPR 2020 · Haoxin Li, Wei-Shi Zheng, Yu Tao, Haifeng Hu, Jian-Huang Lai

We propose to search the network structures with differentiable architecture search mechanism, which learns to construct adaptive structures for different videos to facilitate adaptive interaction modeling.

Action Analysis
