Search Results for author: Yawen Cui

Found 11 papers, 3 papers with code

FusionMamba: Dynamic Feature Enhancement for Multimodal Image Fusion with Mamba

no code implementations · 15 Apr 2024 · Xinyu Xie, Yawen Cui, Chio-in Ieong, Tao Tan, Xiaozhi Zhang, Xubin Zheng, Zitong Yu

In this paper, we propose FusionMamba, a novel dynamic feature enhancement method for multimodal image fusion with Mamba.

Infrared And Visible Image Fusion

Hyperbolic Face Anti-Spoofing

no code implementations · 17 Aug 2023 · Shuangpeng Han, Rizhao Cai, Yawen Cui, Zitong Yu, Yongjian Hu, Alex Kot

To further improve generalization, we conduct hyperbolic contrastive learning for the bonafide only while relaxing the constraints on diverse spoofing attacks.

Contrastive Learning · Face Anti-Spoofing +1
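As a rough illustration of the idea in the entry above, the sketch below applies a contrastive objective computed with the Poincaré-ball geodesic distance to bonafide embeddings only, leaving spoof-vs-spoof relations unconstrained. The loss form, margin, and function names are assumptions for illustration, not the paper's formulation.

```python
# Minimal sketch, assuming embeddings are projected inside the unit Poincare ball
# (norm < 1) and the batch contains both bonafide (label 0) and spoof (label 1) samples.
import torch

def poincare_distance(u, v, eps=1e-6):
    """Geodesic distance between points in the Poincare ball."""
    sq_diff = (u - v).pow(2).sum(-1)
    denom = (1 - u.pow(2).sum(-1)).clamp_min(eps) * (1 - v.pow(2).sum(-1)).clamp_min(eps)
    return torch.acosh(1 + 2 * sq_diff / denom + eps)

def bonafide_only_contrastive(embeddings, labels, margin=1.0):
    """Pull bonafide pairs together in hyperbolic space and push bonafide away
    from spoof embeddings up to a margin; spoof-spoof structure is left free,
    mirroring 'relaxing the constraints on diverse spoofing attacks'."""
    bona = embeddings[labels == 0]
    spoof = embeddings[labels == 1]
    attract = poincare_distance(bona.unsqueeze(1), bona.unsqueeze(0)).mean()
    repel = torch.relu(margin - poincare_distance(bona.unsqueeze(1), spoof.unsqueeze(0))).mean()
    return attract + repel
```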

Visual Prompt Flexible-Modal Face Anti-Spoofing

no code implementations · 26 Jul 2023 · Zitong Yu, Rizhao Cai, Yawen Cui, Ajian Liu, Changsheng Chen

Recently, vision transformer based multimodal learning methods have been proposed to improve the robustness of face anti-spoofing (FAS) systems.

Face Anti-Spoofing

A Comprehensive Survey on Segment Anything Model for Vision and Beyond

1 code implementation · 14 May 2023 · Chunhui Zhang, Li Liu, Yawen Cui, Guanjie Huang, Weilin Lin, Yiqian Yang, Yuehong Hu

As the first comprehensive review of the segment anything task for vision and beyond, built on the SAM foundation model, this work focuses on its applications to various tasks and data types by discussing its historical development, recent progress, and profound impact on broad applications.

Rehearsal-Free Domain Continual Face Anti-Spoofing: Generalize More and Forget Less

no code implementations · ICCV 2023 · Rizhao Cai, Yawen Cui, Zhi Li, Zitong Yu, Haoliang Li, Yongjian Hu, Alex Kot

To alleviate the forgetting of previous domains without using previous data, we propose the Proxy Prototype Contrastive Regularization (PPCR) to constrain the continual learning with previous domain knowledge from the proxy prototypes.

Continual Learning · Domain Generalization +1
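The sketch below is one way to read the entry above: a prototype-based contrastive regularizer that pulls current-domain features toward class prototypes kept from the previously trained model, so no past data needs to be replayed. The temperature, prototype construction, and names are illustrative assumptions rather than the exact PPCR loss.

```python
# Minimal sketch, assuming proxy_prototypes are per-class mean features saved
# from the model trained on previous domains (no stored samples).
import torch
import torch.nn.functional as F

def proxy_prototype_regularizer(features, labels, proxy_prototypes, temperature=0.1):
    """features: (B, D) current-domain embeddings; labels: (B,) class ids;
    proxy_prototypes: (C, D) class prototypes from the previous model."""
    feats = F.normalize(features, dim=-1)
    protos = F.normalize(proxy_prototypes, dim=-1)
    logits = feats @ protos.t() / temperature     # cosine similarity to each prototype
    # Cross-entropy toward each sample's own class prototype constrains the new
    # model with previous-domain class structure, without replaying old data.
    return F.cross_entropy(logits, labels)
```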

Generalized Few-Shot Continual Learning with Contrastive Mixture of Adapters

1 code implementation · 12 Feb 2023 · Yawen Cui, Zitong Yu, Rizhao Cai, Xun Wang, Alex C. Kot, Li Liu

The goal of Few-Shot Continual Learning (FSCL) is to incrementally learn novel tasks from limited labeled samples while simultaneously preserving previously acquired capabilities, whereas current FSCL methods all target the class-incremental setting.

Continual Learning · Contrastive Learning +2

Rethinking Vision Transformer and Masked Autoencoder in Multimodal Face Anti-Spoofing

no code implementations · 11 Feb 2023 · Zitong Yu, Rizhao Cai, Yawen Cui, Xin Liu, Yongjian Hu, Alex Kot

In this paper, we investigate three key factors (i.e., inputs, pre-training, and fine-tuning) in ViT for multimodal FAS with RGB, Infrared (IR), and Depth.

Face Anti-Spoofing

PhysFormer++: Facial Video-based Physiological Measurement with SlowFast Temporal Difference Transformer

no code implementations · 7 Feb 2023 · Zitong Yu, Yuming Shen, Jingang Shi, Hengshuang Zhao, Yawen Cui, Jiehua Zhang, Philip Torr, Guoying Zhao

As key modules in PhysFormer, the temporal difference transformers first enhance the quasi-periodic rPPG features with temporal difference guided global attention, and then refine the local spatio-temporal representation against interference.
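As a loose, generic illustration of "temporal difference guided global attention" (not the actual TD-MHSA or SlowFast modules of PhysFormer++), the sketch below mixes adjacent-frame feature differences into the query of a standard temporal self-attention layer; all names and weights here are assumptions.

```python
# Minimal sketch, assuming x holds per-frame token features of shape (batch, time, dim).
import torch
import torch.nn as nn

class TemporalDifferenceAttention(nn.Module):
    def __init__(self, dim, heads=4, td_weight=0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_q = nn.Linear(dim, dim)
        self.td_weight = td_weight  # how strongly frame differences steer the query

    def forward(self, x):
        # Adjacent-frame differences (zero for the first frame) emphasize the
        # quasi-periodic temporal variations that rPPG signals live in.
        diff = torch.cat([torch.zeros_like(x[:, :1]), x[:, 1:] - x[:, :-1]], dim=1)
        q = self.to_q(x + self.td_weight * diff)  # temporal-difference guided query
        out, _ = self.attn(q, x, x)               # global attention over the time axis
        return out
```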

Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning

1 code implementation · 24 Jan 2023 · Yawen Cui, Wanxia Deng, Haoyu Chen, Li Liu

Given a model well-trained on a large-scale base dataset, Few-Shot Class-Incremental Learning (FSCIL) aims to incrementally learn novel classes from a few labeled samples while avoiding overfitting, without catastrophically forgetting previously encountered classes.

Few-Shot Class-Incremental Learning · Incremental Learning +1
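To make the FSCIL setting described above concrete, the sketch below walks through the protocol with a simple nearest-class-mean classifier over fixed features (a common baseline, not the uncertainty-aware distillation method of the paper); the data interfaces are hypothetical.

```python
# Minimal sketch of the FSCIL protocol: a large-scale base session, then a stream
# of few-shot sessions with novel classes, evaluated jointly on all seen classes.
import numpy as np

class NearestClassMean:
    def __init__(self):
        self.prototypes = {}                              # class id -> mean feature

    def update(self, feats, labels):
        for c in np.unique(labels):
            self.prototypes[int(c)] = feats[labels == c].mean(axis=0)

    def predict(self, feats):
        classes = sorted(self.prototypes)
        protos = np.stack([self.prototypes[c] for c in classes])   # (C, D)
        dists = ((feats[:, None, :] - protos[None]) ** 2).sum(-1)  # (N, C)
        return np.array(classes)[dists.argmin(axis=1)]

def fscil_protocol(model, base, few_shot_sessions, test_sets):
    model.update(*base)                                   # abundant labels in the base session
    for t, (feats, labels) in enumerate(few_shot_sessions):
        model.update(feats, labels)                       # a few labeled samples per novel class
        test_x, test_y = test_sets[t]                     # covers every class seen so far
        print(f"session {t}: acc={np.mean(model.predict(test_x) == test_y):.3f}")
```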

Rethinking Few-Shot Class-Incremental Learning with Open-Set Hypothesis in Hyperbolic Geometry

no code implementations · 20 Jul 2022 · Yawen Cui, Zitong Yu, Wei Peng, Li Liu

Few-Shot Class-Incremental Learning (FSCIL) aims to incrementally learn novel classes from a few labeled samples while simultaneously avoiding overfitting and catastrophic forgetting.

Few-Shot Class-Incremental Learning · Incremental Learning +2
