Search Results for author: Qi Fan

Found 14 papers, 8 papers with code

Few-Shot Video Object Detection

1 code implementation · 30 Apr 2021 · Qi Fan, Chi-Keung Tang, Yu-Wing Tai

We introduce Few-Shot Video Object Detection (FSVOD) with three contributions to the real-world visual learning challenge posed by our highly diverse and dynamic world: 1) FSVOD-500, a large-scale video dataset comprising 500 classes with class-balanced videos in each category for few-shot learning; 2) a novel Tube Proposal Network (TPN) to generate high-quality video tube proposals for aggregating the feature representation of the target video object, which can be highly dynamic; 3) a strategically improved Temporal Matching Network (TMN+) for matching representative query tube features with better discriminative ability, thus achieving higher diversity.

Few-Shot Video Object Detection Object +2

Self-Support Few-Shot Semantic Segmentation

1 code implementation · 23 Jul 2022 · Qi Fan, Wenjie Pei, Yu-Wing Tai, Chi-Keung Tang

Motivated by the simple Gestalt principle that pixels belonging to the same object are more similar to each other than to pixels of different objects of the same class, we propose a novel self-support matching strategy to alleviate this problem: it uses query prototypes to match query features, where the query prototypes are collected from high-confidence query predictions.

Few-Shot Semantic Segmentation Segmentation +1
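The self-support matching idea above is simple enough to sketch. Below is a minimal NumPy illustration, not the authors' implementation: the function name `self_support_match`, the threshold `tau`, and the fallback rule are all assumptions for illustration. A prototype is collected from high-confidence query predictions and then matched back against all query features.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between features a (N, C) and a vector b (C,)."""
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    return a_n @ (b / np.linalg.norm(b))

def self_support_match(query_feats, init_probs, tau=0.8):
    """Match query features against a prototype built from the query itself.

    query_feats: (N, C) per-pixel query features.
    init_probs:  (N,) initial foreground probabilities (e.g. from
                 conventional support-prototype matching).
    tau:         confidence threshold for collecting self-support pixels.
    """
    confident = init_probs > tau
    if not confident.any():          # fall back to the most confident pixel
        confident = init_probs == init_probs.max()
    # Self-support prototype: mean feature of high-confidence query pixels.
    prototype = query_feats[confident].mean(axis=0)
    # Re-match every query pixel against the query's own prototype.
    return cosine_sim(query_feats, prototype)
```

The point of the strategy is that the prototype comes from the query image itself, so it matches the query's appearance better than a prototype averaged from support images.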

Stable Segment Anything Model

1 code implementation · 27 Nov 2023 · Qi Fan, Xin Tao, Lei Ke, Mingqiao Ye, Yuan Zhang, Pengfei Wan, Zhongyuan Wang, Yu-Wing Tai, Chi-Keung Tang

Thus, our solution, termed Stable-SAM, offers several advantages: 1) improved segmentation stability for SAM across a wide range of prompt qualities, while 2) retaining SAM's powerful promptable segmentation efficiency and generality, with 3) minimal learnable parameters (0.08M) and fast adaptation (within 1 training epoch).

Segmentation

Group Collaborative Learning for Co-Salient Object Detection

1 code implementation · CVPR 2021 · Qi Fan, Deng-Ping Fan, Huazhu Fu, Chi-Keung Tang, Ling Shao, Yu-Wing Tai

We present a novel group collaborative learning framework (GCoNet) capable of detecting co-salient objects in real time (16ms) by simultaneously mining consensus representations at the group level based on two necessary criteria: 1) intra-group compactness, to better formulate the consistency among co-salient objects by capturing their inherent shared attributes using our novel group affinity module; 2) inter-group separability, to effectively suppress the influence of noisy objects on the output by introducing our new group collaborating module conditioned on the inconsistent consensus.

Co-Salient Object Detection Object +2
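The intra-group affinity idea can be sketched loosely as follows. This is a simplified NumPy illustration, not the GCoNet code; `group_consensus` and the softmax weighting are assumptions made for the sketch. Pairwise affinities within an image group are used to mix per-image features into a consensus representation that emphasizes shared attributes.

```python
import numpy as np

def group_consensus(feats):
    """feats: (G, C), one global feature per image in the group.
    Returns a (G, C) consensus-weighted mixture of the group features."""
    # Pairwise cosine affinity within the group (intra-group compactness).
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    affinity = norm @ norm.T                                   # (G, G)
    # Softmax over each row: how much each image attends to the others.
    w = np.exp(affinity)
    w /= w.sum(axis=1, keepdims=True)
    # Consensus representation: affinity-weighted mix of group features.
    return w @ feats
```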

GCoNet+: A Stronger Group Collaborative Co-Salient Object Detector

2 code implementations · 30 May 2022 · Peng Zheng, Huazhu Fu, Deng-Ping Fan, Qi Fan, Jie Qin, Yu-Wing Tai, Chi-Keung Tang, Luc van Gool

In this paper, we present a novel end-to-end group collaborative learning network, termed GCoNet+, which can effectively and efficiently (250 fps) identify co-salient objects in natural scenes.

Co-Salient Object Detection Object +2

Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Realistic Incomplete Data Scenarios

1 code implementation · 21 Sep 2023 · Qi Fan, Haolin Zuo, Rui Liu, Zheng Lian, Guanglai Gao

Multimodal emotion recognition (MER) in practical scenarios presents a significant challenge due to the presence of incomplete data, such as missing or noisy data.

Multimodal Emotion Recognition

Real-Time Influence Maximization on Dynamic Social Streams

no code implementations · 6 Feb 2017 · Yanhao Wang, Qi Fan, Yuchen Li, Kian-Lee Tan

Influence maximization (IM), which selects a set of $k$ users (called seeds) to maximize the influence spread over a social network, is a fundamental problem in a wide range of applications such as viral marketing and network monitoring.

Social and Information Networks Data Structures and Algorithms
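For context, the classic static greedy baseline for IM, which the paper's dynamic-stream setting departs from, can be sketched as below. This is a toy Monte Carlo version under the independent cascade model, not the paper's streaming algorithm; `simulate_spread`, `greedy_im`, and the dict-of-lists graph format are illustrative assumptions.

```python
import random

def simulate_spread(graph, seeds, p=0.1, trials=200, rng=None):
    """Estimate the expected spread of `seeds` under the independent
    cascade model by Monte Carlo simulation. graph: {node: [neighbors]}."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nb in graph.get(node, []):
                # Each edge fires independently with probability p.
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    frontier.append(nb)
        total += len(active)
    return total / trials

def greedy_im(graph, k, p=0.1):
    """Greedy seed selection: repeatedly add the node with the largest
    marginal gain in estimated spread until k seeds are chosen."""
    seeds = set()
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds | {n}, p))
        seeds.add(best)
    return seeds
```

The greedy approach gives the well-known (1 - 1/e) approximation for this submodular objective, but each marginal-gain evaluation requires expensive spread simulations, which is exactly what makes the dynamic social-stream setting studied here challenging.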

Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts

no code implementations · 8 Nov 2022 · Qi Fan, Mattia Segu, Yu-Wing Tai, Fisher Yu, Chi-Keung Tang, Bernt Schiele, Dengxin Dai

Thus, we propose to perturb the channel statistics of source-domain features to synthesize various latent styles, so that the trained deep model can perceive diverse potential domains and generalize well even without observing target-domain data during training.

Autonomous Driving Domain Generalization
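The channel-statistics perturbation described above can be sketched in a few lines. This is a simplified NumPy illustration of the general idea, not the paper's exact formulation; the function name, the Gaussian perturbation scale, and the `1 + noise` scaling are assumptions.

```python
import numpy as np

def normalization_perturbation(x, std=0.5, rng=None):
    """Perturb the per-channel statistics of features x (N, C, H, W)
    to synthesize novel latent styles (applied at training time only)."""
    rng = rng or np.random.default_rng(0)
    mu = x.mean(axis=(2, 3), keepdims=True)            # (N, C, 1, 1)
    sigma = x.std(axis=(2, 3), keepdims=True) + 1e-6
    # Randomly rescale the channel statistics around 1.
    alpha = 1.0 + rng.normal(0.0, std, size=mu.shape)
    beta = 1.0 + rng.normal(0.0, std, size=mu.shape)
    # Normalize the features, then re-style them with perturbed statistics.
    return (x - mu) / sigma * (sigma * alpha) + mu * beta
```

The intuition is that channel mean and standard deviation largely encode style; jittering them exposes the model to many synthetic styles while leaving content intact.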

UniBoost: Unsupervised Unimodal Pre-training for Boosting Zero-shot Vision-Language Tasks

no code implementations · 7 Jun 2023 · Yanan Sun, Zihan Zhong, Qi Fan, Chi-Keung Tang, Yu-Wing Tai

Our thorough studies validate that models pre-trained in this way can learn rich representations of both modalities, improving their ability to understand how images and text relate to each other.

Semantic Segmentation

Selective Feature Adapter for Dense Vision Transformers

no code implementations · 3 Oct 2023 · Xueqing Deng, Qi Fan, Xiaojie Jin, Linjie Yang, Peng Wang

Specifically, SFA consists of external adapters and internal adapters, which operate sequentially over a transformer model.

Depth Estimation
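The abstract does not specify the adapters' internals, so the following is only a generic bottleneck-adapter sketch of the kind commonly inserted into transformer layers, not the SFA design; `BottleneckAdapter` and the zero-initialized up-projection are assumptions.

```python
import numpy as np

class BottleneckAdapter:
    """Minimal adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, dim, bottleneck, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        # Zero-init the up-projection so the adapter starts as an
        # identity mapping and cannot disturb the pretrained model.
        self.w_up = np.zeros((bottleneck, dim))

    def __call__(self, x):                       # x: (tokens, dim)
        h = np.maximum(x @ self.w_down, 0.0)     # ReLU bottleneck
        return x + h @ self.w_up                 # residual connection
```

Only the small `w_down`/`w_up` matrices are trained, which is what makes adapter-style tuning parameter-efficient for dense vision transformers.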

DARNet: Bridging Domain Gaps in Cross-Domain Few-Shot Segmentation with Dynamic Adaptation

no code implementations · 8 Dec 2023 · Haoran Fan, Qi Fan, Maurice Pagnucco, Yang Song

Moreover, recognizing the variability across target domains, an Adaptive Refine Self-Matching (ARSM) method is also proposed to adjust the matching threshold and dynamically refine the prediction result with the self-matching method, enhancing accuracy.

Cross-Domain Few-Shot Segmentation +2
