Search Results for author: Qiangqiang Wu

Found 11 papers, 2 papers with code

Learning Tracking Representations from Single Point Annotations

no code implementations 15 Apr 2024 Qiangqiang Wu, Antoni B. Chan

In this paper, we propose to learn tracking representations from single point annotations (i.e., 4.5x faster to annotate than the traditional bounding box) in a weakly supervised manner.

Contrastive Learning Visual Tracking

Robust Unsupervised Crowd Counting and Localization with Adaptive Resolution SAM

no code implementations 27 Feb 2024 Jia Wan, Qiangqiang Wu, Wei Lin, Antoni B. Chan

Existing crowd counting models require extensive training data, which is time-consuming to annotate.

Crowd Counting

Scalable Video Object Segmentation with Simplified Framework

no code implementations ICCV 2023 Qiangqiang Wu, Tianyu Yang, Wei Wu, Antoni Chan

The current popular methods for video object segmentation (VOS) implement feature matching through several hand-crafted modules that separately perform feature extraction and matching.

Object Semantic Segmentation +2
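The abstract above contrasts the proposed simplified framework with pipelines that extract features and then match them in separate modules. As a rough illustration of the matching-and-propagation step such pipelines perform, here is a minimal numpy sketch; the cosine-similarity matching, shapes, and names are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hedged sketch of the separate extract-then-match VOS pipeline the abstract
# refers to: match per-location query features against memory-frame features
# by cosine similarity and propagate the memory labels. Illustrative only.
def match_and_propagate(query_feats, memory_feats, memory_labels):
    """query_feats: (Nq, C); memory_feats: (Nm, C); memory_labels: (Nm,).
    Returns one label per query location via its nearest memory feature."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    m = memory_feats / np.linalg.norm(memory_feats, axis=1, keepdims=True)
    sim = q @ m.T                # (Nq, Nm) cosine similarities
    nearest = sim.argmax(axis=1)  # index of best-matching memory location
    return memory_labels[nearest]

memory_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
memory_labels = np.array([0, 1])            # e.g., background / object
query_feats = np.array([[0.9, 0.1], [0.2, 0.8]])
pred = match_and_propagate(query_feats, memory_feats, memory_labels)
```

A transformer-based simplified framework, by contrast, would fold this matching into attention layers rather than a hand-crafted module.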

DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks

1 code implementation CVPR 2023 Qiangqiang Wu, Tianyu Yang, Ziquan Liu, Baoyuan Wu, Ying Shan, Antoni B. Chan

However, we find that this simple baseline heavily relies on spatial cues while ignoring temporal relations for frame reconstruction, thus leading to sub-optimal temporal matching representations for VOT and VOS.

Ranked #1 on Visual Object Tracking on TrackingNet (AUC metric)

Semantic Segmentation Video Object Segmentation +2
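The spatial-attention dropout idea described above can be sketched as follows; this is not the authors' DropMAE code. The split of attention keys into within-frame versus cross-frame, and the drop ratio, are assumptions made for illustration: suppressing part of the within-frame attention forces reconstruction to lean on cross-frame (temporal) cues.

```python
import numpy as np

# Hedged sketch of spatial-attention dropout for one query token:
# mask a random subset of within-frame logits to -inf before softmax,
# shifting probability mass toward cross-frame keys. Illustrative only.
def spatial_attention_dropout(logits, same_frame_mask, drop_ratio, rng):
    """logits: (Nk,) attention logits; same_frame_mask: (Nk,) bool marking
    keys from the query's own frame. Returns attention probabilities."""
    logits = logits.copy()
    idx = np.flatnonzero(same_frame_mask)
    n_drop = int(drop_ratio * idx.size)
    dropped = rng.choice(idx, size=n_drop, replace=False)
    logits[dropped] = -np.inf          # dropped keys get zero attention
    z = logits - logits.max()          # numerically stable softmax
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 1.0, 0.5])           # one query, four keys
same_frame = np.array([True, True, False, False])  # first two are in-frame
attn = spatial_attention_dropout(logits, same_frame, drop_ratio=0.5, rng=rng)
```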

A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds

1 code implementation 8 Mar 2022 Yan Xia, Qiangqiang Wu, Wei Li, Antoni B. Chan, Uwe Stilla

Recent works on 3D single object tracking treat the task as a target-specific 3D detection task, where an off-the-shelf 3D detector is commonly employed for tracking.

3D Single Object Tracking motion prediction +1

Progressive Unsupervised Learning for Visual Object Tracking

no code implementations CVPR 2021 Qiangqiang Wu, Jia Wan, Antoni B. Chan

In this paper, we propose a progressive unsupervised learning (PUL) framework, which entirely removes the need for annotated training videos in visual tracking.

Contrastive Learning Object +2

End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking

no code implementations 14 Feb 2020 Haosheng Chen, David Suter, Qiangqiang Wu, Hanzi Wang

We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform an end-to-end 5-DoF object motion regression.

Motion Estimation Object +2

Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking

no code implementations 13 Feb 2020 Haosheng Chen, Qiangqiang Wu, Yanjie Liang, Xinbo Gao, Hanzi Wang

To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm, which asynchronously and effectively warps the spatio-temporal information of asynchronous retinal events to a sequence of ATSLTD frames with clear object contours.

Object Object Tracking
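The time-surface-with-linear-time-decay conversion described above can be sketched roughly as follows. This is a hypothetical illustration, not the ATSLTD implementation: the event layout (x, y, t), the decay window `tau`, and the max-pooling of repeated events are assumptions.

```python
import numpy as np

# Hedged sketch of a time surface with linear time decay: render sparse
# retinal events into a frame where recent events are bright and older
# events fade linearly to zero over a window of length tau. Illustrative only.
def linear_decay_time_surface(events, shape, t_ref, tau):
    """events: iterable of (x, y, t); shape: (H, W); t_ref: frame timestamp.
    Pixel value = 1 - age/tau for the most recent event within the window."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, t in events:
        age = t_ref - t
        if 0.0 <= age <= tau:
            # Linear decay: weight 1 at t_ref, fading to 0 at t_ref - tau.
            frame[y, x] = max(frame[y, x], 1.0 - age / tau)
    return frame

events = [(2, 3, 0.90), (2, 3, 0.50), (5, 1, 0.99)]
surface = linear_decay_time_surface(events, (8, 8), t_ref=1.0, tau=0.5)
```

Frames rendered this way preserve object contours from recent events while discarding stale ones, which is what makes them usable by a conventional frame-based detector.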

Hallucinated Adversarial Learning for Robust Visual Tracking

no code implementations 17 Jun 2019 Qiangqiang Wu, Zhihui Chen, Lin Cheng, Yan Yan, Bo Li, Hanzi Wang

Incorporating such an ability to hallucinate diverse new samples of the tracked instance can help the trackers alleviate the over-fitting problem in the low-data tracking regime.

Visual Tracking

DSNet: Deep and Shallow Feature Learning for Efficient Visual Tracking

no code implementations 6 Nov 2018 Qiangqiang Wu, Yan Yan, Yanjie Liang, Yi Liu, Hanzi Wang

In recent years, Discriminative Correlation Filter (DCF) based tracking methods have achieved great success in visual tracking.

Image Classification Visual Tracking
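For readers unfamiliar with the DCF family the abstract refers to, the core idea can be sketched as a single-channel, MOSSE-style correlation filter learned in the Fourier domain. This is a generic illustration of DCF tracking, not DSNet itself; the regularization value and target shape are assumptions.

```python
import numpy as np

# Hedged single-channel correlation-filter sketch (MOSSE-style), included
# only to illustrate the DCF technique the abstract mentions.
def train_filter(patch, target_response, lam=1e-2):
    """Solve for a filter H in the Fourier domain so that correlating the
    patch with H reproduces the desired target response (e.g., a peak)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    # Closed-form ridge-regression solution per frequency bin.
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(patch, H):
    """Apply the learned filter; the response peak localizes the target."""
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * H))

rng = np.random.default_rng(1)
patch = rng.standard_normal((16, 16))
target = np.zeros((16, 16))
target[3, 4] = 1.0                    # desired response: peak at (3, 4)
H = train_filter(patch, target)
resp = respond(patch, H)
peak = np.unravel_index(resp.argmax(), resp.shape)
```

The per-frequency closed form is what makes DCF trackers fast: training and detection are a handful of FFTs rather than an iterative optimization.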
