Search Results for author: Huabin Liu

Found 6 papers, 3 papers with code

Few-shot Action Recognition via Intra- and Inter-Video Information Maximization

no code implementations • 10 May 2023 • Huabin Liu, Weiyao Lin, Tieyuan Chen, Yuxi Li, Shuyuan Li, John See

The alignment model performs temporal and spatial action alignment sequentially at the feature level, leading to more precise measurements of inter-video similarity.
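As a rough illustration of sequential feature-level alignment (a minimal sketch under assumed design choices, not the paper's model: the greedy frame matching, the attention-based spatial step, and the function names are all hypothetical), the snippet below aligns a query clip to a support clip first in time, then in space, before scoring inter-video similarity.

# Minimal sketch: temporal-then-spatial feature alignment before similarity.
import torch
import torch.nn.functional as F

def temporal_align(query, support):
    """Greedy temporal alignment: pick, for each query frame, the most
    similar support frame by cosine similarity. Shapes: (T, C, H, W)."""
    q = F.normalize(query.flatten(1), dim=1)
    s = F.normalize(support.flatten(1), dim=1)
    idx = (q @ s.t()).argmax(dim=1)          # best support frame per query frame
    return support[idx]

def spatial_align(query, support):
    """Soft spatial alignment: attend each query location over support locations."""
    Tq, C, H, W = query.shape
    q = query.flatten(2).transpose(1, 2)     # (Tq, H*W, C)
    s = support.flatten(2).transpose(1, 2)
    attn = torch.softmax(q @ s.transpose(1, 2) / C ** 0.5, dim=-1)
    return (attn @ s).transpose(1, 2).reshape(Tq, C, H, W)

def video_similarity(query, support):
    support = temporal_align(query, support)  # temporal alignment first
    support = spatial_align(query, support)   # then spatial alignment
    return F.cosine_similarity(query.flatten(1), support.flatten(1), dim=1).mean()

q = torch.randn(8, 64, 7, 7)                  # toy query clip features
s = torch.randn(10, 64, 7, 7)                 # toy support clip features
print(video_similarity(q, s))

In practice such alignment modules would be trained jointly with the few-shot classifier; the sketch only illustrates the temporal-then-spatial ordering mentioned in the abstract.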

Few-Shot Action Recognition +2

Task-adaptive Spatial-Temporal Video Sampler for Few-shot Action Recognition

1 code implementation • 20 Jul 2022 • Huabin Liu, Weixian Lv, John See, Weiyao Lin

In this paper, we propose a novel video frame sampler for few-shot action recognition to address this issue, where task-specific spatial-temporal frame sampling is achieved via a temporal selector (TS) and a spatial amplifier (SA).
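A minimal sketch of the two components named in the abstract, assuming a top-k temporal selector (TS) and a saliency-based spatial amplifier (SA); the class TaskAdaptiveSampler and its layers are hypothetical stand-ins, not the repository's implementation.

# Sketch: keep the k most informative frames, then emphasise salient regions.
import torch
import torch.nn as nn

class TaskAdaptiveSampler(nn.Module):
    def __init__(self, channels=3, k=8):
        super().__init__()
        self.k = k
        self.frame_scorer = nn.Sequential(                     # TS: one score per frame
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1))
        self.saliency = nn.Conv2d(channels, 1, 3, padding=1)   # SA: per-pixel saliency

    def forward(self, video):                                  # video: (T, C, H, W)
        scores = self.frame_scorer(video).squeeze(-1)          # (T,)
        keep = scores.topk(min(self.k, video.size(0))).indices.sort().values
        frames = video[keep]                                   # temporal selection
        sal = torch.sigmoid(self.saliency(frames))             # (k, 1, H, W)
        return frames * (1.0 + sal)                            # amplify salient regions

sampler = TaskAdaptiveSampler()
clip = torch.randn(32, 3, 112, 112)                            # toy 32-frame clip
print(sampler(clip).shape)                                     # torch.Size([8, 3, 112, 112])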

Few-Shot Action Recognition

Speed Up Object Detection on Gigapixel-Level Images With Patch Arrangement

no code implementations • CVPR 2022 • Jiahao Fan, Huabin Liu, Wenjie Yang, John See, Aixin Zhang, Weiyao Lin

With the emergence of super high-resolution (e.g., gigapixel-level) images, performing efficient object detection on such images has become an important issue.
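A simplified sketch of the patch-arrangement idea (illustrative only; the paper's strategy for selecting and packing patches is more involved): tile the huge image, pack candidate patches onto one small canvas for a single detector pass, then map detections back to global coordinates.

# Sketch: pack image tiles onto one canvas and convert boxes back afterwards.
import numpy as np

def arrange_patches(image, patch=512, grid=4):
    """Cut image (H, W, 3) into patch x patch tiles, pack the first grid*grid
    tiles onto one canvas, and record (global_y, global_x, canvas_y, canvas_x)."""
    H, W = image.shape[:2]
    tiles, offsets = [], []
    for gy in range(0, H - patch + 1, patch):
        for gx in range(0, W - patch + 1, patch):
            tiles.append(image[gy:gy + patch, gx:gx + patch])
            offsets.append((gy, gx))
    canvas = np.zeros((grid * patch, grid * patch, 3), image.dtype)
    placed = []
    for i, (tile, (gy, gx)) in enumerate(zip(tiles[:grid * grid], offsets)):
        cy, cx = divmod(i, grid)
        canvas[cy * patch:(cy + 1) * patch, cx * patch:(cx + 1) * patch] = tile
        placed.append((gy, gx, cy * patch, cx * patch))
    return canvas, placed

def to_global(box, placement):
    """Map an (x1, y1, x2, y2) box detected on the canvas back to image coords."""
    gy, gx, cy, cx = placement
    x1, y1, x2, y2 = box
    return (x1 - cx + gx, y1 - cy + gy, x2 - cx + gx, y2 - cy + gy)

img = np.zeros((4096, 4096, 3), np.uint8)     # stand-in for a gigapixel image
canvas, placed = arrange_patches(img)
print(canvas.shape, len(placed))              # (2048, 2048, 3) 16

The gain comes from running the detector once on the packed canvas instead of once per tile; the detected boxes are then translated back with to_global.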

Object Detection • Real-Time Object Detection

TA2N: Two-Stage Action Alignment Network for Few-shot Action Recognition

1 code implementation • 10 Jul 2021 • Shuyuan Li, Huabin Liu, Rui Qian, Yuxi Li, John See, Mengjuan Fei, Xiaoyuan Yu, Weiyao Lin

The first stage locates the action by learning a temporal affine transform, which warps each video feature to its action duration while dismissing action-irrelevant features (e.g., background).
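A minimal sketch of a learned temporal affine warp, assuming a 1-D spatial-transformer-style resampling over the time axis; the class TemporalAffineWarp and its (scale, shift) parameterization are hypothetical, not the released TA2N code.

# Sketch: predict a temporal (scale, shift) and resample features inside it.
import torch
import torch.nn as nn

class TemporalAffineWarp(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.loc = nn.Linear(dim, 2)                 # predicts (scale, shift)

    def forward(self, feats):                        # feats: (T, D)
        T, D = feats.shape
        scale, shift = torch.sigmoid(self.loc(feats.mean(0)))       # both in (0, 1)
        # sample T new time steps inside the predicted action window
        t = shift * (1 - scale) + scale * torch.linspace(0, 1, T)   # (T,) in [0, 1]
        pos = t * (T - 1)
        lo = pos.floor().long().clamp(0, T - 1)
        hi = (lo + 1).clamp(0, T - 1)
        w = (pos - lo.float()).unsqueeze(1)
        return (1 - w) * feats[lo] + w * feats[hi]   # linearly interpolated warp

warp = TemporalAffineWarp(dim=256)
x = torch.randn(16, 256)                             # toy per-frame features
print(warp(x).shape)                                 # torch.Size([16, 256])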

Few-Shot Action Recognition +2

Human in Events: A Large-Scale Benchmark for Human-centric Video Analysis in Complex Events

no code implementations • 9 May 2020 • Weiyao Lin, Huabin Liu, Shizhan Liu, Yuxi Li, Rui Qian, Tao Wang, Ning Xu, Hongkai Xiong, Guo-Jun Qi, Nicu Sebe

To this end, we present a new large-scale dataset with comprehensive annotations, named Human-in-Events or HiEve (Human-centric video analysis in complex Events), for understanding human motions, poses, and actions in a variety of realistic events, especially in crowded and complex events.

Action Recognition • Pose Estimation
