Search Results for author: Chunhui Liu

Found 10 papers, 2 papers with code

PKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human Action Understanding

no code implementations22 Mar 2017 Chunhui Liu, Yueyu Hu, Yanghao Li, Sijie Song, Jiaying Liu

Although many 3D human activity benchmarks have been proposed, most existing action datasets focus on action recognition for segmented videos.

Action Detection Action Recognition +2

Patch Correspondences for Interpreting Pixel-level CNNs

no code implementations29 Nov 2017 Victor Fragoso, Chunhui Liu, Aayush Bansal, Deva Ramanan

We present compositional nearest neighbors (CompNN), a simple approach to visually interpreting distributed representations learned by a convolutional neural network (CNN) for pixel-level tasks (e.g., image synthesis and segmentation).

Image-to-Image Translation Segmentation +2
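The core retrieval step behind CompNN can be illustrated with a toy sketch: explain each spatial position of a pixel-level output by finding the nearest-neighbor feature in a bank of training features and compositing the matches. All names, shapes, and the distance metric below are illustrative assumptions, not the paper's API.

```python
import numpy as np

def compnn_explain(query, bank):
    """Toy nearest-neighbor compositing in the spirit of CompNN.

    query: (P, d) per-position features of a test image (hypothetical shape)
    bank:  (M, d) features harvested from training images
    Returns the composited "explanation" features and the matched indices.
    """
    # squared Euclidean distance from every query feature to every bank feature
    d2 = ((query[:, None, :] - bank[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)       # nearest training feature per position
    return bank[idx], idx

rng = np.random.default_rng(0)
bank = rng.standard_normal((50, 8))
# queries that are tiny perturbations of known bank rows, so the
# expected matches are rows 3, 7, 7, 20
query = bank[[3, 7, 7, 20]] + 1e-3 * rng.standard_normal((4, 8))
recon, idx = compnn_explain(query, bank)
```

In the paper the bank would hold CNN feature embeddings of training patches and the composited patches form a human-interpretable reconstruction of the network's output.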

Triplet Online Instance Matching Loss for Person Re-identification

no code implementations24 Feb 2020 Ye Li, Guangqiang Yin, Chunhui Liu, Xiaoyu Yang, Zhiguo Wang

Triplet loss requires complicated and fussy batch construction and converges slowly.

Person Re-Identification
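For context, the standard triplet loss the snippet refers to pulls an anchor embedding toward a same-identity positive and pushes it away from a different-identity negative by at least a margin. The sketch below is the generic formulation, not the paper's proposed TOIM loss, which is designed precisely to avoid this explicit (anchor, positive, negative) batch construction.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Generic margin-based triplet loss over batches of embeddings.

    Each argument is an (N, d) array; margin is a hyperparameter
    (0.3 is a common choice, not a value from the paper).
    """
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # same identity
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # different identity
    # hinge: only penalize when the positive is not closer by >= margin
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

a = np.zeros((4, 16))          # toy anchors
p = np.zeros((4, 16))          # positives identical to anchors
n = np.ones((4, 16))           # negatives far away -> loss is 0
loss = triplet_loss(a, p, n)   # 0.0: the margin is already satisfied
```

Sampling informative triplets for every batch is where the "complicated and fussy" construction cost comes from.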

NUTA: Non-uniform Temporal Aggregation for Action Recognition

no code implementations15 Dec 2020 Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Hao Chen, Joseph Tighe

In the world of action recognition research, one primary focus has been on how to construct and train networks to model the spatial-temporal volume of an input video.

Action Recognition

Selective Feature Compression for Efficient Activity Recognition Inference

no code implementations ICCV 2021 Chunhui Liu, Xinyu Li, Hao Chen, Davide Modolo, Joseph Tighe

In this work, we focus on improving the inference efficiency of current action recognition backbones on trimmed videos, and illustrate that an action model can still cover the informative region while dropping non-informative features.

Action Recognition Feature Compression

VidTr: Video Transformer Without Convolutions

no code implementations ICCV 2021 Yanyi Zhang, Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Biagio Brattoli, Hao Chen, Ivan Marsic, Joseph Tighe

We first introduce the vanilla video transformer and show that the transformer module can perform spatio-temporal modeling from raw pixels, but with heavy memory usage.

Action Classification Action Recognition +1
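The memory cost mentioned in the snippet comes from joint spatio-temporal self-attention: every patch token in every frame attends to all T*N tokens, so the attention matrix grows as (T*N)^2. A minimal single-head sketch (identity projections, toy shapes; not the VidTr architecture itself, which restructures this attention):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_spacetime_attention(tokens):
    """Vanilla joint spatio-temporal self-attention over patch tokens.

    tokens: (T, N, d) = frames x patches-per-frame x embed dim.
    Flattening to T*N tokens yields a (T*N, T*N) attention matrix,
    which is the source of the heavy memory usage.
    """
    T, N, d = tokens.shape
    x = tokens.reshape(T * N, d)
    attn = softmax(x @ x.T / np.sqrt(d))  # (T*N, T*N) weights, rows sum to 1
    return (attn @ x).reshape(T, N, d)

video = np.random.default_rng(1).standard_normal((8, 16, 32))  # 8 frames, 16 patches
out = joint_spacetime_attention(video)
```

Factorizing or subsampling this attention along the temporal axis is the usual way video transformers reduce the quadratic cost.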

SSCAP: Self-supervised Co-occurrence Action Parsing for Unsupervised Temporal Action Segmentation

no code implementations29 May 2021 Zhe Wang, Hao Chen, Xinyu Li, Chunhui Liu, Yuanjun Xiong, Joseph Tighe, Charless Fowlkes

However, it is quite expensive to annotate every frame in a large corpus of videos to construct a comprehensive supervised training dataset.

Action Parsing Action Segmentation +2

LaT: Latent Translation with Cycle-Consistency for Video-Text Retrieval

no code implementations11 Jul 2022 Jinbin Bai, Chunhui Liu, Feiyue Ni, Haofan Wang, Mengying Hu, Xiaofeng Guo, Lele Cheng

To overcome the above issue, we present a novel mechanism for learning the translation relationship from a source modality space $\mathcal{S}$ to a target modality space $\mathcal{T}$ without the need for a joint latent space, which bridges the gap between visual and textual domains.

Representation Learning Retrieval +4
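The cycle-consistency idea in the snippet can be sketched in miniature: translate a source embedding into the target space and back, and penalize the distance to the starting point. The linear translators and toy dimensions below are illustrative assumptions; LaT learns neural translators between real video and text embedding spaces.

```python
import numpy as np

rng = np.random.default_rng(0)
d_s = d_t = 4                               # toy dims for spaces S and T
W_st = rng.standard_normal((d_s, d_t))      # hypothetical S -> T translator
W_ts = np.linalg.inv(W_st)                  # T -> S translator (exact inverse here)

def cycle_consistency_loss(x_s):
    """Mean squared error of the round trip S -> T -> S.

    A learned pair of translators is trained to drive this toward zero,
    which enforces that translation preserves the source content.
    """
    x_back = (x_s @ W_st) @ W_ts
    return float(np.mean((x_back - x_s) ** 2))

x = rng.standard_normal((8, d_s))
loss = cycle_consistency_loss(x)  # ~0 since W_ts inverts W_st exactly
```

In training, neither translator is an exact inverse, so the cycle term acts as a regularizer alongside the retrieval objective.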
