1 code implementation • CVPR 2022 • Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie
More notably, our SDMP is the first method that successfully leverages data mixing to improve (rather than hurt) the performance of Vision Transformers in the self-supervised setting.
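As a hedged illustration of the underlying data-mixing idea (generic mixup-style mixing of a self-supervised batch, not the SDMP method itself), consider the following minimal sketch:

```python
# Minimal mixup-style data mixing for a self-supervised batch (illustrative only;
# this is NOT the SDMP algorithm, just the generic data-mixing idea it builds on).
import torch

def mix_batch(views, alpha=1.0):
    """Mix each image with a randomly paired image from the same batch.

    views: tensor of shape (B, C, H, W) holding one augmented view per sample.
    Returns the mixed images, the permutation used, and the mixing coefficient,
    so a downstream self-supervised loss can weight its targets by lam / (1 - lam).
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(views.size(0))
    mixed = lam * views + (1.0 - lam) * views[perm]
    return mixed, perm, lam

if __name__ == "__main__":
    batch = torch.randn(8, 3, 224, 224)
    mixed, perm, lam = mix_batch(batch)
    print(mixed.shape, perm.tolist(), round(lam, 3))
```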
no code implementations • 29 May 2022 • Zheng Xiong, Liangyu Chai, Wenxi Liu, Yongtuo Liu, Sucheng Ren, Shengfeng He
To enable training under this new setting, we convert the crowd count regression problem to a ranking potential prediction problem.
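As a rough sketch of how counting can be posed as a ranking problem, the snippet below uses the common nested-crop ordering constraint (a crop contained in a larger region cannot contain more people); the paper's ranking potential formulation may differ, and the model interface here is assumed.

```python
# Illustrative sketch: a ranking constraint for crowd counting. A center crop of
# an image should not receive a higher count score than the full image.
# This is a generic ranking loss, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

ranking_loss = nn.MarginRankingLoss(margin=0.0)

def nested_crop_ranking_loss(model, images):
    """images: (B, C, H, W); model returns one scalar score per image."""
    _, _, h, w = images.shape
    crops = images[:, :, h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    crops = F.interpolate(crops, size=(h, w), mode="bilinear", align_corners=False)
    score_full = model(images).flatten()
    score_crop = model(crops).flatten()
    target = torch.ones_like(score_full)  # target=1 enforces score_full >= score_crop
    return ranking_loss(score_full, score_crop, target)
```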
1 code implementation • CVPR 2022 • Yangyang Xu, Bailin Deng, Junle Wang, Yanqing Jing, Jia Pan, Shengfeng He
Although previous research can leverage generative priors to produce high-resolution results, their quality can suffer from the entangled semantics of the latent space.
1 code implementation • CVPR 2022 • Zhixuan Zhong, Liangyu Chai, Yang Zhou, Bailin Deng, Jia Pan, Shengfeng He
This paper presents a Generative prior ReciprocAted Invertible rescaling Network (GRAIN) for generating faithful high-resolution (HR) images from low-resolution (LR) invertible images with an extreme upscaling factor (64x).
1 code implementation • CVPR 2022 • Sucheng Ren, Daquan Zhou, Shengfeng He, Jiashi Feng, Xinchao Wang
This novel merging scheme enables the self-attention to learn relationships between objects of different sizes while simultaneously reducing the number of tokens and the computational cost.
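A simplified sketch of the general token-reduction idea (spatially pooling the tokens used as keys and values before attention); the paper's multi-scale merging scheme is more elaborate, and the module below is an assumption for illustration only.

```python
# Illustrative sketch: reducing attention cost by pooling the tokens used as
# keys/values, so the query length stays N while keys/values shrink.
import torch
import torch.nn as nn

class PooledKVAttention(nn.Module):
    def __init__(self, dim, num_heads=8, pool_ratio=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pool = nn.AvgPool2d(pool_ratio, stride=pool_ratio)

    def forward(self, x, hw):
        """x: (B, N, C) tokens arranged on an H x W grid given by hw=(H, W)."""
        b, n, c = x.shape
        h, w = hw
        grid = x.transpose(1, 2).reshape(b, c, h, w)
        merged = self.pool(grid).flatten(2).transpose(1, 2)  # fewer key/value tokens
        out, _ = self.attn(query=x, key=merged, value=merged)
        return out

# Keys/values are reduced by pool_ratio**2, so attention cost drops from
# O(N^2) toward O(N^2 / pool_ratio^2).
```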
1 code implementation • ICCV 2021 • Qi Li, Weixiang Yang, Wenxi Liu, Yuanlong Yu, Shengfeng He
Ultra-high resolution image segmentation has attracted increasing interest in recent years due to its practical applications.
no code implementations • 6 Aug 2021 • Yongtuo Liu, Dan Xu, Sucheng Ren, Hanjie Wu, Hongmin Cai, Shengfeng He
We further leverage the derived segments to propose a crowd-aware fine-grained domain adaptation framework for crowd counting, which consists of two novel adaptation modules, i.e., Crowd Region Transfer (CRT) and Crowd Density Alignment (CDA).
no code implementations • 6 Aug 2021 • Yongtuo Liu, Sucheng Ren, Liangyu Chai, Hanjie Wu, Jing Qin, Dan Xu, Shengfeng He
In this way, we can transfer the original spatial labeling redundancy caused by individual similarities to effective supervision signals on the unlabeled regions.
1 code implementation • 5 Aug 2021 • Sucheng Ren, Qiang Wen, Nanxuan Zhao, Guoqiang Han, Shengfeng He
In this paper, we introduce a new attention-based encoder, vision transformer, into salient object detection to ensure the globalization of the representations from shallow to deep layers.
2 code implementations • ICCV 2021 • Yangyang Xu, Yong Du, Wenpeng Xiao, Xuemiao Xu, Shengfeng He
This inborn property is used for two unique purposes: 1) regularizing the joint inversion process, such that each inverted code is semantically accessible from the others and fastened in an editable domain; 2) enforcing inter-image coherence, such that the fidelity of each inverted code can be maximized with the complement of the other images.
no code implementations • CVPR 2022 • Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao
Transformers have recently been adapted from the natural language processing community as a promising substitute for convolution-based neural networks in visual learning tasks.
1 code implementation • CVPR 2021 • Sucheng Ren, Wenxi Liu, Yongtuo Liu, Haoxin Chen, Guoqiang Han, Shengfeng He
Additionally, to exclude information about moving background objects from the motion features, our transformation module reciprocally transforms the appearance features to enhance the motion features, so as to focus on moving objects with salient appearance while removing co-moving outliers.
Ranked #3 on Unsupervised Video Object Segmentation on DAVIS 2016 (using extra training data)
1 code implementation • CVPR 2021 • Haoxin Chen, Hanjie Wu, Nanxuan Zhao, Sucheng Ren, Shengfeng He
The key is to model the relationship between the query videos and the support images for propagating the object information.
no code implementations • CVPR 2021 • Sucheng Ren, Yong Du, Jianming Lv, Guoqiang Han, Shengfeng He
To these ends, we introduce a trainable "master" network which ingests both audio signals and silent lip videos instead of a pretrained teacher.
1 code implementation • CVPR 2021 • Han Deng, Chu Han, Hongmin Cai, Guoqiang Han, Shengfeng He
In this paper, we take a different perspective to break down the makeup transfer problem into a two-step extraction-assignment process.
1 code implementation • CVPR 2021 • Weixiang Yang, Qi Li, Wenxi Liu, Yuanlong Yu, Yuexin Ma, Shengfeng He, Jia Pan
Furthermore, our model runs at 35 FPS on a single GPU, making it efficient and applicable to real-time panorama HD map reconstruction.
Tasks: Autonomous Driving, Monocular Cross-View Road Scene Parsing (Road)
1 code implementation • CVPR 2021 • Huiting Yang, Liangyu Chai, Qiang Wen, Shuang Zhao, Zixun Sun, Shengfeng He
In this way, arbitrary attributes can be edited by collecting positive data only, and the proposed method learns a controllable representation enabling manipulation of non-binary attributes like anime styles and facial characteristics.
1 code implementation • 19 Apr 2021 • Shengfeng He, Bing Peng, Junyu Dong, Yong Du
Shadow removal is an important yet challenging task in image processing and computer vision.
no code implementations • 31 Mar 2021 • Xin Yang, Yu Qiao, Shaozhe Chen, Shengfeng He, BaoCai Yin, Qiang Zhang, Xiaopeng Wei, Rynson W. H. Lau
Image matting is an ill-posed problem that usually requires additional user input, such as trimaps or scribbles.
2 code implementations • 31 Aug 2020 • Jiangliu Wang, Jianbo Jiao, Linchao Bao, Shengfeng He, Wei Liu, Yun-hui Liu
Specifically, given an unlabeled video clip, we compute a series of spatio-temporal statistical summaries, such as the spatial location and dominant direction of the largest motion, the spatial location and dominant color of the largest color diversity along the temporal axis, etc.
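A rough NumPy sketch of computing block-level statistics of this kind from a clip; the block grid and the exact motion and color-diversity measures here are illustrative assumptions, not the paper's definitions.

```python
# Rough sketch of clip-level spatio-temporal statistics: the block with the
# largest motion (mean absolute frame difference) and the block with the
# largest temporal color diversity, plus that block's dominant (mean) color.
import numpy as np

def clip_statistics(clip, grid=4):
    """clip: float array of shape (T, H, W, 3) with values in [0, 1]."""
    t, h, w, _ = clip.shape
    bh, bw = h // grid, w // grid
    motion = np.abs(np.diff(clip, axis=0)).mean(axis=(0, 3))   # (H, W)
    diversity = clip.std(axis=0).mean(axis=2)                   # (H, W)

    def block_argmax(score_map):
        blocks = score_map[:grid * bh, :grid * bw].reshape(grid, bh, grid, bw)
        scores = blocks.mean(axis=(1, 3))
        return np.unravel_index(scores.argmax(), scores.shape)

    mi, mj = block_argmax(motion)       # location of largest motion
    ci, cj = block_argmax(diversity)    # location of largest color diversity
    patch = clip[:, ci * bh:(ci + 1) * bh, cj * bw:(cj + 1) * bw]
    dominant_color = patch.mean(axis=(0, 1, 2))                 # average RGB
    return (mi, mj), (ci, cj), dominant_color
```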
no code implementations • ECCV 2020 • Sucheng Ren, Chu Han, Xin Yang, Guoqiang Han, Shengfeng He
In this paper, we propose a simple yet effective approach, named Triple Excitation Network, to reinforce the training of video salient object detection (VSOD) from three aspects: spatial, temporal, and online excitations.
no code implementations • 9 Jun 2020 • Yuzhen Niu, Weifeng Shi, Wenxi Liu, Shengfeng He, Jia Pan, Antoni B. Chan
In this paper, we formulate a novel crowd analysis problem, in which we aim to predict the crowd distribution in the near future given sequential frames of a crowd video without any identity annotations.
1 code implementation • CVPR 2020 • Huaidong Zhang, Xuemiao Xu, Guoqiang Han, Shengfeng He
It avoids the heavy computation of exhaustively searching all the cycle lengths in the video, and, instead, it propagates the coarse prediction for further refinement in a hierarchical manner.
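As a hedged illustration of coarse-to-fine period estimation, the sketch below uses plain autocorrelation on a 1D signal in place of the paper's learned hierarchical refinement: a coarse estimate on a downsampled signal narrows the range searched at full resolution.

```python
# Illustrative coarse-to-fine period search on a 1D signal. Plain
# autocorrelation stands in for the paper's learned refinement modules.
import numpy as np

def autocorr_period(signal, min_p, max_p):
    x = signal - signal.mean()
    best_p, best_score = min_p, -np.inf
    for p in range(min_p, min(max_p, len(x) - 1) + 1):
        score = np.dot(x[:-p], x[p:]) / (len(x) - p)
        if score > best_score:
            best_p, best_score = p, score
    return best_p

def coarse_to_fine_period(signal, stride=4):
    # Coarse estimate on a temporally downsampled signal...
    coarse = autocorr_period(signal[::stride], 2, len(signal) // (2 * stride))
    center = coarse * stride
    # ...then refine only within a narrow window at full resolution.
    return autocorr_period(signal, max(2, center - stride), center + stride)
```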
no code implementations • 30 Mar 2020 • Jianbo Jiao, Linchao Bao, Yunchao Wei, Shengfeng He, Honghui Shi, Rynson Lau, Thomas S. Huang
This can be naturally generalized to span multiple scales with a Laplacian pyramid representation of the input data.
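For reference, a minimal Laplacian pyramid construction and reconstruction with OpenCV; for brevity it assumes the input dimensions are divisible by 2**levels.

```python
# Minimal Laplacian pyramid: band-pass residuals at each scale plus a
# low-frequency residual, and the corresponding reconstruction.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # band-pass residual at this scale
        current = down
    pyramid.append(current)            # low-frequency residual
    return pyramid

def reconstruct(pyramid):
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=(band.shape[1], band.shape[0])) + band
    return current
```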
no code implementations • ICCV 2019 • Xiaosheng Yan, Yuanlong Yu, Feigege Wang, Wenxi Liu, Shengfeng He, Jia Pan
We conduct comparison experiments on this dataset and demonstrate that our model outperforms the state-of-the-art in tasks of recovering segmentation mask and appearance for occluded vehicles.
1 code implementation • CVPR 2019 • Jiangliu Wang, Jianbo Jiao, Linchao Bao, Shengfeng He, Yun-hui Liu, Wei Liu
We conduct extensive experiments with C3D to validate the effectiveness of our proposed approach.
Ranked #40 on Self-Supervised Action Recognition on HMDB51
no code implementations • NeurIPS 2018 • Xin Yang, Ke Xu, Shaozhe Chen, Shengfeng He, Baocai Yin, Rynson Lau
Our aim is to discover the most informative sequence of regions for user input in order to produce a good alpha matte with minimum labeling efforts.
no code implementations • 22 Nov 2018 • Yibing Song, Jiawei Zhang, Lijun Gong, Shengfeng He, Linchao Bao, Jinshan Pan, Qingxiong Yang, Ming-Hsuan Yang
We first propose a facial component guided deep Convolutional Neural Network (CNN) to restore a coarse face image, which is denoted as the base image, where the facial components are automatically generated from the input face image.
no code implementations • 27 Sep 2018 • Wenxi Liu, Yibing Song, Dengsheng Chen, Shengfeng He, Yuanlong Yu, Tao Yan, Gerhard P. Hancke, Rynson W. H. Lau
In addition, we also propose a gated fusion scheme to control how the variations captured by the deformable convolution affect the original appearance.
no code implementations • 2 Apr 2018 • Xiaowei Hu, Xuemiao Xu, Yongjie Xiao, Hao Chen, Shengfeng He, Jing Qin, Pheng-Ann Heng
Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for fast detection of vehicles with a large variance of scales.
no code implementations • 10 Nov 2017 • Shao Huang, Weiqiang Wang, Shengfeng He, Rynson W. H. Lau
Egocentric videos, which mainly record the activities carried out by the users of wearable cameras, have drawn much research attention in recent years.
no code implementations • ICCV 2017 • Shengfeng He, Jianbo Jiao, Xiaodan Zhang, Guoqiang Han, Rynson W. H. Lau
Experiments show that the proposed multi-task network outperforms existing multi-task architectures, and the auxiliary subitizing network provides strong guidance to salient object detection by reducing false positives and producing coherent saliency maps.
no code implementations • 28 Aug 2017 • Yibing Song, Linchao Bao, Shengfeng He, Qingxiong Yang, Ming-Hsuan Yang
We address the problem of transferring the style of a headshot photo to face images.
no code implementations • 1 Aug 2017 • Yibing Song, Jiawei Zhang, Shengfeng He, Linchao Bao, Qingxiong Yang
We propose a two-stage method for face hallucination.
no code implementations • CVPR 2017 • Liangqiong Qu, Jiandong Tian, Shengfeng He, Yandong Tang, Rynson W. H. Lau
Two levels of features are derived from the global network and transferred to two parallel networks.
no code implementations • 12 Jul 2016 • Liangqiong Qu, Shengfeng He, Jiawei Zhang, Jiandong Tian, Yandong Tang, Qingxiong Yang
Numerous efforts have been made to design different low-level saliency cues for RGB-D saliency detection, such as color or depth contrast features and background and color compactness priors.
Ranked #24 on RGB-D Salient Object Detection on NJU2K
1 code implementation • CVPR 2016 • Wei-Chih Tu, Shengfeng He, Qingxiong Yang, Shao-Yi Chien
In this paper, we present a real-time salient object detection system based on the minimum spanning tree.
Ranked #5 on Video Salient Object Detection on MCL (using extra training data)
no code implementations • CVPR 2016 • Shengfeng He, Rynson W. H. Lau, Qingxiong Yang
To address it, we design a two-stage deep model to learn the intra-class association between the exemplars and query objects.
no code implementations • ICCV 2015 • Shengfeng He, Rynson W. H. Lau
In this paper, we propose a new approach to generate oriented object proposals (OOPs) to reduce the detection error caused by various orientations of the object.
no code implementations • CVPR 2013 • Shengfeng He, Qingxiong Yang, Rynson W. H. Lau, Jiang Wang, Ming-Hsuan Yang
A robust tracking framework based on the locality sensitive histograms is proposed, which consists of two main components: a new feature for tracking that is robust to illumination changes and a novel multi-region tracking algorithm that runs in real time even with hundreds of regions.
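A 1D sketch of the locality sensitive histogram recursion described in the paper (per-pixel histograms in which each contribution is weighted by alpha raised to the pixel distance, computed with a left-to-right and a right-to-left pass); the bin count and alpha are illustrative choices.

```python
# 1D sketch of locality sensitive histograms: exponentially weighted per-pixel
# histograms computed in two linear passes over the signal.
import numpy as np

def locality_sensitive_histograms(intensities, n_bins=16, alpha=0.9):
    """intensities: 1D array in [0, 1). Returns (N, n_bins) per-pixel histograms."""
    n = len(intensities)
    q = np.zeros((n, n_bins))
    q[np.arange(n), (intensities * n_bins).astype(int)] = 1.0  # one-hot bin per pixel

    left = np.zeros_like(q)
    right = np.zeros_like(q)
    left[0], right[-1] = q[0], q[-1]
    for p in range(1, n):                      # left-to-right pass
        left[p] = q[p] + alpha * left[p - 1]
    for p in range(n - 2, -1, -1):             # right-to-left pass
        right[p] = q[p] + alpha * right[p + 1]
    return left + right - q                    # avoid double-counting pixel p itself
```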