no code implementations • 3 Sep 2024 • Yifei Yang, Zikai Huang, Chenshu Xu, Shengfeng He
This method seamlessly integrates static spatial information with interpretable temporal dynamics, transcending the limitations of existing network architectures and motion sequence content types.
no code implementations • 23 Aug 2024 • Yangyang Xu, Wenqi Shao, Yong Du, Haiming Zhu, Yang Zhou, Ping Luo, Shengfeng He
Recent advancements in text-guided diffusion models have unlocked powerful image manipulation capabilities, yet balancing reconstruction fidelity and editability for real images remains a significant challenge.
1 code implementation • 18 Aug 2024 • Xinjie Jiang, Chenxi Zheng, Xuemiao Xu, Bangzhen Liu, Weiying Zheng, Huaidong Zhang, Shengfeng He
Video Visual Relation Detection (VidVRD) focuses on understanding how entities interact over time and space in videos, a key step for gaining deeper insights into video scenes beyond basic visual tasks.
1 code implementation • 18 Aug 2024 • Haoxin Yang, Xuemiao Xu, Cheng Xu, Huaidong Zhang, Jing Qin, Yi Wang, Pheng-Ann Heng, Shengfeng He
This paper introduces G²Face, which leverages both generative and geometric priors to enhance identity manipulation, achieving high-quality reversible face anonymization without compromising data utility.
no code implementations • 24 Jul 2024 • Yi Lei, Huilin Zhu, Jingling Yuan, Guangli Xiang, Xian Zhong, Shengfeng He
Drone-based crowd tracking faces difficulties in accurately identifying and monitoring objects from an aerial perspective, largely due to their small size and close proximity to each other, which complicates both localization and tracking.
1 code implementation • 6 Jul 2024 • Huilin Zhu, Jingling Yuan, Zhengwei Yang, Yu Guo, Zheng Wang, Xian Zhong, Shengfeng He
Zero-shot object counting (ZOC) aims to enumerate objects in images using only the names of object classes during testing, without the need for manual annotations.
1 code implementation • 5 Jul 2024 • Yu Guo, Yuan Gao, Yuxu Lu, Huilin Zhu, Ryan Wen Liu, Shengfeng He
In real-world scenarios, image impairments often manifest as composite degradations, presenting a complex interplay of elements such as low light, haze, rain, and snow.
1 code implementation • CVPR 2024 • Haofeng Liu, Chenshu Xu, Yifei Yang, Lihua Zeng, Shengfeng He
Point-based interactive editing serves as an essential tool to complement the controllability of existing generative models.
1 code implementation • CVPR 2024 • YingJie Xu, Bangzhen Liu, Hao Tang, Bailin Deng, Shengfeng He
We propose a voxel-based optimization framework, ReVoRF, for few-shot radiance fields that strategically addresses the unreliability in pseudo novel view synthesis.
1 code implementation • CVPR 2024 • Guanzhou Ke, Bo Wang, Xiaoli Wang, Shengfeng He
To this end, we propose an innovative framework for multi-view representation learning, which incorporates a technique we term 'distilled disentangling'.
1 code implementation • 14 Mar 2024 • Xiao Ma, Shengfeng He, Hezhe Qiao, Dong Ma
Enabling efficient and accurate deep neural network (DNN) inference on microcontrollers is non-trivial due to the constrained on-chip resources.
no code implementations • 19 Feb 2024 • Haiming Zhu, Yangyang Xu, Shengfeng He
In this paper, we present QueryWarp, a novel framework for temporally coherent human motion video translation.
1 code implementation • CVPR 2024 • Yuyang Yu, Bangzhen Liu, Chenxi Zheng, Xuemiao Xu, Huaidong Zhang, Shengfeng He
In this paper, we delve into a novel aspect of conditional generation: learning novel diffusion conditions from datasets an order of magnitude smaller.
no code implementations • CVPR 2024 • Yi Xie, Yihong Lin, Wenjie Cai, Xuemiao Xu, Huaidong Zhang, Yong Du, Shengfeng He
Existing methods for asymmetric image retrieval employ a rigid pairwise similarity constraint between the query network and the larger gallery network.
no code implementations • 22 Nov 2023 • Yangyang Xu, Shengfeng He, Wenqi Shao, Kwan-Yee K. Wong, Yu Qiao, Ping Luo
In this paper, we introduce DiffusionMat, a novel image matting framework that employs a diffusion model for the transition from coarse to refined alpha mattes.
no code implementations • 22 Oct 2023 • Yong Du, Jiahui Zhan, Shengfeng He, Xinzhe Li, Junyu Dong, Sheng Chen, Ming-Hsuan Yang
In this paper, we propose a novel translation model, UniTranslator, for transforming representations between visually distinct domains under conditions of limited training data and significant visual differences.
no code implementations • 16 Sep 2023 • Xin Jiang, Hao Tang, Junyao Gao, Xiaoyu Du, Shengfeng He, Zechao Li
In this paper, we aim to fully exploit the capabilities of cross-modal description to tackle FGVC tasks and propose a novel multimodal prompting solution, denoted as MP-FGVC, based on the contrastive language-image pre-training (CLIP) model.
Ranked #7 on Fine-Grained Image Classification on NABirds
1 code implementation • 23 Aug 2023 • Ziyu Yang, Sucheng Ren, Zongwei Wu, Nanxuan Zhao, Junle Wang, Jing Qin, Shengfeng He
Non-photorealistic videos are in demand with the wave of the metaverse, yet they lack sufficient research attention.
no code implementations • ICCV 2023 • Yangyang Xu, Shengfeng He, Kwan-Yee K. Wong, Ping Luo
In this paper, we propose a unified recurrent framework, named Recurrent vIdeo GAN Inversion and eDiting (RIGID), to explicitly and simultaneously enforce temporally coherent GAN inversion and facial editing of real videos.
1 code implementation • 10 Aug 2023 • Huilin Zhu, Jingling Yuan, Xian Zhong, Zhengwei Yang, Zheng Wang, Shengfeng He
Domain adaptation is commonly employed in crowd counting to bridge the domain gaps between different datasets.
1 code implementation • 3 Aug 2023 • Guanzhou Ke, Yang Yu, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Shengfeng He
In this paper, we propose a novel multi-view representation disentangling method that aims to go beyond inductive biases, ensuring both interpretability and generalizability of the resulting representations.
no code implementations • 19 Apr 2023 • Yang Zhou, Hanjie Wu, Wenxi Liu, Zheng Xiong, Jing Qin, Shengfeng He
In this way, the challenging novel view synthesis process is decoupled into two simpler problems of stereo synthesis and 3D reconstruction.
1 code implementation • 17 Apr 2023 • Yu Guo, Yuan Gao, Ryan Wen Liu, Yuxu Lu, Jingxiang Qu, Shengfeng He, Wenqi Ren
The presence of non-homogeneous haze can cause scene blurring, color distortion, low contrast, and other degradations that obscure texture details.
1 code implementation • CVPR 2023 • Yu Zheng, Jiahui Zhan, Shengfeng He, Junyu Dong, Yong Du
In this paper, we propose a novel curricular contrastive regularization targeted at a consensual contrastive space as opposed to a non-consensual one.
Ranked #4 on Image Dehazing on SOTS Indoor
no code implementations • CVPR 2023 • Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He
Specifically, the student model is initially a heavy model, so that it can fruitfully absorb distilled knowledge in the early training epochs, and it is gradually compressed as training proceeds.
no code implementations • ICCV 2023 • Yuhui Quan, Haoran Huang, Shengfeng He, Ruotao Xu
Removing moire patterns from videos recorded on screens or complex textures is known as video demoireing.
1 code implementation • ICCV 2023 • Weiying Zheng, Cheng Xu, Xuemiao Xu, Wenxi Liu, Shengfeng He
Video inpainting aims at filling in missing regions of a video.
1 code implementation • CVPR 2023 • Chenxi Zheng, Bangzhen Liu, Huaidong Zhang, Xuemiao Xu, Shengfeng He
The rationale is that we aim to locate a centroid latent position in a conditional StyleGAN, where the corresponding output image at that centroid can maximize the similarity with the given samples.
1 code implementation • ICCV 2023 • Yutao Jiang, Yang Zhou, Yuan Liang, Wenxi Liu, Jianbo Jiao, Yuhui Quan, Shengfeng He
To address the above issues, we propose Diffuse3D which employs a pre-trained diffusion model for global synthesis, while amending the model to activate depth-aware inference.
no code implementations • 15 Nov 2022 • Wenxi Liu, Qi Li, Weixiang Yang, Jiaxin Cai, Yuanlong Yu, Yuexin Ma, Shengfeng He, Jia Pan
We propose a front-to-top view projection (FTVP) module, which takes the constraint of cycle consistency between views into account and makes full use of their correlation to strengthen the view transformation and scene understanding.
1 code implementation • 13 Aug 2022 • Yangyang Xu, Zeyang Zhou, Shengfeng He
Particularly, we invert an input portrait into the latent code of StyleGAN, and our aim is to discover whether there is an enhanced version in the latent space which is more compatible with a reference matting model.
1 code implementation • 17 Jul 2022 • Haorui Song, Yong Du, Tianyi Xiang, Junyu Dong, Jing Qin, Shengfeng He
Consequently, in the decomposition phase, we further present a GAN-prior-based deghosting network for separating the final fine edited image from the coarse reconstruction.
1 code implementation • CVPR 2022 • Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie
More notably, our SDMP is the first method that successfully leverages data mixing to improve (rather than hurt) the performance of Vision Transformers in the self-supervised setting.
no code implementations • 29 May 2022 • Zheng Xiong, Liangyu Chai, Wenxi Liu, Yongtuo Liu, Sucheng Ren, Shengfeng He
To enable training under this new setting, we convert the crowd count regression problem to a ranking potential prediction problem.
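One common way to instantiate this kind of ranking supervision (a generic sketch of a pairwise ranking objective, not necessarily the paper's exact loss) is a hinge loss that asks the predicted potential of a containing region to be at least that of any sub-region it encloses:

```python
import numpy as np

def pairwise_ranking_loss(scores_outer, scores_inner, margin=0.0):
    """Hinge-style ranking loss: for each (outer, inner) region pair where the
    outer region spatially contains the inner one, penalize predictions that
    rank the inner region's potential above the outer region's.
    Illustrative only; the paper's formulation may differ."""
    outer = np.asarray(scores_outer, dtype=np.float64)
    inner = np.asarray(scores_inner, dtype=np.float64)
    # violation occurs when inner > outer - margin
    return np.maximum(0.0, inner - outer + margin).mean()
```

A correctly ordered pair (outer potential above inner) incurs zero loss; a reversed pair is penalized linearly by the violation amount.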
1 code implementation • CVPR 2022 • Yangyang Xu, Bailin Deng, Junle Wang, Yanqing Jing, Jia Pan, Shengfeng He
Although previous research can leverage generative priors to produce high-resolution results, their quality can suffer from the entangled semantics of the latent space.
1 code implementation • CVPR 2022 • Zhixuan Zhong, Liangyu Chai, Yang Zhou, Bailin Deng, Jia Pan, Shengfeng He
This paper presents a Generative prior ReciprocAted Invertible rescaling Network (GRAIN) for generating faithful high-resolution (HR) images from low-resolution (LR) invertible images with an extreme upscaling factor (64x).
1 code implementation • CVPR 2022 • Sucheng Ren, Daquan Zhou, Shengfeng He, Jiashi Feng, Xinchao Wang
This novel merging scheme enables the self-attention to learn relationships between objects with different sizes and simultaneously reduces the token numbers and the computational cost.
1 code implementation • ICCV 2021 • Wenxi Liu, Qi Li, Xindai Lin, Weixiang Yang, Shengfeng He, Yuanlong Yu
In particular, we introduce a novel locality-aware context fusion based segmentation model to process local patches, where the relevance between a local patch and its various contexts is jointly and complementarily utilized to handle semantic regions with large variations.
Ranked #2 on Land Cover Classification on DeepGlobe
no code implementations • 6 Aug 2021 • Yongtuo Liu, Sucheng Ren, Liangyu Chai, Hanjie Wu, Jing Qin, Dan Xu, Shengfeng He
In this way, we can transfer the original spatial labeling redundancy caused by individual similarities to effective supervision signals on the unlabeled regions.
no code implementations • 6 Aug 2021 • Yongtuo Liu, Dan Xu, Sucheng Ren, Hanjie Wu, Hongmin Cai, Shengfeng He
To this end, we propose to untangle domain-invariant crowd and domain-specific background from crowd images and design a fine-grained domain adaptation method for crowd counting.
1 code implementation • 5 Aug 2021 • Sucheng Ren, Qiang Wen, Nanxuan Zhao, Guoqiang Han, Shengfeng He
In this paper, we introduce a new attention-based encoder, vision transformer, into salient object detection to ensure the globalization of the representations from shallow to deep layers.
2 code implementations • ICCV 2021 • Yangyang Xu, Yong Du, Wenpeng Xiao, Xuemiao Xu, Shengfeng He
This inborn property is used for two unique purposes: 1) regularizing the joint inversion process, such that each of the inverted codes is semantically accessible from the others and fastened in an editable domain; 2) enforcing inter-image coherence, such that the fidelity of each inverted code can be maximized with the complement of the other images.
no code implementations • CVPR 2022 • Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao
Transformers have recently been adapted from the natural language processing community as a promising substitute for convolution-based neural networks in visual learning tasks.
1 code implementation • CVPR 2021 • Sucheng Ren, Wenxi Liu, Yongtuo Liu, Haoxin Chen, Guoqiang Han, Shengfeng He
Additionally, to exclude the information of moving background objects from motion features, our transformation module reciprocally transforms the appearance features to enhance the motion features, so as to focus on moving objects with salient appearance while removing co-moving outliers.
Ranked #12 on Unsupervised Video Object Segmentation on DAVIS 2016 val
1 code implementation • CVPR 2021 • Haoxin Chen, Hanjie Wu, Nanxuan Zhao, Sucheng Ren, Shengfeng He
The key is to model the relationship between the query videos and the support images for propagating the object information.
no code implementations • CVPR 2021 • Sucheng Ren, Yong Du, Jianming Lv, Guoqiang Han, Shengfeng He
To these ends, we introduce a trainable "master" network which ingests both audio signals and silent lip videos instead of a pretrained teacher.
1 code implementation • CVPR 2021 • Han Deng, Chu Han, Hongmin Cai, Guoqiang Han, Shengfeng He
In this paper, we take a different perspective to break down the makeup transfer problem into a two-step extraction-assignment process.
1 code implementation • CVPR 2021 • Weixiang Yang, Qi Li, Wenxi Liu, Yuanlong Yu, Yuexin Ma, Shengfeng He, Jia Pan
Furthermore, our model runs at 35 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction.
1 code implementation • CVPR 2021 • Huiting Yang, Liangyu Chai, Qiang Wen, Shuang Zhao, Zixun Sun, Shengfeng He
In this way, arbitrary attributes can be edited by collecting positive data only, and the proposed method learns a controllable representation enabling manipulation of non-binary attributes like anime styles and facial characteristics.
1 code implementation • 19 Apr 2021 • Shengfeng He, Bing Peng, Junyu Dong, Yong Du
Shadow removal is an important yet challenging task in image processing and computer vision.
no code implementations • 31 Mar 2021 • Xin Yang, Yu Qiao, Shaozhe Chen, Shengfeng He, BaoCai Yin, Qiang Zhang, Xiaopeng Wei, Rynson W. H. Lau
Image matting is an ill-posed problem that usually requires additional user input, such as trimaps or scribbles.
2 code implementations • 31 Aug 2020 • Jiangliu Wang, Jianbo Jiao, Linchao Bao, Shengfeng He, Wei Liu, Yun-hui Liu
Specifically, given an unlabeled video clip, we compute a series of spatio-temporal statistical summaries, such as the spatial location and dominant direction of the largest motion, the spatial location and dominant color of the largest color diversity along the temporal axis, etc.
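A toy version of one such statistical summary can make the idea concrete. The sketch below (my own illustration, not the authors' label definition) accumulates frame-difference magnitude over a clip and reports which spatial block carries the largest motion:

```python
import numpy as np

def largest_motion_block(clip, grid=4):
    """Return the (row, col) index of the grid block with the largest
    accumulated frame-difference energy in a clip of shape (T, H, W).
    A simplified stand-in for the 'spatial location of the largest
    motion' summary described above."""
    # per-pixel motion energy: sum of absolute temporal differences
    diff = np.abs(np.diff(clip.astype(np.float64), axis=0)).sum(axis=0)
    H, W = diff.shape
    bh, bw = H // grid, W // grid
    # pool energy into a grid x grid map of blocks
    block_energy = diff[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw).sum(axis=(1, 3))
    return np.unravel_index(np.argmax(block_energy), block_energy.shape)
```

Such cheaply computed summaries can serve as free supervision signals: the network is trained to regress them directly from raw frames.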
no code implementations • ECCV 2020 • Sucheng Ren, Chu Han, Xin Yang, Guoqiang Han, Shengfeng He
In this paper, we propose a simple yet effective approach, named Triple Excitation Network, to reinforce the training of video salient object detection (VSOD) from three aspects, spatial, temporal, and online excitations.
no code implementations • 9 Jun 2020 • Yuzhen Niu, Weifeng Shi, Wenxi Liu, Shengfeng He, Jia Pan, Antoni B. Chan
In this paper, we formulate a novel crowd analysis problem, in which we aim to predict the crowd distribution in the near future given sequential frames of a crowd video without any identity annotations.
1 code implementation • CVPR 2020 • Huaidong Zhang, Xuemiao Xu, Guoqiang Han, Shengfeng He
It avoids the heavy computation of exhaustively searching all the cycle lengths in the video, and, instead, it propagates the coarse prediction for further refinement in a hierarchical manner.
no code implementations • 30 Mar 2020 • Jianbo Jiao, Linchao Bao, Yunchao Wei, Shengfeng He, Honghui Shi, Rynson Lau, Thomas S. Huang
This can be naturally generalized to span multiple scales with a Laplacian pyramid representation of the input data.
no code implementations • ICCV 2019 • Xiaosheng Yan, Yuanlong Yu, Feigege Wang, Wenxi Liu, Shengfeng He, Jia Pan
We conduct comparison experiments on this dataset and demonstrate that our model outperforms the state-of-the-art in tasks of recovering segmentation mask and appearance for occluded vehicles.
1 code implementation • CVPR 2019 • Jiangliu Wang, Jianbo Jiao, Linchao Bao, Shengfeng He, Yun-hui Liu, Wei Liu
We conduct extensive experiments with C3D to validate the effectiveness of our proposed approach.
Ranked #47 on Self-Supervised Action Recognition on HMDB51
no code implementations • NeurIPS 2018 • Xin Yang, Ke Xu, Shaozhe Chen, Shengfeng He, Baocai Yin, Rynson Lau
Our aim is to discover the most informative sequence of regions for user input in order to produce a good alpha matte with minimum labeling efforts.
no code implementations • 22 Nov 2018 • Yibing Song, Jiawei Zhang, Lijun Gong, Shengfeng He, Linchao Bao, Jinshan Pan, Qingxiong Yang, Ming-Hsuan Yang
We first propose a facial component guided deep Convolutional Neural Network (CNN) to restore a coarse face image, denoted as the base image, in which the facial components are automatically generated from the input face image.
no code implementations • 27 Sep 2018 • Wenxi Liu, Yibing Song, Dengsheng Chen, Shengfeng He, Yuanlong Yu, Tao Yan, Gerhard P. Hancke, Rynson W. H. Lau
In addition, we also propose a gated fusion scheme to control how the variations captured by the deformable convolution affect the original appearance.
no code implementations • 2 Apr 2018 • Xiaowei Hu, Xuemiao Xu, Yongjie Xiao, Hao Chen, Shengfeng He, Jing Qin, Pheng-Ann Heng
Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for fast detecting vehicles with a large variance of scales.
no code implementations • 10 Nov 2017 • Shao Huang, Weiqiang Wang, Shengfeng He, Rynson W. H. Lau
Egocentric videos, which mainly record the activities carried out by the users of wearable cameras, have drawn much research attention in recent years.
no code implementations • ICCV 2017 • Shengfeng He, Jianbo Jiao, Xiaodan Zhang, Guoqiang Han, Rynson W. H. Lau
Experiments show that the proposed multi-task network outperforms existing multi-task architectures, and the auxiliary subitizing network provides strong guidance to salient object detection by reducing false positives and producing coherent saliency maps.
no code implementations • 28 Aug 2017 • Yibing Song, Linchao Bao, Shengfeng He, Qingxiong Yang, Ming-Hsuan Yang
We address the problem of transferring the style of a headshot photo to face images.
no code implementations • 1 Aug 2017 • Yibing Song, Jiawei Zhang, Shengfeng He, Linchao Bao, Qingxiong Yang
We propose a two-stage method for face hallucination.
3 code implementations • CVPR 2017 • Liangqiong Qu, Jiandong Tian, Shengfeng He, Yandong Tang, Rynson W. H. Lau
Two levels of features are derived from the global network and transferred to two parallel networks.
no code implementations • 12 Jul 2016 • Liangqiong Qu, Shengfeng He, Jiawei Zhang, Jiandong Tian, Yandong Tang, Qingxiong Yang
Numerous efforts have been made to design different low-level saliency cues for RGB-D saliency detection, such as color or depth contrast features, background and color compactness priors.
Ranked #26 on RGB-D Salient Object Detection on NJU2K
no code implementations • CVPR 2016 • Shengfeng He, Rynson W. H. Lau, Qingxiong Yang
To address it, we design a two-stage deep model to learn the intra-class association between the exemplars and query objects.
1 code implementation • CVPR 2016 • Wei-Chih Tu, Shengfeng He, Qingxiong Yang, Shao-Yi Chien
In this paper, we present a real-time salient object detection system based on the minimum spanning tree.
Ranked #5 on Video Salient Object Detection on MCL (using extra training data)
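As background for the MST-based design, the core data structure is a minimum spanning tree over the pixel grid with intensity-difference edge weights, along which distance queries reduce to tree paths. The sketch below (an illustrative Prim's-algorithm construction, not the authors' pipeline) builds that tree:

```python
import heapq
import numpy as np

def grid_mst_edges(img):
    """Prim's algorithm over a 4-connected pixel grid, edge weight
    |I(p) - I(q)|. Returns the MST edge list as (p, q, weight) tuples.
    Illustrates the structure underlying MST-based saliency methods."""
    H, W = img.shape
    visited = np.zeros((H, W), dtype=bool)
    edges = []
    heap = [(0.0, (0, 0), (0, 0))]  # (weight, parent, pixel)
    while heap:
        w, p, q = heapq.heappop(heap)
        if visited[q]:
            continue
        visited[q] = True
        if p != q:
            edges.append((p, q, w))
        y, x = q
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and not visited[ny, nx]:
                weight = abs(float(img[ny, nx]) - float(img[y, x]))
                heapq.heappush(heap, (weight, (y, x), (ny, nx)))
    return edges
```

Because the tree has only N − 1 edges, subsequent distance computations along it can run in linear time, which is what makes real-time operation plausible.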
no code implementations • ICCV 2015 • Shengfeng He, Rynson W. H. Lau
In this paper, we propose a new approach to generate oriented object proposals (OOPs) to reduce the detection error caused by various orientations of the object.
no code implementations • CVPR 2013 • Shengfeng He, Qingxiong Yang, Rynson W. H. Lau, Jiang Wang, Ming-Hsuan Yang
A robust tracking framework based on locality sensitive histograms is proposed, which consists of two main components: a new tracking feature that is robust to illumination changes, and a novel multi-region tracking algorithm that runs in real time even with hundreds of regions.
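The locality sensitive histogram weights each pixel's contribution by an exponentially decaying factor of its distance to the center, and the published recursion computes all positions in linear time with a forward and a backward pass. A 1-D sketch (my own reconstruction of that recursion, not the authors' code):

```python
import numpy as np

def locality_sensitive_histograms_1d(bins_of, n_bins, alpha):
    """For each position p, compute H_p(b) = sum_q alpha^|p-q| * [bin(q) == b]
    in O(N * B) via left-to-right and right-to-left recursions."""
    n = len(bins_of)
    left = np.zeros((n, n_bins))
    right = np.zeros((n, n_bins))
    for p in range(n):                     # forward pass
        if p > 0:
            left[p] = alpha * left[p - 1]  # decay accumulated histogram
        left[p, bins_of[p]] += 1.0
    for p in range(n - 1, -1, -1):         # backward pass
        if p < n - 1:
            right[p] = alpha * right[p + 1]
        right[p, bins_of[p]] += 1.0
    H = left + right
    for p in range(n):
        H[p, bins_of[p]] -= 1.0            # the center pixel was counted twice
    return H
```

The same two-pass trick applied per image row and column is what keeps the feature cheap enough for real-time tracking.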