Search Results for author: Jing Shao

Found 32 papers, 16 papers with code

Few-Shot Domain Expansion for Face Anti-Spoofing

no code implementations 27 Jun 2021 Bowen Yang, Jing Zhang, Zhenfei Yin, Jing Shao

In practice, given a handful of labeled samples from a new deployment scenario (target domain) and abundant labeled face images in the existing source domain, the FAS system is expected to perform well in the new scenario without sacrificing the performance on the original domain.

Face Anti-Spoofing · Face Recognition · +1

ForgeryNet: A Versatile Benchmark for Comprehensive Forgery Analysis

1 code implementation CVPR 2021 Yinan He, Bei Gan, Siyu Chen, Yichun Zhou, Guojun Yin, Luchuan Song, Lu Sheng, Jing Shao, Ziwei Liu

To counter this emerging threat, we construct the ForgeryNet dataset, an extremely large face forgery dataset with unified annotations in image- and video-level data across four tasks: 1) Image Forgery Classification, including two-way (real / fake), three-way (real / fake with identity-replaced forgery approaches / fake with identity-remained forgery approaches), and n-way (real and 15 respective forgery approaches) classification.

Classification · General Classification
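As a quick illustration of the three annotation granularities described above, an n-way label (real plus 15 forgery approaches) can be collapsed into the three-way and two-way labels. The Python sketch below is illustrative only; the grouping of approach ids into the two forgery families is an assumption, not the dataset's actual mapping.

```python
# Illustrative only: collapsing a hypothetical n-way ForgeryNet label
# (0 = real, 1-15 = forgery approaches) into three-way and two-way labels.
# The split of approach ids into the two forgery families is an assumption.
ID_REPLACED = {1, 2, 3, 4, 5, 6, 7}            # identity-replaced approaches (hypothetical ids)
ID_REMAINED = {8, 9, 10, 11, 12, 13, 14, 15}   # identity-remained approaches (hypothetical ids)

def to_three_way(n_way_label: int) -> int:
    """0 = real, 1 = fake (identity-replaced), 2 = fake (identity-remained)."""
    if n_way_label == 0:
        return 0
    return 1 if n_way_label in ID_REPLACED else 2

def to_two_way(n_way_label: int) -> int:
    """0 = real, 1 = fake."""
    return 0 if n_way_label == 0 else 1
```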

PV-NAS: Practical Neural Architecture Search for Video Recognition

no code implementations2 Nov 2020 ZiHao Wang, Chen Lin, Lu Sheng, Junjie Yan, Jing Shao

Recently, deep learning has been utilized to solve the video recognition problem due to its prominent representation ability.

Neural Architecture Search · Video Recognition

Thinking in Frequency: Face Forgery Detection by Mining Frequency-aware Clues

2 code implementations ECCV 2020 Yuyang Qian, Guojun Yin, Lu Sheng, Zixuan Chen, Jing Shao

As realistic facial manipulation technologies have achieved remarkable progress, social concerns about their potential malicious abuse have given rise to the emerging research topic of face forgery detection.

1st place solution for AVA-Kinetics Crossover in ActivityNet Challenge 2020

1 code implementation 16 Jun 2020 Siyu Chen, Junting Pan, Guanglu Song, Manyuan Zhang, Hao Shao, Ziyi Lin, Jing Shao, Hongsheng Li, Yu Liu

This technical report introduces our winning solution to the spatio-temporal action localization track, AVA-Kinetics Crossover, in ActivityNet Challenge 2020.

Spatio-Temporal Action Localization · Temporal Action Localization

Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization

1 code implementation CVPR 2021 Junting Pan, Siyu Chen, Mike Zheng Shou, Yu Liu, Jing Shao, Hongsheng Li

We propose to explicitly model the Actor-Context-Actor Relation, which is the relation between two actors based on their interactions with the context.

Action Detection · Spatio-Temporal Action Localization · +2
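For intuition, the relation described above can be read as a second-order interaction: each actor first attends to the scene context, and actor-to-actor relations are then computed from these context-conditioned features. The sketch below is an assumption-laden PyTorch illustration of that idea, not the authors' ACAR-Net implementation; all module names and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class ActorContextActorSketch(nn.Module):
    """Toy illustration of a second-order actor-context-actor interaction."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.actor_context = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.actor_actor = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, actors: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # actors: (B, N, D) pooled actor features; context: (B, H*W, D) scene features.
        # First order: each actor attends to the spatial context.
        actor_ctx, _ = self.actor_context(actors, context, context)
        # Second order: actors interact through their context-conditioned
        # features, i.e. the actor-context-actor relation.
        relation, _ = self.actor_actor(actor_ctx, actor_ctx, actor_ctx)
        return relation
```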

Morphing and Sampling Network for Dense Point Cloud Completion

2 code implementations 30 Nov 2019 Minghua Liu, Lu Sheng, Sheng Yang, Jing Shao, Shi-Min Hu

3D point cloud completion, the task of inferring the complete geometric shape from a partial point cloud, has been attracting attention in the community.

Point Cloud Completion

Context and Attribute Grounded Dense Captioning

no code implementations CVPR 2019 Guojun Yin, Lu Sheng, Bin Liu, Nenghai Yu, Xiaogang Wang, Jing Shao

Dense captioning aims at simultaneously localizing semantic regions and describing these regions-of-interest (ROIs) with short phrases or sentences in natural language.

Video Generation from Single Semantic Label Map

2 code implementations CVPR 2019 Junting Pan, Chengyu Wang, Xu Jia, Jing Shao, Lu Sheng, Junjie Yan, Xiaogang Wang

This paper proposes the novel task of video generation conditioned on a SINGLE semantic label map, which provides a good balance between flexibility and quality in the generation process.

Image Generation · Optical Flow Estimation · +1

Unsupervised Bi-directional Flow-based Video Generation from one Snapshot

no code implementations 3 Mar 2019 Lu Sheng, Junting Pan, Jiaming Guo, Jing Shao, Xiaogang Wang, Chen Change Loy

Imagining multiple consecutive frames given one single snapshot is challenging, since it is difficult to simultaneously predict diverse motions from a single image and faithfully generate novel frames without visual distortions.

Video Generation

Improving Referring Expression Grounding with Cross-modal Attention-guided Erasing

no code implementations CVPR 2019 Xihui Liu, ZiHao Wang, Jing Shao, Xiaogang Wang, Hongsheng Li

Referring expression grounding aims at locating certain objects or persons in an image with a referring expression, where the key challenge is to comprehend and align various types of information from the visual and textual domains, such as visual attributes, location, and interactions with surrounding regions.

Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection

1 code implementation 16 Sep 2018 Yongcheng Liu, Lu Sheng, Jing Shao, Junjie Yan, Shiming Xiang, Chunhong Pan

Specifically, given the image-level annotations, (1) we first develop a weakly-supervised detection (WSD) model, and then (2) construct an end-to-end multi-label image classification framework augmented by a knowledge distillation module that guides the classification model by the WSD model according to the class-level predictions for the whole image and the object-level visual features for object RoIs.

Classification · General Classification · +3
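The two-stage framework described above lends itself to a simple distillation objective: the classifier is trained on the image-level labels while being pulled toward the WSD teacher's class-level predictions and object-level (RoI) features. The following is a minimal sketch under those assumptions; the loss weights and tensor names are placeholders, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def wsd_distillation_loss(student_logits, labels,
                          teacher_logits, student_feat, teacher_feat,
                          alpha=0.5, beta=0.1):
    # Supervised multi-label loss on the image-level annotations.
    bce = F.binary_cross_entropy_with_logits(student_logits, labels)
    # Class-level distillation: match the WSD teacher's image-level predictions.
    kd_cls = F.binary_cross_entropy_with_logits(student_logits,
                                                torch.sigmoid(teacher_logits))
    # Feature-level distillation: match object-level (RoI) visual features.
    kd_feat = F.mse_loss(student_feat, teacher_feat)
    return bce + alpha * kd_cls + beta * kd_feat
```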

Transductive Centroid Projection for Semi-supervised Large-scale Recognition

no code implementations ECCV 2018 Yu Liu, Guanglu Song, Jing Shao, Xiao Jin, Xiaogang Wang

It is inspired by the observation that the weights in the classification layer (called anchors) converge to the central direction of each class in hyperspace.

General Classification

Localization Guided Learning for Pedestrian Attribute Recognition

no code implementations 28 Aug 2018 Pengze Liu, Xihui Liu, Junjie Yan, Jing Shao

Pedestrian attribute recognition has attracted much attention due to its wide applications in scene understanding and person analysis from surveillance videos.

Pedestrian Attribute Recognition · Scene Understanding

BlockQNN: Efficient Block-wise Neural Network Architecture Generation

2 code implementations 16 Aug 2018 Zhao Zhong, Zichen Yang, Boyang Deng, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu

The block-wise generation brings unique advantages: (1) it yields state-of-the-art results in comparison to the hand-crafted networks on image classification; in particular, the best network generated by BlockQNN achieves a 2.35% top-1 error rate on CIFAR-10.

Image Classification · Q-Learning

Zoom-Net: Mining Deep Feature Interactions for Visual Relationship Recognition

no code implementations ECCV 2018 Guojun Yin, Lu Sheng, Bin Liu, Nenghai Yu, Xiaogang Wang, Jing Shao, Chen Change Loy

We show that by encouraging deep message propagation and interactions between local object features and global predicate features, one can achieve compelling performance in recognizing complex relationships without using any linguistic priors.
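As a rough picture of the message propagation described above, local subject/object features and a global predicate feature can exchange messages and refine one another. The sketch below is a generic illustration of such bidirectional message passing, not Zoom-Net's actual module; layer names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class ObjectPredicateMessagePassing(nn.Module):
    """Toy bidirectional message passing between object and predicate features."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.obj_to_pred = nn.Linear(2 * dim, dim)  # (subject, object) -> predicate message
        self.pred_to_obj = nn.Linear(dim, dim)      # predicate -> object message

    def forward(self, subj, obj, pred):
        # subj, obj: (B, D) local object features; pred: (B, D) global predicate feature.
        pred = pred + torch.relu(self.obj_to_pred(torch.cat([subj, obj], dim=-1)))
        msg = torch.relu(self.pred_to_obj(pred))
        return subj + msg, obj + msg, pred
```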

Avatar-Net: Multi-scale Zero-shot Style Transfer by Feature Decoration

3 code implementations CVPR 2018 Lu Sheng, Ziyi Lin, Jing Shao, Xiaogang Wang

Zero-shot artistic style transfer is an important image synthesis problem aiming at transferring arbitrary style into content images.

Image Generation · Image Reconstruction · +1

Slicing Convolutional Neural Network for Crowd Video Understanding

no code implementations CVPR 2016 Jing Shao, Chen-Change Loy, Kai Kang, Xiaogang Wang

Learning and capturing both appearance and dynamic representations are pivotal for crowd video understanding.

Video Understanding
