Search Results for author: Yinghao Xu

Found 12 papers, 7 papers with code

Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering

no code implementations • ICCV 2021 • Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao, Guofeng Zhang, Zhaopeng Cui

In this paper, we present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering with editing capability for a clustered and real-world scene.

Neural Rendering • Novel View Synthesis

CompConv: A Compact Convolution Module for Efficient Feature Learning

no code implementations • 19 Jun 2021 • Chen Zhang, Yinghao Xu, Yujun Shen

Convolutional Neural Networks (CNNs) have achieved remarkable success in various computer vision tasks but incur tremendous computational cost.

Data-Efficient Instance Generation from Instance Discrimination

1 code implementation • 8 Jun 2021 • Ceyuan Yang, Yujun Shen, Yinghao Xu, Bolei Zhou

Meanwhile, the learned instance discrimination capability from the discriminator is in turn exploited to encourage the generator for diverse generation.

Data Augmentation • Image Generation

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans

2 code implementations • CVPR 2021 • Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou

To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated.

Novel View Synthesis • Representation Learning

Generative Hierarchical Features from Synthesizing Images

1 code implementation • CVPR 2021 • Yinghao Xu, Yujun Shen, Jiapeng Zhu, Ceyuan Yang, Bolei Zhou

Generative Adversarial Networks (GANs) have recently advanced image synthesis by learning the underlying distribution of the observed data.

Face Verification • Image Classification +2

Unsupervised Landmark Learning from Unpaired Data

1 code implementation • 29 Jun 2020 • Yinghao Xu, Ceyuan Yang, Ziwei Liu, Bo Dai, Bolei Zhou

Recent attempts for unsupervised landmark learning leverage synthesized image pairs that are similar in appearance but different in poses.

Video Representation Learning with Visual Tempo Consistency

1 code implementation • 28 Jun 2020 • Ceyuan Yang, Yinghao Xu, Bo Dai, Bolei Zhou

Visual tempo, which describes how fast an action goes, has shown its potential in supervised action recognition.

Action Anticipation • Action Detection +3

Temporal Pyramid Network for Action Recognition

3 code implementations • CVPR 2020 • Ceyuan Yang, Yinghao Xu, Jianping Shi, Bo Dai, Bolei Zhou

Previous works often capture the visual tempo through sampling raw videos at multiple rates and constructing an input-level frame pyramid, which usually requires a costly multi-branch network to handle.

Action Recognition

Dense RepPoints: Representing Visual Objects with Dense Point Sets

2 code implementations • ECCV 2020 • Ze Yang, Yinghao Xu, Han Xue, Zheng Zhang, Raquel Urtasun, Li-Wei Wang, Stephen Lin, Han Hu

We present a new object representation, called Dense RepPoints, that utilizes a large set of points to describe an object at multiple levels, including both box level and pixel level.

Object Detection

A Main/Subsidiary Network Framework for Simplifying Binary Neural Networks

no code implementations • CVPR 2019 • Yinghao Xu, Xin Dong, Yudian Li, Hao Su

To reduce memory footprint and run-time latency, techniques such as neural network pruning and binarization have been explored separately.

Binarization • Image Classification

A Main/Subsidiary Network Framework for Simplifying Binary Neural Network

no code implementations • 11 Dec 2018 • Yinghao Xu, Xin Dong, Yudian Li, Hao Su

To reduce memory footprint and run-time latency, techniques such as neural network pruning and binarization have been explored separately.

Binarization • Image Classification +1
