Search Results for author: He Wen

Found 7 papers, 4 papers with code

Modeling Clothing as a Separate Layer for an Animatable Human Avatar

no code implementations · 28 Jun 2021 · Donglai Xiang, Fabian Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu

To address these difficulties, we propose a method to build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.

InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image

2 code implementations · ECCV 2020 · Gyeongsik Moon, Shoou-I Yu, He Wen, Takaaki Shiratori, Kyoung Mu Lee

Therefore, we first propose (1) a large-scale dataset, InterHand2.6M, and (2) a baseline network, InterNet, for 3D interacting hand pose estimation from a single RGB image.

3D Hand Pose Estimation

Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks

no code implementations · 22 Jun 2017 · Shuchang Zhou, Yuzhi Wang, He Wen, Qinyao He, Yuheng Zou

Overall, our method improves the prediction accuracies of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both Convolutional Neural Networks and Recurrent Neural Networks.

Quantization
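The balanced-quantization idea from the abstract (equalizing how many weights fall into each quantization bin so no bin is wasted) can be sketched as follows. This is a simplified illustration of the histogram-equalization intuition, not the paper's exact algorithm; the function name and the choice of 2^k evenly spaced output levels in [-1, 1] are assumptions for the example.

```python
import numpy as np

def balanced_quantize(w, k):
    """Sketch of balanced quantization: partition the weights into 2^k
    equally populated bins via percentiles, then map each bin to an
    evenly spaced level in [-1, 1]. Illustrative only, not the paper's
    exact method."""
    n_bins = 2 ** k
    # Percentile boundaries so each bin holds roughly the same number of weights.
    edges = np.percentile(w, np.linspace(0, 100, n_bins + 1))
    # Assign each weight to a bin index in [0, n_bins - 1].
    idx = np.clip(np.searchsorted(edges, w, side="right") - 1, 0, n_bins - 1)
    # Evenly spaced quantization levels in [-1, 1].
    levels = np.linspace(-1.0, 1.0, n_bins)
    return levels[idx]
```

Because the bin edges come from percentiles rather than a fixed uniform grid, heavily skewed weight distributions still use all 2^k levels, which is the property the abstract credits for avoiding accuracy loss.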

Training Bit Fully Convolutional Network for Fast Semantic Segmentation

no code implementations · 1 Dec 2016 · He Wen, Shuchang Zhou, Zhe Liang, Yuxiang Zhang, Dieqiao Feng, Xinyu Zhou, Cong Yao

Fully convolutional neural networks give accurate, per-pixel prediction for input images and have applications like semantic segmentation.

Semantic Segmentation

Effective Quantization Methods for Recurrent Neural Networks

2 code implementations · 30 Nov 2016 · Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou, Yuheng Zou

In addition, we propose balanced quantization methods for weights to further reduce performance degradation.

Quantization

DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients

12 code implementations · 20 Jun 2016 · Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou

We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients.

Quantization
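The DoReFa-Net weight-quantization step described in the abstract can be sketched in NumPy. The scheme below follows the paper's published formulation (tanh-squash weights into [0, 1], round uniformly to k bits, rescale to [-1, 1]); the function names are assumptions for the example, and a real implementation would also apply the straight-through gradient estimator during training.

```python
import numpy as np

def quantize_k(x, k):
    """Uniform k-bit rounding of values in [0, 1] to 2^k - 1 steps."""
    n = 2 ** k - 1
    return np.round(x * n) / n

def quantize_weights(w, k):
    """k-bit weight quantization in the DoReFa-Net style:
    squash weights into [0, 1] with tanh, round to k bits,
    then rescale back to [-1, 1]."""
    t = np.tanh(w)
    x = t / (2 * np.max(np.abs(t))) + 0.5  # now in [0, 1]
    return 2 * quantize_k(x, k) - 1
```

With k = 1 this collapses weights to {-1, +1}, which is what lets the paper replace most multiplications in both the forward pass and the gradient computation with cheap bit operations.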
