Search Results for author: Chunpeng Wu

Found 12 papers, 4 papers with code

Cycle Self-Training for Semi-Supervised Object Detection with Distribution Consistency Reweighting

no code implementations 12 Jul 2022 Hao Liu, Bin Chen, Bo Wang, Chunpeng Wu, Feng Dai, Peng Wu

To address the coupling problem, we propose a Cycle Self-Training (CST) framework for SSOD, which consists of two teachers, T1 and T2, and two students, S1 and S2.

Object Detection +1
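
The snippet above names CST's components but not its update rule. The sketch below is one hedged reading, assuming each teacher pseudo-labels unlabeled images for the other pair's student and is maintained as an exponential-moving-average (EMA) copy of its own student; the function and argument names (`ema_update`, `loss_fn`, etc.) are illustrative, not the paper's API.

```python
# Hedged sketch of a cross teacher-student training step for SSOD.
# Assumptions: each teacher pseudo-labels for the *other* student, and each
# teacher is an EMA copy of its own student.
import torch

def ema_update(teacher, student, momentum=0.999):
    """Update teacher weights as an EMA of its student's weights."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def cycle_self_training_step(s1, s2, t1, t2, unlabeled, loss_fn, opt1, opt2):
    # Teachers predict pseudo boxes/labels on unlabeled images.
    with torch.no_grad():
        pseudo_from_t1 = t1(unlabeled)   # will supervise S2
        pseudo_from_t2 = t2(unlabeled)   # will supervise S1

    # Cross supervision: each student learns from the other cycle's teacher.
    loss1 = loss_fn(s1(unlabeled), pseudo_from_t2)
    loss2 = loss_fn(s2(unlabeled), pseudo_from_t1)

    opt1.zero_grad(); loss1.backward(); opt1.step()
    opt2.zero_grad(); loss2.backward(); opt2.step()

    # Each teacher tracks its own student.
    ema_update(t1, s1)
    ema_update(t2, s2)
```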

MVStylizer: An Efficient Edge-Assisted Video Photorealistic Style Transfer System for Mobile Phones

no code implementations 24 May 2020 Ang Li, Chunpeng Wu, Yiran Chen, Bin Ni

Instead of performing stylization frame by frame, only key frames in the original video are processed by a pre-trained deep neural network (DNN) on edge servers, while the rest of the stylized intermediate frames are generated by our optical-flow-based frame interpolation algorithm on mobile phones.

Federated Learning Optical Flow Estimation +2
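
As a rough illustration of the key-frame-plus-interpolation pipeline described above: only key frames pass through a style-transfer DNN, and in-between stylized frames are approximated by warping the most recent stylized result with optical flow. `style_net` and `estimate_flow` are placeholder callables, and the simple backward warp below is an assumption, not the paper's interpolation algorithm.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Warp a stylized frame (N,C,H,W) with a dense flow field (N,2,H,W)."""
    n, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(frame.device)
    coords = grid + flow
    # Normalize to [-1, 1] as required by grid_sample.
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, coords.permute(0, 2, 3, 1), align_corners=True)

def stylize_video(frames, style_net, estimate_flow, key_every=10):
    """Stylize key frames with a DNN; propagate style to the rest via warping."""
    out, last = [], None
    for i, frame in enumerate(frames):
        if i % key_every == 0:
            last = style_net(frame)                      # heavy DNN pass (edge server)
        else:
            flow = estimate_flow(frames[i - 1], frame)   # flow from previous frame
            last = warp(last, flow)                      # cheap warp (on device)
        out.append(last)
    return out
```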

Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness

no code implementations 17 Feb 2020 Huijie Feng, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, Yang Ning

In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart when training the base classifier.
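
For context, a randomized smoothed classifier predicts the class most probable under Gaussian input noise, and its certified radius grows with how consistently the base classifier votes under that noise. The sketch below pairs cross-entropy on noisy inputs with a consistency term as one plausible form of such a regularizer; the exact objective in the paper may differ.

```python
import torch
import torch.nn.functional as F

def regularized_smoothing_loss(model, x, y, sigma=0.25, n_noise=4, lam=1.0):
    """Cross-entropy on Gaussian-perturbed inputs plus a consistency regularizer
    that encourages stable (hence more certifiable) smoothed predictions.
    Illustrative form only; the paper's regularizer may differ."""
    noisy_logits = []
    for _ in range(n_noise):
        noise = torch.randn_like(x) * sigma
        noisy_logits.append(model(x + noise))
    logits = torch.stack(noisy_logits)                  # (n_noise, N, classes)

    # Accuracy term: classify correctly under noise (base of the smoothed classifier).
    ce = F.cross_entropy(logits.mean(dim=0), y)

    # Robustness term: penalize disagreement across noise draws (KL to the mean).
    mean_prob = logits.softmax(dim=-1).mean(dim=0)
    consistency = sum(
        F.kl_div(l.log_softmax(dim=-1), mean_prob, reduction="batchmean")
        for l in logits
    ) / n_noise

    return ce + lam * consistency
```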

Conditional Transferring Features: Scaling GANs to Thousands of Classes with 30% Less High-quality Data for Training

no code implementations 25 Sep 2019 Chunpeng Wu, Wei Wen, Yiran Chen, Hai Li

As such, training our GAN architecture requires far fewer high-quality images, together with a small number of additional low-quality images.

Generative Adversarial Network Image Generation

MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

no code implementations 27 May 2017 Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen

Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones for resisting attacks.
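
One way to act on that observation is to train against several perturbation strengths simultaneously. The sketch below mixes FGSM adversarial examples at multiple epsilon values into a single loss; FGSM and the uniform weighting are simplifying assumptions, not necessarily the paper's exact training scheme.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM adversarial example at perturbation strength eps
    (assumes inputs normalized to [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

def multi_strength_adv_loss(model, x, y, strengths=(2/255, 4/255, 8/255)):
    """Combine the clean loss with adversarial losses at several perturbation
    levels, so training covers the 'working zone' of each strength."""
    loss = F.cross_entropy(model(x), y)
    for eps in strengths:
        x_adv = fgsm(model, x, y, eps)
        loss = loss + F.cross_entropy(model(x_adv), y)
    return loss / (1 + len(strengths))
```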

Coordinating Filters for Faster Deep Neural Networks

5 code implementations ICCV 2017 Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li

Moreover, Force Regularization better initializes low-rank DNNs, so that fine-tuning converges faster toward higher accuracy.
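
Force Regularization coordinates filters during training so that layer weights end up closer to low rank; the typical next step, sketched below for a fully connected layer, is to factor the weights with a truncated SVD and fine-tune the two smaller layers. The rank choice and the use of `nn.Linear` (rather than the paper's convolutional filters) are simplifications for illustration.

```python
import torch
import torch.nn as nn

def low_rank_factorize(linear: nn.Linear, rank: int) -> nn.Sequential:
    """Replace a Linear layer with two smaller ones via truncated SVD.
    After force-regularized training, most energy sits in the top singular
    values, so a small rank loses little accuracy before fine-tuning."""
    W = linear.weight.data                        # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                  # (out, rank)
    V_r = Vh[:rank, :]                            # (rank, in)

    first = nn.Linear(linear.in_features, rank, bias=False)
    second = nn.Linear(rank, linear.out_features, bias=linear.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if linear.bias is not None:
        second.bias.data.copy_(linear.bias.data)
    return nn.Sequential(first, second)
```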

Learning Structured Sparsity in Deep Neural Networks

3 code implementations NeurIPS 2016 Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li

SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNNs to efficiently accelerate their evaluation.
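
SSL is built on group Lasso regularization over structures such as filters and channels. A minimal sketch of the per-filter variant is below; the grouping granularity and the weighting `lam` are simplified relative to the paper.

```python
import torch
import torch.nn as nn

def filter_group_lasso(conv: nn.Conv2d, lam: float = 1e-4) -> torch.Tensor:
    """Group Lasso over the output filters of a conv layer: summing each
    filter's L2 norm drives whole filters toward exactly zero, yielding
    hardware-friendly structured sparsity."""
    # Weight shape: (out_channels, in_channels, kH, kW); one group per filter.
    per_filter_norm = conv.weight.flatten(start_dim=1).norm(dim=1)
    return lam * per_filter_norm.sum()

# Usage: add the penalty for every conv layer to the task loss, e.g.
# loss = task_loss + sum(filter_group_lasso(m) for m in model.modules()
#                        if isinstance(m, nn.Conv2d))
```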
