no code implementations • 12 Jul 2022 • Hao Liu, Bin Chen, Bo Wang, Chunpeng Wu, Feng Dai, Peng Wu
To address the coupling problem, we propose a Cycle Self-Training (CST) framework for SSOD, which consists of two teachers, T1 and T2, and two students, S1 and S2.
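The excerpt names two teacher-student pairs but not the update rule. As an illustrative assumption (not the paper's exact scheme), teacher weights in SSOD frameworks are typically maintained as an exponential moving average (EMA) of a student's weights:

```python
def ema_update(teacher, student, momentum=0.999):
    """EMA teacher update, as commonly used in teacher-student SSOD.

    In a cycle scheme, T1 might be updated from S1 while supervising
    S2, and vice versa; that pairing is an illustrative assumption
    here, not a detail taken from the paper.
    """
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

# Toy scalar parameters: the teacher drifts slowly toward the student.
teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student)  # w becomes 0.999
```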
no code implementations • 24 May 2020 • Ang Li, Chunpeng Wu, Yiran Chen, Bin Ni
Instead of performing stylization frame by frame, only key frames in the original video are processed by a pre-trained deep neural network (DNN) on edge servers, while the remaining intermediate stylized frames are generated by our designed optical-flow-based frame interpolation algorithm on mobile phones.
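The core primitive behind optical-flow-based frame interpolation is warping a stylized key frame along a dense flow field. A minimal sketch (nearest-neighbor backward warping on a grayscale frame; the paper's actual algorithm is not specified in this excerpt):

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp a 2-D (grayscale) frame by a dense flow field.

    flow[y, x] = (dy, dx): each output pixel samples the source frame
    at (y + dy, x + dx), rounded to the nearest pixel and clipped to
    the image bounds. Real interpolators use bilinear sampling and
    occlusion handling; this is deliberately simplified.
    """
    h, w = frame.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# A constant flow of (dy=1, dx=0) shifts image content up by one row.
frame = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
shifted = warp_frame(frame, flow)
```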
no code implementations • 17 Feb 2020 • Huijie Feng, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, Yang Ning
In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart when training the base classifier.
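For context, the "smoothed counterpart" refers to standard randomized smoothing: the smoothed classifier predicts the class the base classifier returns most often under Gaussian input noise. A Monte-Carlo sketch of that smoothed prediction (the paper's specific regularizer is not shown in this excerpt and is not reproduced here):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=200, rng=None):
    """Monte-Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + N(0, sigma^2 I)) = c].

    `base_classifier` maps an input array to a class label; `sigma`
    and `n` are illustrative defaults, not values from the paper.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    counts = {}
    for _ in range(n):
        c = base_classifier(x + rng.normal(0.0, sigma, size=x.shape))
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get)
```

Training the base classifier with a regularizer then amounts to shaping f so that g is simultaneously accurate and robust, as the abstract describes.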
no code implementations • 25 Sep 2019 • Chunpeng Wu, Wei Wen, Yiran Chen, Hai Li
As such, training our GAN architecture requires far fewer high-quality images, plus a small number of additional low-quality images.
no code implementations • 6 Dec 2018 • Jingyang Zhang, Hsin-Pai Cheng, Chunpeng Wu, Hai Li, Yiran Chen
We justify, both intuitively and empirically, our method's effectiveness in reducing the search space.
1 code implementation • 21 May 2018 • Wei Wen, Yandan Wang, Feng Yan, Cong Xu, Chunpeng Wu, Yiran Chen, Hai Li
It becomes an open question whether escaping sharp minima can improve the generalization.
no code implementations • 27 May 2017 • Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen
Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones in resisting the attack.
1 code implementation • NeurIPS 2017 • Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients.
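TernGrad's key idea is quantizing each gradient to three levels so that the quantized gradient remains an unbiased estimator of the original. A minimal NumPy sketch of that stochastic ternarization (simplified to a single scaling factor s = max|g|; the paper's full method includes refinements such as gradient clipping):

```python
import numpy as np

def ternarize_gradient(grad, rng=None):
    """Stochastically quantize a gradient to {-s, 0, +s}, s = max|g|.

    Each component becomes s * sign(g_i) with probability |g_i| / s,
    and 0 otherwise, so E[q_i] = g_i (unbiased). Only the ternary
    values and s need to be communicated between workers.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    s = np.max(np.abs(grad))
    if s == 0:
        return np.zeros_like(grad)
    keep_prob = np.abs(grad) / s
    mask = rng.random(grad.shape) < keep_prob
    return s * np.sign(grad) * mask

g = np.array([0.3, -0.7, 0.0, 1.4])
q = ternarize_gradient(g)  # entries are in {-1.4, 0.0, 1.4}
```

Unbiasedness is what makes the convergence analysis tractable: the quantizer only adds variance, which the bounded-gradient assumption keeps under control.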
5 code implementations • ICCV 2017 • Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
Moreover, Force Regularization better initializes the low-rank DNNs such that the fine-tuning can converge faster toward higher accuracy.
no code implementations • CVPR 2017 • Chunpeng Wu, Wei Wen, Tariq Afzal, Yongmei Zhang, Yiran Chen, Hai Li
Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet.
3 code implementations • NeurIPS 2016 • Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
SSL can: (1) learn a compact structure from a larger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of the DNN to efficiently accelerate its evaluation.
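SSL achieves structured sparsity via a Group-Lasso penalty over structured groups of weights (filters, channels, layers). A sketch of the corresponding proximal shrinkage step, treating each row of a weight matrix as one group (the row-wise grouping is an illustrative choice, not the paper's only grouping):

```python
import numpy as np

def prox_group_lasso(W, lam):
    """One proximal step for the Group-Lasso penalty lam * sum_g ||W_g||_2.

    Each row of W is a group: rows are shrunk toward zero, and rows
    whose L2 norm falls below lam are zeroed out entirely, so the
    whole structure (e.g. a filter) can be pruned from the network.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

W = np.array([[3.0, 4.0],     # norm 5: survives, shrunk by factor 0.9
              [0.1, 0.1]])    # norm ~0.14 < lam: pruned to zero
W_sparse = prox_group_lasso(W, lam=0.5)
```

Because entire groups become exactly zero, the resulting sparsity maps onto dense hardware primitives (smaller matrix shapes) rather than scattered zeros, which is what makes the acceleration hardware-friendly.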
no code implementations • 3 Apr 2016 • Wei Wen, Chunpeng Wu, Yandan Wang, Kent Nixon, Qing Wu, Mark Barnell, Hai Li, Yiran Chen
IBM TrueNorth chip uses digital spikes to perform neuromorphic computing and achieves ultrahigh execution parallelism and power efficiency.