no code implementations • 13 Sep 2019 • Qing Yang, Jiachen Mao, Zuoguan Wang, Hai Li
In addition to conventional compression techniques, e.g., weight pruning and quantization, removing unimportant activations can reduce both the amount of data communication and the computation cost.
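There is no code release for this paper, but the core idea of dropping low-magnitude activations can be illustrated with a minimal PyTorch sketch. Everything here is an assumption for illustration (the `prune_activations` name, the per-sample top-k magnitude criterion, and the `keep_ratio` parameter are hypothetical, not the paper's method):

```python
import torch
import torch.nn.functional as F

def prune_activations(x: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude activations, keeping only the top
    `keep_ratio` fraction per sample. Zeroed values need not be communicated
    or multiplied downstream, which is where the savings come from."""
    flat = x.abs().flatten(start_dim=1)               # (batch, features)
    k = max(1, int(keep_ratio * flat.size(1)))        # number of values to keep
    thresh = flat.topk(k, dim=1).values[:, -1, None]  # per-sample magnitude cutoff
    mask = (flat >= thresh).view_as(x).to(x.dtype)    # 1 = keep, 0 = drop
    return x * mask

# Usage: sparsify ReLU outputs before they reach the next layer.
h = F.relu(torch.randn(8, 256))
h_sparse = prune_activations(h, keep_ratio=0.3)
```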
no code implementations • 12 Sep 2019 • Chang Song, Zuoguan Wang, Hai Li
Recent studies have revealed that neural networks are vulnerable to adversarial attacks.
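As a concrete instance of such vulnerability, the classic fast gradient sign method (FGSM, Goodfellow et al., 2015) perturbs an input along the sign of the loss gradient. The sketch below shows FGSM only as background; it is not necessarily the attack studied in this particular paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """FGSM: move the input in the direction that increases the loss,
    with the perturbation bounded by eps in the L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # A tiny, sign-only step is often enough to flip the prediction.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```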
no code implementations • 19 Jun 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Hai Li
With the rapid scaling up of deep neural networks (DNNs), extensive research on model compression techniques such as weight pruning has been conducted to improve deployment efficiency.
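For context, the most common baseline here is magnitude-based weight pruning: remove the weights with the smallest absolute values and fine-tune with the resulting mask held fixed. A minimal sketch (the `magnitude_prune_` helper and `sparsity` parameter are hypothetical, shown only to make the baseline concrete):

```python
import torch

def magnitude_prune_(layer: torch.nn.Linear, sparsity: float = 0.9) -> torch.Tensor:
    """In-place magnitude pruning: zero out the `sparsity` fraction of weights
    with the smallest absolute values and return the binary mask, which is
    reapplied after each fine-tuning step to keep pruned weights at zero."""
    w = layer.weight.data
    k = int(sparsity * w.numel())
    if k == 0:
        return torch.ones_like(w)
    thresh = w.abs().flatten().kthvalue(k).values  # k-th smallest magnitude
    mask = (w.abs() > thresh).to(w.dtype)
    w.mul_(mask)
    return mask

# Usage: prune a layer to ~90% sparsity, then fine-tune under the mask.
layer = torch.nn.Linear(512, 512)
mask = magnitude_prune_(layer, sparsity=0.9)
```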
no code implementations • ICLR 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Yiran Chen, Hai Li
With the rapid scaling up of deep neural networks (DNNs), extensive research on model compression techniques such as weight pruning has been conducted for efficient deployment.
no code implementations • CVPR 2013 • Yue Wu, Zuoguan Wang, Qiang Ji
To handle pose variations, the frontal face shape prior model is incorporated into a 3-way restricted Boltzmann machine (RBM) that can capture the relationship between frontal and non-frontal face shapes.
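A 3-way RBM uses hidden units to gate multiplicative interactions between two visible layers, here the frontal and non-frontal shapes. The sketch below shows the conditional hidden activation of a generic factored 3-way RBM (in the style of Memisevic and Hinton); the dimensions, factorization, and variable names are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Factored 3-way RBM: the three-way weight tensor is approximated by
# per-factor projection matrices Wx, Wy, Wh.
n_x, n_y, n_h, n_f = 20, 20, 50, 30   # frontal dims, non-frontal dims, hidden, factors
Wx = rng.normal(0, 0.01, (n_x, n_f))
Wy = rng.normal(0, 0.01, (n_y, n_f))
Wh = rng.normal(0, 0.01, (n_h, n_f))
b_h = np.zeros(n_h)

def hidden_given_shapes(x, y):
    """P(h=1 | x, y): hidden units gate the multiplicative interaction
    between a frontal shape x and a non-frontal shape y."""
    factor = (x @ Wx) * (y @ Wy)                       # (n_f,) factor activities
    return 1.0 / (1.0 + np.exp(-(factor @ Wh.T + b_h)))

x = rng.normal(size=n_x)   # e.g., stacked frontal landmark coordinates
y = rng.normal(size=n_y)   # corresponding non-frontal landmarks
p_h = hidden_given_shapes(x, y)
```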
no code implementations • NeurIPS 2012 • Zuoguan Wang, Siwei Lyu, Gerwin Schalk, Qiang Ji
In this work, we describe a new scheme for parametric learning in which the target variables $\mathbf{y}$ are modeled with a prior model $p(\mathbf{y})$, and the relation between data and target variables is estimated from $p(\mathbf{y})$ together with a set of uncorresponded training data $\mathbf{x}$.
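One way to make this concrete: on inputs whose targets are unknown, train the predictor so that its outputs are plausible under the prior $p(\mathbf{y})$. The sketch below assumes, purely for illustration, a Gaussian prior and a linear regressor; the paper's prior model and estimation procedure are more general:

```python
import torch

# Minimal sketch of learning with a target prior: on "uncorresponded" inputs
# (inputs with no paired targets), push the model's predictions to have high
# likelihood under a known prior p(y) -- here a standard Gaussian, by assumption.
model = torch.nn.Linear(16, 4)                 # hypothetical regressor f_theta
prior = torch.distributions.MultivariateNormal(
    loc=torch.zeros(4), covariance_matrix=torch.eye(4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_uncorr = torch.randn(128, 16)                # data with no paired targets
for _ in range(100):
    opt.zero_grad()
    y_hat = model(x_uncorr)
    loss = -prior.log_prob(y_hat).mean()       # make predictions plausible under p(y)
    loss.backward()
    opt.step()
```

In practice this prior-matching term would be combined with an ordinary supervised loss on whatever corresponded pairs are available.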