no code implementations • 27 Jun 2021 • Yang-tian Sun, Hao-Zhi Huang, Xuan Wang, Yu-Kun Lai, Wei Liu, Lin Gao
Moreover, we introduce a concise temporal loss in the training stage to suppress the detail flickering that becomes more visible due to the high-quality dynamic details generated by our method.
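The excerpt does not give the exact form of this temporal loss; a common minimal formulation penalizes the difference between consecutive output frames, sketched here with numpy (the paper's loss may additionally warp frames by optical flow before comparing):

```python
import numpy as np

def temporal_loss(frame_t, frame_tm1):
    """Minimal temporal-consistency penalty: mean absolute difference
    between consecutive output frames (hypothetical simplification)."""
    return np.mean(np.abs(frame_t - frame_tm1))
```

In practice this term is added to the main reconstruction objective with a small weight, trading a little per-frame sharpness for temporal stability.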
no code implementations • 30 Nov 2020 • Risheng Huang, Li Shen, Xuan Wang, Cheng Lin, Hao-Zhi Huang
This paper proposes an adaptive compact attention model for few-shot video-to-video translation.
no code implementations • 3 Jul 2020 • Meng Cao, Hao-Zhi Huang, Hao Wang, Xuan Wang, Li Shen, Sheng Wang, Linchao Bao, Zhifeng Li, Jiebo Luo
Compared with the state-of-the-art facial image editing methods, our framework generates video portraits that are more photo-realistic and temporally smooth.
no code implementations • 16 Jun 2020 • Jie An, Tao Li, Hao-Zhi Huang, Li Shen, Xuan Wang, Yongyi Tang, Jinwen Ma, Wei Liu, Jiebo Luo
Extracting effective deep features to represent content and style information is the key to universal style transfer.
no code implementations • 21 May 2020 • Yucong Shen, Li Shen, Hao-Zhi Huang, Xuan Wang, Wei Liu
Recent advances in deep neural networks (DNNs) have led to a tremendous growth in network parameters, making the deployment of DNNs on platforms with limited resources extremely difficult.
no code implementations • 29 Apr 2020 • Congliang Chen, Li Shen, Hao-Zhi Huang, Wei Liu
In this paper, we present a distributed variant of an adaptive stochastic gradient method for training deep neural networks in the parameter-server model.
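The excerpt does not specify the update rule; as an illustrative sketch (not the paper's method), a parameter-server step with an adaptive, Adam-style update might average worker gradients before applying moment-based scaling:

```python
import numpy as np

def server_adam_step(param, worker_grads, state,
                     lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Hypothetical parameter-server step: average gradients from all
    workers, then apply an Adam-style adaptive update on the server."""
    g = np.mean(worker_grads, axis=0)        # aggregate worker gradients
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)
```

The distributed question such methods must answer is how the adaptive moments interact with gradient averaging and communication delay; this sketch ignores staleness entirely.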
no code implementations • 19 Nov 2019 • Yingru Liu, Xuewen Yang, Dongliang Xie, Xin Wang, Li Shen, Hao-Zhi Huang, Niranjan Balasubramanian
In this paper, we propose a novel deep learning model called Task Adaptive Activation Network (TAAN) that can automatically learn the optimal network architecture for MTL.
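The excerpt does not define TAAN's parameterization; one hypothetical way a task-adaptive activation can be realized is as a learnable mixture of basis nonlinearities, with per-task mixing weights (the names and basis set below are illustrative assumptions, not the paper's design):

```python
import numpy as np

def adaptive_activation(x, alphas):
    """Illustrative task-adaptive activation: a softmax-weighted mix of
    basis nonlinearities, where `alphas` would be learned per task."""
    bases = [np.maximum(x, 0.0), np.tanh(x), x]   # ReLU, tanh, identity
    w = np.exp(alphas) / np.sum(np.exp(alphas))   # softmax mixing weights
    return sum(wi * b for wi, b in zip(w, bases))
```

Under this kind of scheme, tasks that benefit from different nonlinear behavior can learn different `alphas` while sharing the rest of the network.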
no code implementations • 12 Aug 2019 • Kun Cheng, Hao-Zhi Huang, Chun Yuan, Lingyiqing Zhou, Wei Liu
Specifically, we transfer the motion of one person in a target video to another person in a source video, while preserving the appearance of the source person.
no code implementations • 6 May 2019 • Sen-Zhe Xu, Hao-Zhi Huang, Shi-Min Hu, Wei Liu
Based on the FaceShapeGene, a novel part-wise face image editing system is developed, which contains a shape-remix network and a conditional label-to-face transformer.
1 code implementation • 5 Sep 2018 • Hao-Zhi Huang, Senzhe Xu, Junxiong Cai, Wei Liu, Shi-Min Hu
Since existing video datasets with ground-truth foreground masks and optical flows are not sufficiently large, we propose a simple yet efficient method to build a synthetic dataset that supports supervised training of the proposed adversarial network.
no code implementations • ECCV 2018 • Minjun Li, Hao-Zhi Huang, Lin Ma, Wei Liu, Tong Zhang, Yu-Gang Jiang
Recent studies on unsupervised image-to-image translation have made remarkable progress by training a pair of generative adversarial networks with a cycle-consistent loss.
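The cycle-consistent loss mentioned here requires that translating an image to the other domain and back reconstructs the original; a minimal sketch (with `G` and `F` standing in for the two generators) is:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """CycleGAN-style cycle loss: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1,
    where G maps domain X -> Y and F maps Y -> X."""
    return (np.mean(np.abs(F(G(x)) - x)) +
            np.mean(np.abs(G(F(y)) - y)))
```

This term is combined with the adversarial losses of the two GANs so that translations are both realistic and invertible.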
7 code implementations • CVPR 2019 • Song-Hai Zhang, Rui-Long Li, Xin Dong, Paul L. Rosin, Zixi Cai, Han Xi, Dingcheng Yang, Hao-Zhi Huang, Shi-Min Hu
We demonstrate that our pose-based framework can achieve better accuracy than the state-of-the-art detection-based approach on the human instance segmentation problem, and moreover handles occlusion better.
Ranked #1 on Human Instance Segmentation on OCHuman
no code implementations • CVPR 2017 • Hao-Zhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Wenhao Jiang, Xiaolong Zhu, Zhifeng Li, Wei Liu
More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames.
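A hybrid loss of this kind is typically a weighted sum of a content term, a style term (often via Gram-matrix statistics), and a temporal term; the sketch below illustrates that structure under assumed weights and feature shapes, not the paper's exact formulation:

```python
import numpy as np

def gram(feat):
    """Gram matrix of a (C, H*W) feature map, capturing style statistics."""
    return feat @ feat.T / feat.shape[1]

def hybrid_loss(feat_out, feat_content, feat_style, out_t, out_tm1,
                w_c=1.0, w_s=10.0, w_t=1.0):
    """Illustrative hybrid objective: content fidelity + style match
    + temporal consistency, with hypothetical weights w_c, w_s, w_t."""
    content = np.mean((feat_out - feat_content) ** 2)
    style = np.mean((gram(feat_out) - gram(feat_style)) ** 2)
    temporal = np.mean(np.abs(out_t - out_tm1))
    return w_c * content + w_s * style + w_t * temporal
```

Balancing the three weights is the practical knob: a larger temporal weight reduces flicker at the cost of weaker stylization per frame.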