no code implementations • ECCV 2020 • Wenyu Sun, Chen Tang, Weigui Li, Zhuqing Yuan, Huazhong Yang, Yongpan Liu
This paper proposes a deep video compression method to simultaneously encode multiple frames with Frame-Conv3D and differential modulation.
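A 3D convolution over a stack of frames mixes spatial and temporal information in a single operation, which is the basic idea behind jointly encoding multiple frames. A minimal numpy sketch of a "valid" 3D convolution (shapes and the function name are illustrative, not the paper's Frame-Conv3D architecture):

```python
import numpy as np

def conv3d_valid(frames, kernel):
    """Naive 'valid' 3D convolution over a (T, H, W) frame stack.

    frames: (T, H, W) array of stacked grayscale frames.
    kernel: (kt, kh, kw) filter mixing the temporal and spatial axes.
    Returns a (T-kt+1, H-kh+1, W-kw+1) feature map.
    """
    T, H, W = frames.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(frames[t:t+kt, i:i+kh, j:j+kw] * kernel)
    return out

# Eight 16x16 frames convolved with a 3x3x3 averaging kernel.
frames = np.random.rand(8, 16, 16)
feat = conv3d_valid(frames, np.ones((3, 3, 3)) / 27.0)
print(feat.shape)  # (6, 14, 14)
```

In practice the temporal kernel depth lets one filter see several consecutive frames at once, so inter-frame redundancy can be compressed without explicit motion estimation.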
no code implementations • 23 Nov 2022 • Guodong Yin, Mufeng Zhou, Yiming Chen, Wenjun Tang, Zekun Yang, Mingyen Lee, Xirui Du, Jinshan Yue, Jiaxin Liu, Huazhong Yang, Yongpan Liu, Xueqing Li
In the von Neumann architecture, it is difficult for data-intensive tasks to achieve both high performance and power efficiency due to the memory-wall bottleneck.
no code implementations • 31 Oct 2022 • Ruoyang Liu, Chenhan Wei, Yixiong Yang, Wenxun Wang, Huazhong Yang, Yongpan Liu
Data quantization is an effective method to accelerate neural network training and reduce power consumption.
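The core of data quantization is mapping floating-point tensors onto a low-bit integer grid so arithmetic and memory traffic shrink. A minimal sketch of symmetric int8 quantization (not the paper's specific scheme):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric uniform quantization of a float tensor to int8.

    Returns the quantized values and the scale needed to dequantize.
    """
    max_abs = np.max(np.abs(x))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.2, 3.3, 0.0], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
print(np.max(np.abs(x - x_hat)))  # rounding error is bounded by scale/2
```

During training, such a quantize/dequantize pair is typically inserted in the forward pass while gradients flow through as if it were the identity (the straight-through estimator).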
no code implementations • 5 Sep 2022 • Xiaoyu Feng, Heming Du, Yueqi Duan, Yongpan Liu, Hehe Fan
Effectively preserving and encoding structural features of objects in irregular and sparse LiDAR points is a key challenge for 3D object detection on point clouds.
no code implementations • 2 Feb 2021 • Guodong Yin, Yi Cai, Juejian Wu, Zhengyang Duan, Zhenhua Zhu, Yongpan Liu, Yu Wang, Huazhong Yang, Xueqing Li
Compute-in-memory (CiM) is a promising approach to alleviating the memory wall problem for domain-specific applications.
no code implementations • 21 Oct 2020 • Chen Tang, Wenyu Sun, Zhuqing Yuan, Yongpan Liu
To accelerate deep CNN models, this paper proposes a novel spatially adaptive framework that can dynamically generate pixel-wise sparsity according to the input image.
no code implementations • 7 Jun 2020 • Xiaoyu Feng, Zhuqing Yuan, Guijin Wang, Yongpan Liu
For example, the model is first pruned on the cloud and then transferred from the cloud to the end device via unsupervised domain adaptation (UDA).
2 code implementations • 23 Mar 2019 • Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Zhengang Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, Makan Fardad, Xue Lin, Yongpan Liu, Yanzhi Wang
A recent work developed a systematic framework for DNN weight pruning using the advanced optimization technique ADMM (Alternating Direction Method of Multipliers), achieving state-of-the-art weight-pruning results.
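ADMM-based pruning alternates between an ordinary gradient step on the loss plus a quadratic penalty, and a Euclidean projection onto the sparsity constraint set (keep the k largest-magnitude weights). A minimal single-iteration sketch under these assumptions (toy values, not the framework's full training loop):

```python
import numpy as np

def project_sparse(w, k):
    """Euclidean projection onto {w : nnz(w) <= k}: keep the k
    largest-magnitude entries, zero the rest (ADMM's Z-update)."""
    z = np.zeros_like(w)
    idx = np.argsort(np.abs(w).ravel())[-k:]
    z.ravel()[idx] = w.ravel()[idx]
    return z

def admm_prune_step(w, z, u, grad, rho, lr, k):
    """One illustrative ADMM iteration for weight pruning.
    W-update: gradient step on loss + (rho/2)||w - z + u||^2
    Z-update: projection onto the sparsity constraint
    U-update: dual variable ascent."""
    w = w - lr * (grad + rho * (w - z + u))
    z = project_sparse(w + u, k)
    u = u + w - z
    return w, z, u

w = np.array([1.0, -0.1, 0.05, 2.0])
z = project_sparse(w, k=2)
u = np.zeros_like(w)
w, z, u = admm_prune_step(w, z, u, grad=np.zeros_like(w),
                          rho=1.0, lr=0.1, k=2)
print(np.count_nonzero(z))  # 2
```

Repeating these steps drives the dense weights toward the sparse auxiliary variable, after which the surviving weights are hard-pruned and fine-tuned.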