no code implementations • 22 Jan 2024 • Yuhao Ji, Chao Fang, Zhongfeng Wang
Existing binary Transformers are promising for edge deployment due to their compact model size, low computational complexity, and competitive inference accuracy.
no code implementations • 22 Sep 2023 • Chao Fang, Wei Sun, Aojun Zhou, Zhongfeng Wang
At the algorithm level, a bidirectional weight pruning method, dubbed BDWP, is proposed to leverage the N:M sparsity of weights during both forward and backward passes of DNN training, which can significantly reduce the computational cost while maintaining model accuracy.
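The N:M weight sparsity mentioned above keeps at most N nonzero weights in every group of M consecutive weights (e.g., 2:4). The following is a minimal magnitude-based sketch of that pattern, not the paper's BDWP method; the function name and group layout are illustrative assumptions.

```python
import numpy as np

def nm_prune(weights, n=2, m=4):
    """Keep the n largest-magnitude weights in every group of m
    consecutive weights, zeroing the rest (e.g., 2:4 sparsity).
    Assumes weights.size is a multiple of m. Illustrative only."""
    flat = weights.reshape(-1, m)
    # Indices of the (m - n) smallest-magnitude entries per group.
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (flat * mask).reshape(weights.shape)

w = np.arange(1, 9, dtype=float)  # two groups of 4: [1..4], [5..8]
print(nm_prune(w))  # each group keeps only its two largest entries
```

The paper's contribution is applying such structured sparsity in both forward and backward passes during training; this sketch only shows the static pruning pattern itself.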
no code implementations • 15 Sep 2023 • Longwei Huang, Chao Fang, Qiong Li, Jun Lin, Zhongfeng Wang
However, many edge devices struggle to sustain high inference throughput across quantized DNNs with varying quantization levels, and they lack floating-point (FP) support for on-device learning, which prevents them from improving model accuracy while preserving data privacy.
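The "varying quantization levels" the sentence refers to can be illustrated with a simple symmetric uniform quantizer evaluated at several bit widths; the function below is a generic textbook sketch, not the paper's hardware scheme.

```python
import numpy as np

def quantize(x, bits=8):
    """Symmetric uniform quantization of a tensor to signed integer
    codes, returning the codes and the scale used to dequantize."""
    qmax = 2 ** (bits - 1) - 1  # e.g., 127 for 8-bit signed
    amax = float(np.max(np.abs(x)))
    scale = amax / qmax if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
for bits in (8, 4, 2):
    q, s = quantize(x, bits)
    err = float(np.max(np.abs(q * s - x)))
    print(bits, q.tolist(), err)  # lower bit widths incur larger error
```

Supporting several such bit widths efficiently on one accelerator is exactly the throughput challenge the abstract describes.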
no code implementations • 1 Aug 2023 • Zhaoming Hu, Ruikang Zhong, Chao Fang, Yuanwei Liu
As long-term decision processes, the optimization problems based on the independent and coupled phase-shift models of Caching-at-STARS contain both continuous and discrete decision variables, and are well suited to solution by deep reinforcement learning (DRL) algorithms.
no code implementations • 23 Mar 2023 • Trung Pham, Mehran Maghoumi, Wanli Jiang, Bala Siva Sashank Jujjavarapu, Mehdi Sajjadi, Xin Liu, Hsuan-Chu Lin, Bor-Jeng Chen, Giang Truong, Chao Fang, Junghyun Kwon, Minwoo Park
Achieving robust and real-time 3D perception is fundamental for autonomous vehicles.
1 code implementation • 28 Oct 2022 • Jiayi Tian, Chao Fang, Haonan Wang, Zhongfeng Wang
Pre-trained BERT models have achieved impressive accuracy on natural language processing (NLP) tasks.
no code implementations • 21 Oct 2022 • Mengdi Xu, Peide Huang, Yaru Niu, Visak Kumar, JieLin Qiu, Chao Fang, Kuan-Hui Lee, Xuewei Qi, Henry Lam, Bo Li, Ding Zhao
One key challenge for multi-task Reinforcement learning (RL) in practice is the absence of task indicators.
no code implementations • 12 Aug 2022 • Chao Fang, Aojun Zhou, Zhongfeng Wang
(1) From the algorithm perspective, we propose a sparsity inheritance mechanism along with an inherited dynamic pruning (IDP) method to rapidly obtain a series of N:M sparse candidate Transformers.
no code implementations • 4 Jun 2021 • Chao Fang, Charitha Madapatha, Behrooz Makki, Tommy Svensson
Integrated access and backhaul (IAB) networks have the potential to provide high data rates in both the access and backhaul links by sharing the same spectrum.
no code implementations • 3 Aug 2020 • Kuan-Hui Lee, Matthew Kliemann, Adrien Gaidon, Jie Li, Chao Fang, Sudeep Pillai, Wolfram Burgard
In autonomous driving, accurately estimating the state of surrounding obstacles is critical for safe and robust path planning.
no code implementations • 13 Jan 2020 • Chao Fang, Behrooz Makki, Jingya Li, Tommy Svensson
Considering joint transmissions and a BS silence strategy, we propose hybrid precoding algorithms that minimize the total power consumption of the base stations (BSs) for both fully- and partially-connected hybrid precoding (FHP and PHP, respectively) schemes, in single-carrier and orthogonal frequency-division multiplexing systems.
no code implementations • CVPR 2020 • Rui Hou, Jie Li, Arjun Bhargava, Allan Raventos, Vitor Guizilini, Chao Fang, Jerome Lynch, Adrien Gaidon
Panoptic segmentation is a complex full scene parsing task requiring simultaneous instance and semantic segmentation at high resolution.
no code implementations • 6 Sep 2019 • Jinming Lu, Siyuan Lu, Zhisheng Wang, Chao Fang, Jun Lin, Zhongfeng Wang, Li Du
With the increasing size of Deep Neural Network (DNN) models, the high memory space requirements and computational complexity have become an obstacle for efficient DNN implementations.
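The memory pressure described above can be made concrete with a back-of-the-envelope storage estimate; the layer sizes below are hypothetical, chosen only to show how the footprint scales with parameter bit width.

```python
def model_bytes(param_counts, bits_per_param):
    """Total storage in bytes for a model, given per-layer parameter
    counts and a uniform bit width per parameter."""
    return sum(param_counts) * bits_per_param // 8

# Hypothetical layers: two 3x3 conv layers and one fully-connected layer.
layers = [3 * 3 * 64 * 64, 3 * 3 * 128 * 128, 4096 * 1000]
fp32 = model_bytes(layers, 32)
int8 = model_bytes(layers, 8)
print(fp32, int8, fp32 // int8)  # 8-bit storage is 4x smaller than fp32
```

Reductions like this (and larger ones at lower bit widths) are what make quantization and related compression techniques central to efficient DNN implementations.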
no code implementations • 12 Sep 2017 • Chao Fang, Yi Shang, Dong Xu
Results: Here, a very deep neural network, the deep inception-inside-inception network (Deep3I), is proposed for protein secondary structure prediction, and a software tool based on this network is implemented.