Ambiguity Adaptive Inference and Single-shot based Channel Pruning for Satellite Processing Environments

29 Sep 2021 · Minsu Jeon, Kyungno Joo, Changha Lee, Taewoo Kim, SeongHwan Kim, Chan-Hyun Youn

In restricted computing environments such as satellite on-board systems, running DL models at high speed is difficult because the available power budget is small relative to the models' computational complexity. In particular, the latest GPUs offer high computing performance but also consume considerable power, so restricted environments such as satellite systems generally adopt reconfigurable resources like FPGAs or low-power embedded GPUs, which provide a favorable ratio of computing capability to power consumption. To overcome the problems of models too large to fit on reconfigurable resources and of limited high-speed processing in such constrained environments, we propose a reconfigurable DL accelerating system in which the computational complexity and size of the DL model are compressed by pruning so that the model can be adapted to FPGA or low-power GPU resources. Accordingly, this paper mainly addresses two contributions: an ambiguity-adaptive inference model that enhances overall accuracy directly at the inference step for mission-critical tasks, and a new single-shot channel pruning method that accelerates DL inference by compressing the model as much as possible while maintaining accuracy under constrained accelerator resources. In experimental evaluation on a satellite image analysis model as an example application, our method achieves up to 8.53× compression while preserving accuracy, and we verify that it can deploy and accelerate a computationally heavy DL model on FPGA/GPU resources.
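To illustrate the general idea of single-shot channel pruning referenced in the abstract, the following is a minimal sketch that ranks the output channels of a convolutional weight tensor by their L1 norm and keeps only the top fraction. The L1-norm criterion, the `prune_channels` helper, and the keep ratio are illustrative assumptions; the paper's actual pruning criterion may differ.

```python
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """Rank output channels of a conv weight tensor with shape
    (out_ch, in_ch, kH, kW) by L1 norm and keep the top fraction.
    Hypothetical helper for illustration, not the paper's exact method."""
    out_ch = weight.shape[0]
    n_keep = max(1, int(round(out_ch * keep_ratio)))
    # L1 norm of each output channel's filter as an importance score
    scores = np.abs(weight).reshape(out_ch, -1).sum(axis=1)
    # Indices of the most important channels, kept in original order
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return weight[keep], keep

# Example: prune a random 8-channel conv layer down to 25% of its channels
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.25)
print(pruned.shape)  # (2, 3, 3, 3)
```

Because the channels are dropped in a single shot (one ranking pass, no iterative retrain-and-prune cycles), the compressed layer can be sized up front to fit the target FPGA or embedded GPU budget.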
