Search Results for author: Haitong Huang

Found 6 papers, 1 paper with code

Cross-Layer Optimization for Fault-Tolerant Deep Learning

no code implementations • 21 Dec 2023 • Qing Zhang, Cheng Liu, Bo Liu, Haitong Huang, Ying Wang, Huawei Li, Xiaowei Li

Fault-tolerant deep learning accelerators are the basis for highly reliable deep learning processing and are critical to deploying deep learning in safety-critical applications such as avionics and robotics.

Bayesian Optimization, Quantization

Exploring Winograd Convolution for Cost-effective Neural Network Fault Tolerance

no code implementations • 16 Aug 2023 • Xinghua Xue, Cheng Liu, Bo Liu, Haitong Huang, Ying Wang, Tao Luo, Lei Zhang, Huawei Li, Xiaowei Li

When it is applied to fault-tolerant neural networks enhanced with fault-aware retraining and constrained activation functions, the resulting model accuracy generally shows significant improvement in the presence of various faults (a sketch of constrained activations follows this entry).

Computational Efficiency
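
The constrained activation functions mentioned above can be illustrated with a minimal sketch: clipping each activation to a fixed upper bound keeps a bit-flip-corrupted value from pushing an extreme magnitude into later layers. This is a generic illustration assuming PyTorch; the bound of 6.0 and the class name are illustrative choices, not the paper's actual configuration.

import torch
import torch.nn as nn

class ClippedReLU(nn.Module):
    """ReLU with a fixed upper bound: a soft-error-corrupted activation
    cannot exceed `bound`, which limits how far the fault propagates."""
    def __init__(self, bound: float = 6.0):  # 6.0 is an illustrative bound
        super().__init__()
        self.bound = bound

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(x, min=0.0, max=self.bound)

# A flip in a high exponent bit can turn an activation into ~3e38;
# clipping maps it back to the bound instead of corrupting later layers.
x = torch.tensor([0.5, -1.0, 3e38])
print(ClippedReLU()(x))  # tensor([0.5000, 0.0000, 6.0000])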

Deep Learning Accelerator in Loop Reliability Evaluation for Autonomous Driving

no code implementations • 20 Jun 2023 • Haitong Huang, Cheng Liu

The reliability of deep learning accelerators (DLAs) used in autonomous driving systems has a significant impact on system safety.

Autonomous Driving

MRFI: An Open Source Multi-Resolution Fault Injection Framework for Neural Network Processing

1 code implementation • 20 Jun 2023 • Haitong Huang, Cheng Liu, Bo Liu, Xinghua Xue, Huawei Li, Xiaowei Li

It enables users to modify an independent fault configuration file, rather than the neural network models themselves, for fault injection and vulnerability analysis.
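
MRFI's actual configuration schema and API are not reproduced here; the snippet below is only a generic sketch of the configuration-file approach the summary describes, with every key, value, and function name hypothetical.

import json
import random
import struct

# Hypothetical standalone fault configuration (in a config-driven tool this
# lives in a separate file, so the model code never changes between experiments).
CONFIG = json.loads('{"target_layer": "conv1", "bit_error_rate": 1e-7}')

def flip_random_bit(value: float) -> float:
    """Flip one uniformly chosen bit of a float32 value."""
    (bits,) = struct.unpack("I", struct.pack("f", value))
    bits ^= 1 << random.randrange(32)
    return struct.unpack("f", struct.pack("I", bits))[0]

def inject(activations, cfg=CONFIG):
    """Corrupt a layer's outputs according to the configuration alone."""
    p_value = cfg["bit_error_rate"] * 32  # per-value corruption probability
    return [flip_random_bit(v) if random.random() < p_value else v
            for v in activations]

# Raising the error rate in the config (not the model) changes the experiment.
print(inject([0.1, 0.2, 0.3], {"target_layer": "conv1", "bit_error_rate": 0.01}))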

Statistical Modeling of Soft Error Influence on Neural Networks

no code implementations • 12 Oct 2022 • Haitong Huang, Xinghua Xue, Cheng Liu, Ying Wang, Tao Luo, Long Cheng, Huawei Li, Xiaowei Li

Prior work mainly relies on fault simulation to analyze the influence of soft errors on NN processing (a toy fault-simulation sketch follows this entry).

Quantization
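
For context on the fault-simulation baseline the summary mentions: estimating even a single error statistic by Monte Carlo injection takes many repeated evaluations, which is the cost a statistical model can replace with closed-form estimates. A toy sketch, with all weights, inputs, and thresholds illustrative:

import random
import struct

def flip_bit(value: float, i: int) -> float:
    """Flip bit i of a float32 value."""
    (bits,) = struct.unpack("I", struct.pack("f", value))
    return struct.unpack("f", struct.pack("I", bits ^ (1 << i)))[0]

w = [0.2, -0.5, 0.1, 0.7]           # toy weights
x = [1.0, 2.0, 3.0, 4.0]            # toy inputs
golden = sum(wi * xi for wi, xi in zip(w, x))  # fault-free output

trials, large = 10_000, 0
for _ in range(trials):
    w_faulty = list(w)
    k = random.randrange(len(w))     # corrupt one random weight...
    w_faulty[k] = flip_bit(w_faulty[k], random.randrange(32))  # ...in one random bit
    if abs(sum(wi * xi for wi, xi in zip(w_faulty, x)) - golden) > 1.0:
        large += 1
print(f"P(|output error| > 1) ≈ {large / trials:.3f}")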

Winograd Convolution: A Perspective from Fault Tolerance

no code implementations • 17 Feb 2022 • Xinghua Xue, Haitong Huang, Cheng Liu, Ying Wang, Tao Luo, Lei Zhang

Winograd convolution was originally proposed to reduce computing overhead by replacing multiplications in neural network (NN) processing with additions via linear transformations.
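
To make the linear transformation concrete, the classic F(2, 3) case computes two outputs of a 3-tap filter with 4 multiplications instead of the 6 a direct computation needs. The transform matrices below are the standard F(2, 3) ones; the input and filter values are illustrative.

import numpy as np

# Standard Winograd F(2, 3) transform matrices.
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

d = np.array([1.0, 2.0, 3.0, 4.0])  # input tile (4 samples)
g = np.array([0.5, 0.25, 0.125])    # 3-tap filter

m = (G @ g) * (B_T @ d)             # the only 4 multiplications
y = A_T @ m                         # 2 outputs via additions/subtractions

# Reference: direct sliding window (no kernel flip, as in NN convolution).
ref = [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
print(y, ref)  # both ≈ [1.375, 2.25]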
