Search Results for author: Zheyu Yan

Found 15 papers, 5 papers with code

FL-NAS: Towards Fairness of NAS for Resource Constrained Devices via Large Language Models

no code implementations9 Feb 2024 Ruiyang Qin, Yuting Hu, Zheyu Yan, JinJun Xiong, Ahmed Abbasi, Yiyu Shi

Neural Architecture Search (NAS) has become the de facto tool in industry for automating the design of deep neural networks for various applications, especially those driven by mobile and edge devices with limited computing resources.

Fairness, Neural Architecture Search

U-SWIM: Universal Selective Write-Verify for Computing-in-Memory Neural Accelerators

no code implementations11 Dec 2023 Zheyu Yan, Xiaobo Sharon Hu, Yiyu Shi

In our research, we show that only a small fraction of weights need this write-verify treatment for the corresponding devices while DNN accuracy is preserved, yielding a notable programming speedup.

Compute-in-Memory based Neural Network Accelerators for Safety-Critical Systems: Worst-Case Scenarios and Protections

no code implementations11 Dec 2023 Zheyu Yan, Xiaobo Sharon Hu, Yiyu Shi

In this study, we define the problem of pinpointing the worst-case performance of CiM DNN accelerators affected by device variations.

Improving Realistic Worst-Case Performance of NVCiM DNN Accelerators through Training with Right-Censored Gaussian Noise

no code implementations29 Jul 2023 Zheyu Yan, Yifan Qin, Wujie Wen, Xiaobo Sharon Hu, Yiyu Shi

In this work, we propose to use the k-th percentile performance (KPP) to capture the realistic worst-case performance of DNN models executing on CiM accelerators.

Self-Driving Cars
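As a rough illustration of the two ideas in this abstract, the sketch below samples right-censored Gaussian weight perturbations (the right tail clipped at a threshold) and estimates the k-th percentile of accuracy over Monte Carlo trials. All parameters, the noise scale, the censoring threshold, and the toy accuracy model are illustrative stand-ins, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def right_censored_gaussian(shape, sigma=0.1, cap=0.0):
    """Gaussian noise whose right tail is censored (clipped) at `cap`.
    The threshold and scale here are illustrative, not the paper's."""
    return np.minimum(rng.normal(0.0, sigma, size=shape), cap)

def kth_percentile_performance(eval_fn, weights, k=5, trials=1000):
    """Estimate the k-th percentile of accuracy over sampled device variations.
    `eval_fn(noisy_weights) -> accuracy` stands in for DNN inference on CiM."""
    accs = [eval_fn(weights + right_censored_gaussian(weights.shape))
            for _ in range(trials)]
    return np.percentile(accs, k)  # a low percentile ~ realistic worst case

# Toy example: "accuracy" decays with the L2 norm of the weight perturbation.
w = np.zeros(64)
acc = lambda wn: 1.0 - np.linalg.norm(wn - w) / 10.0
kpp = kth_percentile_performance(acc, w, k=5)
```

The point of KPP over the absolute worst case is that a low percentile is estimable from a modest number of samples, while the true minimum over all device-variation realizations is both pessimistic and hard to certify by sampling.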

On the Viability of using LLMs for SW/HW Co-Design: An Example in Designing CiM DNN Accelerators

no code implementations12 Jun 2023 Zheyu Yan, Yifan Qin, Xiaobo Sharon Hu, Yiyu Shi

In this study, we present a novel approach that leverages Large Language Models (LLMs) to address this issue.

Negative Feedback Training: A Novel Concept to Improve Robustness of NVCIM DNN Accelerators

1 code implementation23 May 2023 Yifan Qin, Zheyu Yan, Wujie Wen, Xiaobo Sharon Hu, Yiyu Shi

However, the stochastic nature and intrinsic variations of NVM devices often result in performance degradation in DNN inference.

Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems: Can Small Device Variations Be Disastrous?

no code implementations15 Jul 2022 Zheyu Yan, Xiaobo Sharon Hu, Yiyu Shi

In this work, we formulate the problem of determining the worst-case performance of CiM DNN accelerators under the impact of device variations.

A Semi-Decoupled Approach to Fast and Optimal Hardware-Software Co-Design of Neural Accelerators

1 code implementation25 Mar 2022 Bingqian Lu, Zheyu Yan, Yiyu Shi, Shaolei Ren

We first perform neural architecture search to obtain a small set of optimal architectures for one accelerator candidate.

Neural Architecture Search
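The decoupling described above can be sketched as follows: run the expensive architecture search once against a single proxy accelerator, then re-evaluate only the resulting shortlist on each accelerator candidate. The function names, architecture identifiers, and cost model below are all made up for illustration.

```python
def search_top_architectures(arch_space, proxy_cost, top_n=3):
    """Expensive step, done once: rank the full space on one proxy accelerator."""
    return sorted(arch_space, key=proxy_cost)[:top_n]

def co_design(arch_space, accelerators, cost):
    """Cheap step, per accelerator: only the shortlist is re-evaluated."""
    proxy = accelerators[0]
    shortlist = search_top_architectures(arch_space, lambda a: cost(a, proxy))
    return {hw: min(shortlist, key=lambda a: cost(a, hw)) for hw in accelerators}

# Toy cost: architecture "size" times an accelerator-specific latency factor.
archs = [8, 16, 32, 64]               # stand-in architecture identifiers
hws = {"hw_a": 1.0, "hw_b": 2.5}
best = co_design(archs, list(hws), lambda a, hw: a * hws[hw])
```

The search cost is paid once instead of once per accelerator, which is what makes the approach "semi-decoupled" rather than a full joint search.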

SWIM: Selective Write-Verify for Computing-in-Memory Neural Accelerators

1 code implementation17 Feb 2022 Zheyu Yan, Xiaobo Sharon Hu, Yiyu Shi

In this work, we show that it is only necessary to select a small portion of the weights for write-verify to maintain the DNN accuracy, thus achieving significant speedup.
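A minimal sketch of the selective write-verify idea in this abstract: rank weights by a sensitivity score and apply write-verify only to the top fraction, leaving the rest with one-shot programming noise. The score (weight magnitude) and the noise model are illustrative stand-ins, not SWIM's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def selective_write_verify(weights, sensitivity, fraction=0.05, sigma=0.05):
    """Write-verify (assumed exact here) only the `fraction` of weights with
    the highest sensitivity; the rest keep one-shot programming noise."""
    programmed = weights + rng.normal(0.0, sigma, size=weights.shape)  # one-shot
    k = max(1, int(fraction * weights.size))
    idx = np.argsort(sensitivity.ravel())[-k:]       # most sensitive weights
    programmed.ravel()[idx] = weights.ravel()[idx]   # write-verify -> exact
    return programmed

w = rng.normal(size=(10, 10))
sens = np.abs(w)                 # stand-in sensitivity score: weight magnitude
wp = selective_write_verify(w, sens, fraction=0.1)
```

Since write-verify dominates programming time on NVM devices, verifying only the sensitive 5-10% of weights is where the claimed speedup comes from.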

RADARS: Memory Efficient Reinforcement Learning Aided Differentiable Neural Architecture Search

no code implementations13 Sep 2021 Zheyu Yan, Weiwen Jiang, Xiaobo Sharon Hu, Yiyu Shi

To the best of the authors' knowledge, this is the first DNAS framework that can handle large search spaces with bounded memory usage.

Neural Architecture Search, Reinforcement Learning +1

Uncertainty Modeling of Emerging Device-based Computing-in-Memory Neural Accelerators with Application to Neural Architecture Search

no code implementations6 Jul 2021 Zheyu Yan, Da-Cheng Juan, Xiaobo Sharon Hu, Yiyu Shi

Emerging device-based Computing-in-Memory (CiM) has proven to be a promising candidate for highly energy-efficient deep neural network (DNN) computation.

Neural Architecture Search

Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks

no code implementations10 Feb 2020 Lei Yang, Zheyu Yan, Meng Li, Hyoukjun Kwon, Liangzhen Lai, Tushar Krishna, Vikas Chandra, Weiwen Jiang, Yiyu Shi

Neural Architecture Search (NAS) has demonstrated its power on various AI accelerating platforms such as Field Programmable Gate Arrays (FPGAs) and Graphic Processing Units (GPUs).

Neural Architecture Search

Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators

no code implementations31 Oct 2019 Weiwen Jiang, Qiuwen Lou, Zheyu Yan, Lei Yang, Jingtong Hu, Xiaobo Sharon Hu, Yiyu Shi

In this paper, we are the first to bring the computing-in-memory architecture, which can easily transcend the memory wall, into interplay with neural architecture search, aiming to find the most efficient neural architectures with high network accuracy and maximized hardware efficiency.

Neural Architecture Search

When Single Event Upset Meets Deep Neural Networks: Observations, Explorations, and Remedies

1 code implementation10 Sep 2019 Zheyu Yan, Yiyu Shi, Wang Liao, Masanori Hashimoto, Xichuan Zhou, Cheng Zhuo

We are then able to analytically explore the weaknesses of a network and summarize the key findings on the impact of SIPP on different types of bits in a floating-point parameter, on layer-wise robustness within the same network, and on network depth.
