Search Results for author: David Z. Pan

Found 38 papers, 20 papers with code

An Efficient Training Framework for Reversible Neural Architectures

no code implementations • ECCV 2020 • Zixuan Jiang, Keren Zhu, Mingjie Liu, Jiaqi Gu, David Z. Pan

In this work, we formulate the decision problem for reversible operators with training time as the objective function and memory usage as the constraint.
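The decision problem described here has a simple combinatorial flavor: each operator can be made reversible (saving activation memory but paying recompute time) or left as-is. A minimal sketch with made-up per-operator numbers, enumerating choices to minimize training time under a memory budget; this is illustrative only, not the paper's formulation:

```python
from itertools import product

# Toy decision problem: for each operator, choose reversible (saves
# activation memory, pays recompute time) or not. Minimize total
# training time subject to a memory budget. Numbers are illustrative.
ops = [  # (base_time, recompute_overhead, activation_memory)
    (5.0, 1.0, 4),
    (3.0, 2.0, 6),
    (4.0, 0.5, 3),
]
MEM_BUDGET = 7

best = None
for choice in product([False, True], repeat=len(ops)):  # True = reversible
    time = sum(t + (o if rev else 0.0) for (t, o, _), rev in zip(ops, choice))
    mem = sum(0 if rev else m for (_, _, m), rev in zip(ops, choice))
    if mem <= MEM_BUDGET and (best is None or time < best[0]):
        best = (time, choice)

print(best)  # → (13.5, (True, False, True))
```

Brute-force enumeration is only feasible for a handful of operators; the point is the shape of the trade-off, not the search method.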

PACE: Pacing Operator Learning to Accurate Optical Field Simulation for Complicated Photonic Devices

1 code implementation • 5 Nov 2024 • Hanqing Zhu, Wenyan Cong, Guojin Chen, Shupeng Ning, Ray T. Chen, Jiaqi Gu, David Z. Pan

In this work, we boost the prediction fidelity to an unprecedented level for simulating complex photonic devices with a novel operator design driven by the above challenges.

Operator learning

Differentiable Edge-based OPC

no code implementations • 16 Aug 2024 • Guojin Chen, HaoYu Yang, Haoxing Ren, Bei Yu, David Z. Pan

Optical proximity correction (OPC) is crucial for pushing the boundaries of semiconductor manufacturing and enabling the continued scaling of integrated circuits.

INSIGHT: Universal Neural Simulator for Analog Circuits Harnessing Autoregressive Transformers

no code implementations • 10 Jul 2024 • Souradip Poddar, Youngmin Oh, Yao Lai, Hanqing Zhu, Bosun Hwang, David Z. Pan

However, efficient and effective exploration of the vast and complex design space remains constrained by the time-consuming nature of SPICE simulations, making effective design automation a challenging endeavor.

LLM-Enhanced Bayesian Optimization for Efficient Analog Layout Constraint Generation

1 code implementation • 7 Jun 2024 • Guojin Chen, Keren Zhu, Seunggeun Kim, Hanqing Zhu, Yao Lai, Bei Yu, David Z. Pan

Analog layout synthesis faces significant challenges due to its dependence on manual processes, considerable time requirements, and performance instability.

Bayesian Optimization • Few-Shot Learning

AnalogCoder: Analog Circuit Design via Training-Free Code Generation

1 code implementation • 23 May 2024 • Yao Lai, Sungyoung Lee, Guojin Chen, Souradip Poddar, Mengkang Hu, David Z. Pan, Ping Luo

Analog circuit design is a significant task in modern chip technology, focusing on the selection of component types, connectivity, and parameters to ensure proper circuit functionality.

Code Generation

Scalable and Effective Arithmetic Tree Generation for Adder and Multiplier Designs

1 code implementation • 10 May 2024 • Yao Lai, Jinxin Liu, David Z. Pan, Ping Luo

We believe our work will offer valuable insights into hardware design, further improving speed and reducing size through the refined search space and our tree-generation methodologies.

Computational Efficiency • Navigate

Subgraph Extraction-based Feedback-guided Iterative Scheduling for HLS

no code implementations • 22 Jan 2024 • Hanchen Ye, David Z. Pan, Chris Leary, Deming Chen, Xiaoqing Xu

This paper proposes ISDC, a novel feedback-guided iterative system of difference constraints (SDC) scheduling algorithm for high-level synthesis (HLS).

Scheduling
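A system of difference constraints (SDC) of the form x_j − x_i ≤ c maps to a shortest-path problem: each constraint becomes an edge i → j with weight c, and distances from a virtual source yield a feasible schedule. A minimal Bellman-Ford-style sketch with hypothetical constraints, unrelated to ISDC's feedback loop:

```python
def solve_sdc(n, constraints):
    # constraints: list of (i, j, c) meaning x_j - x_i <= c.
    # A virtual source connects to every node with weight 0, so all
    # distances start at 0; n rounds of relaxation suffice.
    dist = [0] * n
    for _ in range(n):
        for i, j, c in constraints:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
    # One more pass detects infeasibility (a negative cycle).
    for i, j, c in constraints:
        if dist[i] + c < dist[j]:
            return None
    m = min(dist)  # shift so the earliest operation starts at cycle 0
    return [d - m for d in dist]

# x0 - x1 <= -1 (op 1 at least 1 cycle after op 0), x1 - x2 <= -1
print(solve_sdc(3, [(1, 0, -1), (2, 1, -1)]))  # → [0, 1, 2]
```

Contradictory constraints (e.g. each of two operations strictly after the other) form a negative cycle and make `solve_sdc` return `None`.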

QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits

1 code implementation • 10 Jan 2024 • Tianlong Chen, Zhenyu Zhang, Hanrui Wang, Jiaqi Gu, Zirui Li, David Z. Pan, Frederic T. Chong, Song Han, Zhangyang Wang

To address these two pain points, we propose QuantumSEA, an in-time sparse exploration for noise-adaptive quantum circuits, aiming to achieve two key objectives: (1) implicit circuit capacity during training, by dynamically exploring the circuit's sparse connectivity while sticking to a fixed, small number of quantum gates throughout training, which satisfies the coherence time, suffers only light noise, and enables feasible execution on real quantum devices; (2) noise robustness, by jointly optimizing the topology and parameters of quantum circuits under real device noise models.

Quantum Machine Learning

Transformer-QEC: Quantum Error Correction Code Decoding with Transferable Transformers

no code implementations • 27 Nov 2023 • Hanrui Wang, Pengyu Liu, Kevin Shao, Dantong Li, Jiaqi Gu, David Z. Pan, Yongshan Ding, Song Han

Quantum Error Correction (QEC) mitigates this by employing redundancy, distributing quantum information across multiple data qubits and utilizing syndrome qubits to monitor their states for errors.

Decoder • Transfer Learning

Practical Layout-Aware Analog/Mixed-Signal Design Automation with Bayesian Neural Networks

no code implementations • 27 Nov 2023 • Ahmet F. Budak, Keren Zhu, David Z. Pan

Our efficient algorithm solves the post-layout performance optimization problem where simulations are known to be expensive.

RobustState: Boosting Fidelity of Quantum State Preparation via Noise-Aware Variational Training

no code implementations • 27 Nov 2023 • Hanrui Wang, Yilian Liu, Pengyu Liu, Jiaqi Gu, Zirui Li, Zhiding Liang, Jinglei Cheng, Yongshan Ding, Xuehai Qian, Yiyu Shi, David Z. Pan, Frederic T. Chong, Song Han

Arbitrary state preparation algorithms can be broadly categorized into arithmetic decomposition (AD) and variational quantum state preparation (VQSP).

M3ICRO: Machine Learning-Enabled Compact Photonic Tensor Core based on Programmable Multi-Operand Multimode Interference

1 code implementation • 31 May 2023 • Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan

The programmable MOMMI leverages the intrinsic light propagation principle, providing a single-device programmable matrix unit beyond the conventional computing paradigm of one multiply-accumulate (MAC) operation per device.

Integrated multi-operand optical neurons for scalable and hardware-efficient deep learning

no code implementations • 31 May 2023 • Chenghao Feng, Jiaqi Gu, Hanqing Zhu, Rongxing Tang, Shupeng Ning, May Hlaing, Jason Midkiff, Sourabh Jain, David Z. Pan, Ray T. Chen

The optical neural network (ONN) is a promising hardware platform for next-generation neuromorphic computing due to its high parallelism, low latency, and low energy consumption.

Pre-RMSNorm and Pre-CRMSNorm Transformers: Equivalent and Efficient Pre-LN Transformers

1 code implementation • NeurIPS 2023 • Zixuan Jiang, Jiaqi Gu, Hanqing Zhu, David Z. Pan

Experiments demonstrate that we can reduce the training and inference time of Pre-LN Transformers by 1% - 10%.
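The Pre-LN/Pre-RMSNorm connection rests on a standard identity: on zero-mean inputs, LayerNorm and RMSNorm coincide, since the variance equals the mean of squares once the mean is removed. A quick NumPy check of that identity (without the learnable affine parameters; this is the underlying fact, not the paper's full transformation):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Standard LayerNorm without affine parameters.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def rms_norm(x, eps=1e-6):
    # RMSNorm skips the mean subtraction, saving work per token.
    rms = np.sqrt((x * x).mean(-1, keepdims=True) + eps)
    return x / rms

x = np.random.randn(4, 8)
x_centered = x - x.mean(-1, keepdims=True)  # zero-mean along features
# On zero-mean inputs the two normalizations coincide.
assert np.allclose(layer_norm(x), rms_norm(x_centered), atol=1e-5)
```

RMSNorm drops the mean computation and subtraction, which is where the modest training- and inference-time savings come from.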

QuEst: Graph Transformer for Quantum Circuit Reliability Estimation

1 code implementation • 30 Oct 2022 • Hanrui Wang, Pengyu Liu, Jinglei Cheng, Zhiding Liang, Jiaqi Gu, Zirui Li, Yongshan Ding, Weiwen Jiang, Yiyu Shi, Xuehai Qian, David Z. Pan, Frederic T. Chong, Song Han

Specifically, the TorchQuantum library also supports using data-driven ML models to solve problems in quantum system research, such as predicting the impact of quantum noise on circuit fidelity and improving the quantum circuit compilation efficiency.

NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation

1 code implementation • 19 Sep 2022 • Jiaqi Gu, Zhengqi Gao, Chenghao Feng, Hanqing Zhu, Ray T. Chen, Duane S. Boning, David Z. Pan

In this work, for the first time, a physics-agnostic neural operator-based framework, dubbed NeurOLight, is proposed to learn a family of frequency-domain Maxwell PDEs for ultra-fast parametric photonic device simulation.

Delving into Effective Gradient Matching for Dataset Condensation

1 code implementation • 30 Jul 2022 • Zixuan Jiang, Jiaqi Gu, Mingjie Liu, David Z. Pan

In this work, we delve into the gradient matching method from a comprehensive perspective and answer the critical questions of what, how, and where to match.

Dataset Condensation
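One way to make the "what to match" question concrete: gradient-matching condensation compares the network gradient on real data with the gradient on a small synthetic set, typically via a cosine-distance objective. A toy sketch using a linear model with analytic gradients (illustrative only; not the paper's setup, which uses deep networks and layer-wise matching):

```python
import numpy as np

rng = np.random.default_rng(0)

def linreg_grad(w, X, y):
    # Gradient of mean-squared error for a linear model w.r.t. w.
    return 2 * X.T @ (X @ w - y) / len(y)

def matching_loss(g_real, g_syn):
    # Cosine-distance objective typical of gradient matching:
    # 0 when gradients point the same way, up to 2 when opposed.
    cos = g_real @ g_syn / (np.linalg.norm(g_real) * np.linalg.norm(g_syn))
    return 1.0 - cos

w = rng.normal(size=3)
X_real, y_real = rng.normal(size=(100, 3)), rng.normal(size=100)
X_syn, y_syn = rng.normal(size=(5, 3)), rng.normal(size=5)

# Condensation would minimize this loss w.r.t. the synthetic set (X_syn, y_syn),
# so that training on 5 synthetic points mimics training on 100 real ones.
loss = matching_loss(linreg_grad(w, X_real, y_real), linreg_grad(w, X_syn, y_syn))
print(round(loss, 3))
```

The "how" and "where" questions then concern the distance function and which layers/training stages to match.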

QOC: Quantum On-Chip Training with Parameter Shift and Gradient Pruning

1 code implementation • 26 Feb 2022 • Hanrui Wang, Zirui Li, Jiaqi Gu, Yongshan Ding, David Z. Pan, Song Han

Nevertheless, we find that, due to the significant quantum errors (noise) on real machines, gradients obtained from naive parameter shift have low fidelity and thus degrade the training accuracy.

Image Classification
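The parameter-shift rule underlying on-chip training is standard for Pauli-rotation gates: the exact gradient comes from two extra circuit evaluations at θ ± π/2. A noiseless toy check where the expectation is cos(θ); real runs would replace this with noisy device measurements, which is what motivates the gradient pruning above:

```python
import math

def expectation(theta):
    # Toy expectation value of a single Pauli-rotation circuit; on real
    # hardware this would come from sampling a noisy quantum device.
    return math.cos(theta)

def param_shift_grad(f, theta, shift=math.pi / 2):
    # Parameter-shift rule: exact gradient for Pauli-rotation gates,
    # computed from two shifted circuit evaluations.
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.3
analytic = -math.sin(theta)  # d/dθ cos(θ)
assert abs(param_shift_grad(expectation, theta) - analytic) < 1e-12
```

Unlike finite differences, the π/2 shift is not an approximation here; the rule is exact for this gate family, so any error in practice comes from device noise and sampling.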

ELight: Enabling Efficient Photonic In-Memory Neurocomputing with Life Enhancement

no code implementations • 15 Dec 2021 • Hanqing Zhu, Jiaqi Gu, Chenghao Feng, Mingjie Liu, Zixuan Jiang, Ray T. Chen, David Z. Pan

With the recent advances in optical phase change material (PCM), photonic in-memory neurocomputing has demonstrated its superiority in optical neural network (ONN) designs with near-zero static power consumption, time-of-light latency, and compact footprint.

A compact butterfly-style silicon photonic-electronic neural chip for hardware-efficient deep learning

1 code implementation • 11 Nov 2021 • Chenghao Feng, Jiaqi Gu, Hanqing Zhu, Zhoufeng Ying, Zheng Zhao, David Z. Pan, Ray T. Chen

The optical neural network (ONN) is a promising hardware platform for next-generation neurocomputing due to its high parallelism, low latency, and low energy consumption.

Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation

1 code implementation • CVPR 2022 • Jiaqi Gu, Hyoukjun Kwon, Dilin Wang, Wei Ye, Meng Li, Yu-Hsin Chen, Liangzhen Lai, Vikas Chandra, David Z. Pan

Therefore, we propose HRViT, which enhances ViTs to learn semantically-rich and spatially-precise multi-scale representations by integrating high-resolution multi-branch architectures with ViTs.

Image Classification • Representation Learning +3

L2ight: Enabling On-Chip Learning for Optical Neural Networks via Efficient in-situ Subspace Optimization

1 code implementation • NeurIPS 2021 • Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan

In this work, we propose a closed-loop ONN on-chip learning framework L2ight to enable scalable ONN mapping and efficient in-situ learning.

QuantumNAT: Quantum Noise-Aware Training with Noise Injection, Quantization and Normalization

2 code implementations • 21 Oct 2021 • Hanrui Wang, Jiaqi Gu, Yongshan Ding, Zirui Li, Frederic T. Chong, David Z. Pan, Song Han

Furthermore, to improve the robustness against noise, we propose noise injection to the training process by inserting quantum error gates to PQC according to realistic noise models of quantum hardware.

Denoising • Quantization

DNN-Opt: An RL Inspired Optimization for Analog Circuit Sizing using Deep Neural Networks

no code implementations • 1 Oct 2021 • Ahmet F. Budak, Prateek Bhansali, Bo Liu, Nan Sun, David Z. Pan, Chandramouli V. Kashyap

The key contributions of this paper are a novel sample-efficient two-stage deep learning optimization framework leveraging RL actor-critic algorithms, and a recipe to extend it on large industrial circuits using critical device identification.

Reinforcement Learning (RL)

Towards Efficient On-Chip Training of Quantum Neural Networks

no code implementations • 29 Sep 2021 • Hanrui Wang, Zirui Li, Jiaqi Gu, Yongshan Ding, David Z. Pan, Song Han

The results demonstrate that our on-chip training achieves over 90% and 60% accuracy for 2-class and 4-class image classification tasks, respectively.

Image Classification

QuantumNAS: Noise-Adaptive Search for Robust Quantum Circuits

2 code implementations • 22 Jul 2021 • Hanrui Wang, Yongshan Ding, Jiaqi Gu, Zirui Li, Yujun Lin, David Z. Pan, Frederic T. Chong, Song Han

Extensively evaluated with 12 QML and VQE benchmarks on 14 quantum computers, QuantumNAS significantly outperforms baselines.

Optimizer Fusion: Efficient Training with Better Locality and Parallelism

no code implementations • 1 Apr 2021 • Zixuan Jiang, Jiaqi Gu, Mingjie Liu, Keren Zhu, David Z. Pan

Machine learning frameworks adopt iterative optimizers to train neural networks.

SqueezeLight: Towards Scalable Optical Neural Networks with Multi-Operand Ring Resonators

1 code implementation • IEEE Design, Automation & Test in Europe Conference & Exhibition (DATE) 2021 • Jiaqi Gu, Chenghao Feng, Zheng Zhao, Zhoufeng Ying, Mingjie Liu, Ray T. Chen, David Z. Pan

Optical neural networks (ONNs) have demonstrated promising potentials for next-generation artificial intelligence acceleration with ultra-low latency, high bandwidth, and low energy consumption.

Efficient On-Chip Learning for Optical Neural Networks Through Power-Aware Sparse Zeroth-Order Optimization

1 code implementation • 21 Dec 2020 • Jiaqi Gu, Chenghao Feng, Zheng Zhao, Zhoufeng Ying, Ray T. Chen, David Z. Pan

Optical neural networks (ONNs) have demonstrated record-breaking potential in high-performance neuromorphic computing due to their ultra-high execution speed and low energy consumption.
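Zeroth-order optimization, as in the title above, estimates gradients from function queries alone, which suits on-chip training where analytic gradients are unavailable. A generic sparse finite-difference sketch on a toy quadratic; the paper's power-aware sampling and photonic hardware are not modeled here:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(theta):
    # Placeholder for an on-chip measured objective (no analytic gradient);
    # minimized at theta = 1 in every coordinate.
    return np.sum((theta - 1.0) ** 2)

def sparse_zo_grad(f, theta, k=2, delta=1e-3):
    # Sparse zeroth-order estimate: perturb only k randomly chosen
    # parameters per step, at two function queries each.
    g = np.zeros_like(theta)
    for i in rng.choice(len(theta), size=k, replace=False):
        e = np.zeros_like(theta)
        e[i] = delta
        g[i] = (f(theta + e) - f(theta - e)) / (2 * delta)
    return g

theta = rng.normal(size=6)
for _ in range(500):
    theta -= 0.1 * sparse_zo_grad(loss, theta)
assert loss(theta) < 1e-2  # converges despite only 2 coords per step
```

The sparsity keeps the per-step query count constant regardless of parameter count, at the price of slower per-coordinate progress.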

PrivyNet: A Flexible Framework for Privacy-Preserving Deep Neural Network Training

no code implementations • ICLR 2018 • Meng Li, Liangzhen Lai, Naveen Suda, Vikas Chandra, David Z. Pan

Massive data exist among user local platforms that usually cannot support deep neural network (DNN) training due to computation and storage resource constraints.

General Classification • Image Classification +1
