Search Results for author: Jiaqi Gu

Found 33 papers, 17 papers with code

An Efficient Training Framework for Reversible Neural Architectures

no code implementations ECCV 2020 Zixuan Jiang, Keren Zhu, Mingjie Liu, Jiaqi Gu, David Z. Pan

In this work, we formulate the decision problem for reversible operators with training time as the objective function and memory usage as the constraint.
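As a rough illustration of that kind of decision problem (not the paper's exact formulation), the sketch below treats each layer's choice between reversible execution (recompute activations, near-zero storage) and conventional execution (store activations) as a 0/1 knapsack: storing a layer's activations avoids its recompute time but consumes memory, and the memory budget constrains the total. All per-layer costs are hypothetical.

```python
# Hypothetical illustration: pick which layers to run reversibly so that extra
# recompute time is minimized under an (integer) activation-memory budget.

def select_reversible_layers(mem, recompute, budget):
    """0/1 knapsack: keep the most recompute-expensive layers conventional
    as long as their stored activations fit within `budget`."""
    n = len(mem)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        m, r = mem[i - 1], recompute[i - 1]
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                 # option: make layer i-1 reversible
            if m <= b:                                  # option: keep it conventional
                best[i][b] = max(best[i][b], best[i - 1][b - m] + r)
    # Backtrack to recover the per-layer decision.
    keep, b = set(), budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            keep.add(i - 1)
            b -= mem[i - 1]
    return [i not in keep for i in range(n)]            # True = run reversibly

# Toy example: 4 layers, memory budget of 6 units (costs are made up).
print(select_reversible_layers(mem=[4, 3, 2, 5], recompute=[9, 7, 3, 8], budget=6))
```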

Learning Neural Volumetric Pose Features for Camera Localization

no code implementations19 Mar 2024 Jingyu Lin, Jiaqi Gu, Bojian Wu, Lubin Fan, Renjie Chen, Ligang Liu, Jieping Ye

We introduce a novel neural volumetric pose feature, termed PoseMap, designed to enhance camera localization by encapsulating the information between images and the associated camera poses.

Camera Localization

DOCTOR: Dynamic On-Chip Remediation Against Temporally-Drifting Thermal Variations Toward Self-Corrected Photonic Tensor Accelerators

no code implementations5 Mar 2024 Haotian Lu, Sanmitra Banerjee, Jiaqi Gu

While off-chip noise-aware training and on-chip training have been proposed to enhance the variation tolerance of optical neural accelerators under moderate, static noise, we observe notable performance degradation over time due to temporally drifting variations, which call for a real-time, in-situ calibration mechanism.
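(No change to the method is implied by the edits above; the entry text is the authors' abstract excerpt.)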

Edge-computing

TeMPO: Efficient Time-Multiplexed Dynamic Photonic Tensor Core for Edge AI with Compact Slow-Light Electro-Optic Modulator

no code implementations12 Feb 2024 Meng Zhang, Dennis Yin, Nicholas Gangi, Amir Begović, Alexander Chen, Zhaoran Rena Huang, Jiaqi Gu

Electronic-photonic computing systems offer immense potential in energy-efficient artificial intelligence (AI) acceleration tasks due to the superior computing speed and efficiency of optics, especially for real-time, low-energy deep neural network (DNN) inference tasks on resource-restricted edge platforms.

Quantization

QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits

1 code implementation10 Jan 2024 Tianlong Chen, Zhenyu Zhang, Hanrui Wang, Jiaqi Gu, Zirui Li, David Z. Pan, Frederic T. Chong, Song Han, Zhangyang Wang

To address these two pain points, we propose QuantumSEA, an in-time sparse exploration method for noise-adaptive quantum circuits that pursues two key objectives: (1) implicit circuit capacity during training, achieved by dynamically exploring the circuit's sparse connectivity while sticking to a fixed, small number of quantum gates throughout training, which fits within the coherence time, suffers only light noise, and enables feasible execution on real quantum devices; and (2) noise robustness, achieved by jointly optimizing the topology and parameters of quantum circuits under real device noise models.

Quantum Machine Learning
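The sketch below is a loose, framework-free illustration of the prune-and-grow idea behind in-time sparse exploration: the number of active gates stays fixed while the set of active gates drifts during training. The gate pool, schedule, and toy objective are all made up and not taken from the paper; on hardware, the gradients would come from noisy circuit runs.

```python
# Prune-and-grow sketch: a fixed budget of active gates, periodically swapping the
# weakest active gate for a randomly chosen inactive one. Everything here is a toy.
import numpy as np

rng = np.random.default_rng(0)
n_candidate_gates, n_active = 32, 8              # fixed, small active-gate budget
theta = rng.normal(0, 0.1, n_candidate_gates)
active = rng.choice(n_candidate_gates, n_active, replace=False)

def toy_grad(theta, active):
    # Stand-in for gradients measured on hardware (e.g., via parameter shift).
    g = np.zeros_like(theta)
    g[active] = np.cos(theta[active])            # arbitrary smooth toy objective
    return g

for step in range(200):
    theta -= 0.05 * toy_grad(theta, active)      # update only the active gates
    if step % 50 == 49:                          # periodic exploration: prune + grow
        drop = active[np.argmin(np.abs(theta[active]))]            # weakest gate out
        pool = np.setdiff1d(np.arange(n_candidate_gates), active)
        grow = rng.choice(pool)                                     # random gate in
        active = np.append(active[active != drop], grow)
        theta[grow] = 0.0                        # new gate starts near identity
print("final active gate indices:", sorted(active.tolist()))
```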

DGR: Tackling Drifted and Correlated Noise in Quantum Error Correction via Decoding Graph Re-weighting

no code implementations27 Nov 2023 Hanrui Wang, Pengyu Liu, Yilian Liu, Jiaqi Gu, Jonathan Baker, Frederic T. Chong, Song Han

By counting the occurrences of edges and edge pairs in decoded matchings, we can statistically estimate the up-to-date probabilities of each edge and the correlations between them.
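A simplified sketch of that statistic is given below, assuming a decoder (e.g., minimum-weight perfect matching) has already produced a set of matchings. The matchings, edge names, and the -log(p / (1 - p)) re-weighting are illustrative choices, not the paper's exact procedure.

```python
# Count how often each decoding-graph edge (and each edge pair) appears in the
# decoder's matchings, turn counts into empirical probabilities, and re-weight.
from collections import Counter
from itertools import combinations
from math import log

decoded_matchings = [                     # each round: the edges the matching used (made up)
    {("d0", "d1"), ("d2", "d3")},
    {("d0", "d1")},
    {("d0", "d1"), ("d2", "d3"), ("d4", "d5")},
    set(), set(), set(), set(), set(),
]

n_rounds = len(decoded_matchings)
edge_counts, pair_counts = Counter(), Counter()
for matching in decoded_matchings:
    edge_counts.update(matching)
    pair_counts.update(combinations(sorted(matching), 2))

for edge, count in edge_counts.items():
    p = count / n_rounds                  # empirical probability that this edge fires
    weight = -log(p / (1 - p))            # a common matching-graph re-weighting choice
    print(edge, "p =", round(p, 3), "weight =", round(weight, 3))

for pair, count in pair_counts.items():   # co-occurrence far above the independent
    joint = count / n_rounds              # baseline hints at a correlated error source
    indep = (edge_counts[pair[0]] / n_rounds) * (edge_counts[pair[1]] / n_rounds)
    print(pair, "joint =", round(joint, 3), "independent baseline =", round(indep, 4))
```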

RobustState: Boosting Fidelity of Quantum State Preparation via Noise-Aware Variational Training

no code implementations27 Nov 2023 Hanrui Wang, Yilian Liu, Pengyu Liu, Jiaqi Gu, Zirui Li, Zhiding Liang, Jinglei Cheng, Yongshan Ding, Xuehai Qian, Yiyu Shi, David Z. Pan, Frederic T. Chong, Song Han

Arbitrary state preparation algorithms can be broadly categorized into arithmetic decomposition (AD) and variational quantum state preparation (VQSP).

Transformer-QEC: Quantum Error Correction Code Decoding with Transferable Transformers

no code implementations27 Nov 2023 Hanrui Wang, Pengyu Liu, Kevin Shao, Dantong Li, Jiaqi Gu, David Z. Pan, Yongshan Ding, Song Han

Quantum Error Correction (QEC) mitigates this by employing redundancy, distributing quantum information across multiple data qubits and utilizing syndrome qubits to monitor their states for errors.

Transfer Learning

Integrated multi-operand optical neurons for scalable and hardware-efficient deep learning

no code implementations31 May 2023 Chenghao Feng, Jiaqi Gu, Hanqing Zhu, Rongxing Tang, Shupeng Ning, May Hlaing, Jason Midkiff, Sourabh Jain, David Z. Pan, Ray T. Chen

The optical neural network (ONN) is a promising hardware platform for next-generation neuromorphic computing due to its high parallelism, low latency, and low energy consumption.

M3ICRO: Machine Learning-Enabled Compact Photonic Tensor Core based on PRogrammable Multi-Operand Multimode Interference

1 code implementation31 May 2023 Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan

The programmable MOMMI leverages the intrinsic light propagation principle, providing a single-device programmable matrix unit beyond the conventional computing paradigm of one multiply-accumulate (MAC) operation per device.

Pre-RMSNorm and Pre-CRMSNorm Transformers: Equivalent and Efficient Pre-LN Transformers

1 code implementation NeurIPS 2023 Zixuan Jiang, Jiaqi Gu, Hanqing Zhu, David Z. Pan

Experiments demonstrate that we can reduce the training and inference time of Pre-LN Transformers by 1% - 10%.
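As a quick numerical illustration of the underlying observation (not the paper's full construction): on a zero-mean input, LayerNorm without affine parameters coincides with the cheaper RMSNorm, which is what makes the replacement possible in Pre-LN residual streams.

```python
# Check that LayerNorm (no bias/scale) equals RMSNorm on zero-mean feature vectors.
import numpy as np

def rms_norm(x, eps=1e-6):
    return x / np.sqrt((x ** 2).mean(axis=-1, keepdims=True) + eps)

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
x_centered = x - x.mean(axis=-1, keepdims=True)   # zero-mean along the feature axis
print(np.allclose(layer_norm(x_centered), rms_norm(x_centered)))  # True
```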

QuEst: Graph Transformer for Quantum Circuit Reliability Estimation

1 code implementation30 Oct 2022 Hanrui Wang, Pengyu Liu, Jinglei Cheng, Zhiding Liang, Jiaqi Gu, Zirui Li, Yongshan Ding, Weiwen Jiang, Yiyu Shi, Xuehai Qian, David Z. Pan, Frederic T. Chong, Song Han

Specifically, the TorchQuantum library also supports using data-driven ML models to solve problems in quantum system research, such as predicting the impact of quantum noise on circuit fidelity and improving the quantum circuit compilation efficiency.

NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation

1 code implementation19 Sep 2022 Jiaqi Gu, Zhengqi Gao, Chenghao Feng, Hanqing Zhu, Ray T. Chen, Duane S. Boning, David Z. Pan

In this work, for the first time, a physics-agnostic neural operator-based framework, dubbed NeurOLight, is proposed to learn a family of frequency-domain Maxwell PDEs for ultra-fast parametric photonic device simulation.

Delving into Effective Gradient Matching for Dataset Condensation

1 code implementation30 Jul 2022 Zixuan Jiang, Jiaqi Gu, Mingjie Liu, David Z. Pan

In this work, we delve into the gradient matching method from a comprehensive perspective and answer the critical questions of what, how, and where to match.

Dataset Condensation
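For readers unfamiliar with gradient matching, the toy sketch below condenses real data into a few synthetic examples for a linear regression model by minimizing the squared distance between the gradients the two datasets induce. The single fixed model snapshot, the analytic derivative, and the step sizes are illustrative simplifications, not the paper's setup.

```python
# Gradient matching on a linear model: optimize synthetic inputs so their induced
# MSE gradient matches the gradient computed on real data.
import numpy as np

rng = np.random.default_rng(0)
d, n_real, n_syn = 8, 256, 4
X_real = rng.normal(size=(n_real, d))
y_real = X_real @ rng.normal(size=d) + 0.1 * rng.normal(size=n_real)
X_syn = rng.normal(size=(n_syn, d))     # learnable synthetic inputs
y_syn = rng.normal(size=n_syn)          # synthetic targets (kept fixed here)
w = rng.normal(size=d)                  # one model snapshot (the full method matches
                                        # gradients along a training trajectory)

def grad_wrt_w(X, y, w):                # gradient of mean squared error w.r.t. w
    return 2 * X.T @ (X @ w - y) / len(y)

g_real = grad_wrt_w(X_real, y_real, w)
for step in range(500):
    diff = grad_wrt_w(X_syn, y_syn, w) - g_real      # gradient-matching residual
    u = X_syn @ w - y_syn
    # analytic d/dX_syn of ||diff||^2 for this linear model
    grad_X = (4 / n_syn) * (np.outer(u, diff) + np.outer(X_syn @ diff, w))
    X_syn -= 0.01 * grad_X
print("gradient mismatch:", float(np.linalg.norm(grad_wrt_w(X_syn, y_syn, w) - g_real)))
```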

RobustAnalog: Fast Variation-Aware Analog Circuit Design Via Multi-task RL

no code implementations13 Jul 2022 Wei Shi, Hanrui Wang, Jiaqi Gu, Mingjie Liu, David Pan, Song Han, Nan Sun

To address the challenge, we present RobustAnalog, a robust circuit design framework that involves the variation information in the optimization process.

Bayesian Optimization

CVFNet: Real-time 3D Object Detection by Learning Cross View Features

no code implementations13 Mar 2022 Jiaqi Gu, Zhiyu Xiang, Pan Zhao, Tingming Bai, Lingxuan Wang, Xijun Zhao, Zhiyuan Zhang

In recent years, 3D object detection from LiDAR point clouds has made great progress thanks to the development of deep learning technologies.

3D Object Detection, Object, +1

QOC: Quantum On-Chip Training with Parameter Shift and Gradient Pruning

1 code implementation26 Feb 2022 Hanrui Wang, Zirui Li, Jiaqi Gu, Yongshan Ding, David Z. Pan, Song Han

Nevertheless, we find that due to the significant quantum errors (noise) on real machines, gradients obtained from the naive parameter shift have low fidelity and thus degrade the training accuracy.

Image Classification
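Below is a minimal sketch of the parameter-shift rule combined with a crude gradient-pruning step, assuming an analytically evaluated toy circuit in place of noisy hardware runs; the observable, pruning threshold, and learning rate are placeholders rather than the paper's choices.

```python
# Parameter-shift gradients with simple magnitude-based pruning of small entries.
import numpy as np

def expectation(thetas):
    # Toy multi-parameter observable: product of single-qubit <Z> values, each cos(theta).
    return float(np.prod(np.cos(thetas)))

def parameter_shift_grad(f, thetas, shift=np.pi / 2):
    grad = np.zeros_like(thetas)
    for i in range(len(thetas)):
        plus, minus = thetas.copy(), thetas.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = (f(plus) - f(minus)) / 2          # two circuit runs per parameter
    return grad

rng = np.random.default_rng(0)
thetas = rng.uniform(-np.pi, np.pi, size=6)
for step in range(50):
    g = parameter_shift_grad(expectation, thetas)
    g[np.abs(g) < 0.05] = 0.0                       # prune small, noise-dominated gradients
    thetas -= 0.3 * g                               # gradient descent on the observable
print("final expectation:", expectation(thetas))
```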

ELight: Enabling Efficient Photonic In-Memory Neurocomputing with Life Enhancement

no code implementations15 Dec 2021 Hanqing Zhu, Jiaqi Gu, Chenghao Feng, Mingjie Liu, Zixuan Jiang, Ray T. Chen, David Z. Pan

With the recent advances in optical phase change material (PCM), photonic in-memory neurocomputing has demonstrated its superiority in optical neural network (ONN) designs with near-zero static power consumption, time-of-light latency, and compact footprint.

A compact butterfly-style silicon photonic-electronic neural chip for hardware-efficient deep learning

1 code implementation11 Nov 2021 Chenghao Feng, Jiaqi Gu, Hanqing Zhu, Zhoufeng Ying, Zheng Zhao, David Z. Pan, Ray T. Chen

The optical neural network (ONN) is a promising hardware platform for next-generation neurocomputing due to its high parallelism, low latency, and low energy consumption.

Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation

1 code implementation CVPR 2022 Jiaqi Gu, Hyoukjun Kwon, Dilin Wang, Wei Ye, Meng Li, Yu-Hsin Chen, Liangzhen Lai, Vikas Chandra, David Z. Pan

Therefore, we propose HRViT, which enhances ViTs to learn semantically-rich and spatially-precise multi-scale representations by integrating high-resolution multi-branch architectures with ViTs.

Image Classification, Representation Learning, +3

L2ight: Enabling On-Chip Learning for Optical Neural Networks via Efficient in-situ Subspace Optimization

1 code implementation NeurIPS 2021 Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan

In this work, we propose a closed-loop ONN on-chip learning framework L2ight to enable scalable ONN mapping and efficient in-situ learning.

QuantumNAT: Quantum Noise-Aware Training with Noise Injection, Quantization and Normalization

2 code implementations21 Oct 2021 Hanrui Wang, Jiaqi Gu, Yongshan Ding, Zirui Li, Frederic T. Chong, David Z. Pan, Song Han

Furthermore, to improve the robustness against noise, we propose noise injection to the training process by inserting quantum error gates to PQC according to realistic noise models of quantum hardware.

Denoising, Quantization
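A toy illustration of training-time noise injection follows, assuming a single-qubit state-vector simulation and a made-up error probability in place of a realistic device noise model; the gates and parameters are placeholders, not the paper's configuration.

```python
# After each parameterized gate, insert a Pauli "error gate" with some probability.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)        # Pauli error gates
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):                                       # parameterized rotation gate
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def noisy_expectation(thetas, rng, p_error=0.05):
    state = np.array([1, 0], dtype=complex)          # start in |0>
    for theta in thetas:
        state = ry(theta) @ state
        if rng.random() < p_error:                   # noise model says: inject an error
            state = (X if rng.random() < 0.5 else Z) @ state
    return float(np.real(state.conj() @ Z @ state))  # measured observable <Z>

rng = np.random.default_rng(0)
thetas = np.array([0.3, -1.2, 0.7])
samples = [noisy_expectation(thetas, rng) for _ in range(1000)]
print("noise-free:", noisy_expectation(thetas, rng, p_error=0.0))
print("avg. noisy:", float(np.mean(samples)))
```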

Towards Efficient On-Chip Training of Quantum Neural Networks

no code implementations29 Sep 2021 Hanrui Wang, Zirui Li, Jiaqi Gu, Yongshan Ding, David Z. Pan, Song Han

The results demonstrate that our on-chip training achieves over 90% and 60% accuracy for 2-class and 4-class image classification tasks, respectively.

Image Classification

DenseLiDAR: A Real-Time Pseudo Dense Depth Guided Depth Completion Network

no code implementations28 Aug 2021 Jiaqi Gu, Zhiyu Xiang, Yuwen Ye, Lingxuan Wang

Depth Completion can produce a dense depth map from a sparse input and provide a more complete 3D description of the environment.

3D Object Detection, Depth Completion, +3

QuantumNAS: Noise-Adaptive Search for Robust Quantum Circuits

2 code implementations22 Jul 2021 Hanrui Wang, Yongshan Ding, Jiaqi Gu, Zirui Li, Yujun Lin, David Z. Pan, Frederic T. Chong, Song Han

Extensively evaluated with 12 QML and VQE benchmarks on 14 quantum computers, QuantumNAS significantly outperforms baselines.

Optimizer Fusion: Efficient Training with Better Locality and Parallelism

no code implementations1 Apr 2021 Zixuan Jiang, Jiaqi Gu, Mingjie Liu, Keren Zhu, David Z. Pan

Machine learning frameworks adopt iterative optimizers to train neural networks.

SqueezeLight: Towards Scalable Optical Neural Networks with Multi-Operand Ring Resonators

1 code implementation IEEE Design, Automation & Test in Europe Conference & Exhibition (DATE) 2021 Jiaqi Gu, Chenghao Feng, Zheng Zhao, Zhoufeng Ying, Mingjie Liu, Ray T. Chen, David Z. Pan

Optical neural networks (ONNs) have demonstrated promising potential for next-generation artificial intelligence acceleration with ultra-low latency, high bandwidth, and low energy consumption.

Efficient On-Chip Learning for Optical Neural Networks Through Power-Aware Sparse Zeroth-Order Optimization

1 code implementation21 Dec 2020 Jiaqi Gu, Chenghao Feng, Zheng Zhao, Zhoufeng Ying, Ray T. Chen, David Z. Pan

Optical neural networks (ONNs) have demonstrated record-breaking potential in high-performance neuromorphic computing due to their ultra-high execution speed and low energy consumption.
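The sketch below illustrates plain sparse zeroth-order (SPSA-style) optimization, where the "gradient" is estimated purely from forward evaluations (as if probing an on-chip ONN whose internals are not differentiable) and each step perturbs only a random sparse subset of parameters. The quadratic stand-in loss and all hyperparameters are assumptions, not the paper's power-aware formulation.

```python
# Sparse zeroth-order descent: two forward evaluations per step, sparse random probes.
import numpy as np

rng = np.random.default_rng(0)
dim, k_sparse, eps, lr = 64, 8, 1e-3, 0.2
target = rng.normal(size=dim)
loss = lambda phase: float(np.sum((phase - target) ** 2))   # stand-in for a measured loss

phase = np.zeros(dim)
for step in range(2000):
    idx = rng.choice(dim, k_sparse, replace=False)          # perturb a sparse subset only
    direction = np.zeros(dim)
    direction[idx] = rng.choice([-1.0, 1.0], k_sparse)      # random +/- probe
    delta = (loss(phase + eps * direction) - loss(phase - eps * direction)) / (2 * eps)
    phase -= lr * delta * direction / k_sparse              # SPSA-style descent step
print("final loss:", loss(phase))
```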
