1 code implementation • 24 Jun 2024 • Pingchuan Ma, HaoYu Yang, Zhengqi Gao, Duane S. Boning, Jiaqi Gu
However, FDTD is known for its prohibitive runtime cost, taking minutes to hours to simulate a single device.
no code implementations • 29 Oct 2023 • Zhengqi Gao, Dinghuai Zhang, Luca Daniel, Duane S. Boning
Next, it estimates the rare event probability by utilizing importance sampling in conjunction with the last proposal.
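The final step can be illustrated with a generic importance-sampling estimator. The sketch below is an assumption-laden stand-in: it estimates a Gaussian tail probability P(X > t) using a proposal shifted into the rare region (a simple proxy for the paper's learned "last proposal"); none of the specific distributions come from the paper.

```python
import numpy as np

# Hedged sketch: importance-sampling estimate of a rare event probability
# P(X > t) for X ~ N(0, 1), using a proposal N(t, 1) centered on the rare
# region (a toy stand-in for the learned final proposal in the paper).
rng = np.random.default_rng(0)
t = 4.0                      # rare threshold: P(X > 4) is about 3.17e-5
n = 100_000

samples = rng.normal(loc=t, scale=1.0, size=n)   # draw from proposal q

# Importance weights w = p(x) / q(x) for standard-normal target p.
log_w = -0.5 * samples**2 + 0.5 * (samples - t) ** 2
weights = np.exp(log_w)

# The weighted indicator is an unbiased estimate of the tail probability.
estimate = np.mean((samples > t) * weights)
print(estimate)
```

Because the proposal concentrates mass where the event occurs, the estimator reaches sub-percent relative error here, whereas naive Monte Carlo with the same budget would see only a handful of hits.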
no code implementations • 24 Oct 2023 • Zhengqi Gao, Fan-Keng Sun, Ron Rohrer, Duane S. Boning
Essentially, KirchhoffNet is an analog circuit that can function as a neural network, utilizing its initial node voltages as the neural network input and the node voltages at a specific time point as the output.
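The input/output convention above can be sketched as a small ODE-based forward pass. Everything below is illustrative, not the paper's actual KCL-derived dynamics: a toy vector field dv/dt = tanh(W v + b) plays the role of the circuit equations, with v(0) as the input and v(T) as the output.

```python
import numpy as np

# Hedged sketch of the KirchhoffNet idea: node voltages v(t) evolve under
# an ODE; the initial voltages v(0) are the network input and the voltages
# at time T are the output. The dynamics here are a toy stand-in, NOT the
# paper's circuit equations; W and b are hypothetical parameters.
rng = np.random.default_rng(0)
dim = 4
W = rng.normal(scale=0.5, size=(dim, dim))
b = rng.normal(scale=0.1, size=dim)

def kirchhoffnet_forward(v0, T=1.0, steps=100):
    v = np.asarray(v0, dtype=float)
    dt = T / steps
    for _ in range(steps):
        v = v + dt * np.tanh(W @ v + b)   # forward-Euler step of the dynamics
    return v                               # node voltages at time T = output

x = np.array([0.1, -0.2, 0.3, 0.0])        # initial node voltages = input
out = kirchhoffnet_forward(x)
print(out.shape)
```

In the paper's framing the "layers" of the network are the continuous-time evolution itself, so depth is replaced by integration time T.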
1 code implementation • 19 Sep 2022 • Jiaqi Gu, Zhengqi Gao, Chenghao Feng, Hanqing Zhu, Ray T. Chen, Duane S. Boning, David Z. Pan
In this work, for the first time, a physics-agnostic neural operator-based framework, dubbed NeurOLight, is proposed to learn a family of frequency-domain Maxwell PDEs for ultra-fast parametric photonic device simulation.
1 code implementation • 22 Jul 2022 • Zhengqi Gao, Fan-Keng Sun, Mingran Yang, Sucheng Ren, Zikai Xiong, Marc Engeler, Antonio Burazer, Linda Wildling, Luca Daniel, Duane S. Boning
Data lies at the core of modern deep learning.
no code implementations • 24 May 2022 • Fan-Keng Sun, Duane S. Boning
Finally, we validate that the frequency domain is indeed better by comparing univariate models trained in the frequency vs. time domain.
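The comparison hinges on representing a series by its Fourier coefficients rather than its raw samples. A minimal sketch of that representation (details and model choices here are illustrative, not from the paper):

```python
import numpy as np

# Hedged sketch: move a univariate series to the frequency domain with a
# real FFT and check the representation is lossless, so a model can be
# trained on the coefficients instead of the raw samples. The synthetic
# series below is illustrative only.
t = np.linspace(0, 1, 128, endpoint=False)
series = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

coeffs = np.fft.rfft(series)                   # frequency-domain features
recovered = np.fft.irfft(coeffs, n=len(series))  # exact inverse transform

lossless = np.allclose(series, recovered)
print(lossless)
```

Since the transform is invertible, any gains from frequency-domain training come from the representation being easier to model, not from extra information.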
1 code implementation • NeurIPS 2021 • Fan-Keng Sun, Christopher I. Lang, Duane S. Boning
A common assumption in training neural networks via maximum likelihood estimation on time series is that the errors across time steps are uncorrelated.
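The premise can be demonstrated with a simplified AR(1) version of the idea: if residuals follow e_t = rho * e_{t-1} + w_t, estimating rho and "whitening" the residuals restores the uncorrelated-errors assumption. The code below is a hedged sketch of that one-lag case, not the paper's full training procedure.

```python
import numpy as np

# Hedged sketch: simulate AR(1)-correlated errors, estimate the
# autocorrelation coefficient rho by lag-1 least squares, then whiten
# the residuals so that adjacent errors become (nearly) uncorrelated.
rng = np.random.default_rng(0)
rho_true = 0.8
n = 5000

e = np.zeros(n)
for i in range(1, n):
    e[i] = rho_true * e[i - 1] + rng.normal()   # correlated residuals

# Lag-1 least-squares estimate of rho, then the whitening transform.
rho_hat = np.dot(e[1:], e[:-1]) / np.dot(e[:-1], e[:-1])
whitened = e[1:] - rho_hat * e[:-1]

def lag1_corr(x):
    # Sample autocorrelation at lag 1.
    return np.corrcoef(x[1:], x[:-1])[0, 1]

print(round(rho_hat, 2), round(lag1_corr(whitened), 2))
```

Before whitening the lag-1 autocorrelation is near 0.8; after whitening it drops to roughly zero, which is the condition the maximum-likelihood objective implicitly assumes.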
no code implementations • 2 Mar 2020 • Kyongmin Yeo, Dylan E. C. Grullon, Fan-Keng Sun, Duane S. Boning, Jayant R. Kalagnanam
Unlike the classical variational inference, where a factorized distribution is used to approximate the posterior, we employ a feedforward neural network supplemented by an encoder recurrent neural network to develop a more flexible probabilistic model.
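The amortized-posterior idea can be sketched in a few lines: a (toy, randomly initialized) feedforward net maps a summary of the input to the mean and log-variance of a Gaussian posterior, sampled via the reparameterization trick. All shapes, names, and the tanh summary standing in for the encoder RNN are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: an NN-parameterized Gaussian posterior q(z | x).
# A feedforward layer (stand-in for the encoder RNN summary) produces
# the posterior mean and log-variance; z is drawn with the
# reparameterization trick. All parameters are hypothetical.
rng = np.random.default_rng(0)
hidden, latent = 8, 2

W1 = rng.normal(scale=0.3, size=(hidden, 4))
W2_mu = rng.normal(scale=0.3, size=(latent, hidden))
W2_lv = rng.normal(scale=0.3, size=(latent, hidden))

def encoder(x):
    h = np.tanh(W1 @ x)              # toy stand-in for the RNN summary
    return W2_mu @ h, W2_lv @ h      # posterior mean and log-variance

x = np.array([0.5, -0.1, 0.2, 0.0])
mu, logvar = encoder(x)
z = mu + np.exp(0.5 * logvar) * rng.normal(size=latent)  # reparameterized sample
print(z.shape)
```

The flexibility over a factorized q comes from letting the network couple the posterior parameters to the observed sequence rather than fixing them independently per dimension.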