Search Results for author: Lu Lu

Found 30 papers, 15 papers with code

Inclusion in CSR Reports: The Lens from a Data-Driven Machine Learning Model

no code implementations · CSRNLP (LREC) 2022 · Lu Lu, Jinghang Gu, Chu-Ren Huang

Inclusion, one of the foundations of the diversity, equity, and inclusion initiative, concerns the degree to which a person is treated as an ingroup member in the workplace.

Fourier-MIONet: Fourier-enhanced multiple-input neural operators for multiphase modeling of geological carbon sequestration

no code implementations · 8 Mar 2023 · Zhongyi Jiang, Min Zhu, Dongzhuo Li, Qiuzi Li, Yanhua O. Yuan, Lu Lu

Here, we develop a Fourier-enhanced multiple-input neural operator (Fourier-MIONet) to learn the solution operator of the problem of multiphase flow in porous media.

Reliable extrapolation of deep neural operators informed by physics or sparse observations

1 code implementation · 13 Dec 2022 · Min Zhu, Handi Zhang, Anran Jiao, George Em Karniadakis, Lu Lu

Deep neural operators can learn nonlinear mappings between infinite-dimensional function spaces via deep neural networks.

A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks

2 code implementations · 21 Jul 2022 · Chenxi Wu, Min Zhu, Qinyang Tan, Yadhu Kartha, Lu Lu

Hence, we consider a total of 10 different sampling methods: six non-adaptive uniform sampling methods, uniform sampling with resampling, two proposed adaptive sampling methods, and one existing adaptive sampling method.
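The residual-based adaptive methods compared in such studies can be loosely sketched as drawing new collocation points with probability proportional to the PDE residual magnitude. A minimal numpy illustration on a 1-D domain; `residual_fn`, the exponent `k`, and the shift `c` are hypothetical stand-ins for the paper's actual hyperparameters:

```python
import numpy as np

def residual_adaptive_sample(residual_fn, n_new, n_candidates=10000,
                             k=1.0, c=1.0, rng=None):
    """Draw new collocation points from a dense candidate pool with
    probability proportional to a shifted power of the PDE residual."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(0.0, 1.0, n_candidates)   # dense candidate pool in [0, 1]
    eps = np.abs(residual_fn(x)) ** k          # residual magnitude at candidates
    p = eps / eps.mean() + c                   # shifted sampling density
    p /= p.sum()                               # normalize to a probability mass
    idx = rng.choice(n_candidates, size=n_new, replace=False, p=p)
    return x[idx]

# toy residual concentrated near x = 0.5: samples cluster there
pts = residual_adaptive_sample(lambda x: np.exp(-200 * (x - 0.5) ** 2),
                               100, rng=0)
```

With a uniform residual this reduces to uniform resampling; as the residual becomes peaked, new points concentrate where the PINN currently fails.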

Predicting parametric spatiotemporal dynamics by multi-resolution PDE structure-preserved deep learning

1 code implementation · 9 May 2022 · Xin-Yang Liu, Hao Sun, Min Zhu, Lu Lu, Jian-Xun Wang

A more promising way is to leverage our prior physics knowledge in scientific deep learning models, known as physics-informed deep learning (PiDL).

Multifidelity deep neural operators for efficient learning of partial differential equations with application to fast inverse design of nanoscale heat transport

2 code implementations · 14 Apr 2022 · Lu Lu, Raphael Pestourie, Steven G. Johnson, Giuseppe Romano

Deep neural operators can learn operators mapping between infinite-dimensional function spaces via deep neural networks and have become an emerging paradigm of scientific machine learning.

Conjugate Gradient Adaptive Learning with Tukey's Biweight M-Estimate

no code implementations · 19 Mar 2022 · Lu Lu, Yi Yu, Rodrigo C. de Lamare, Xiaomin Yang

We propose a novel M-estimate conjugate gradient (CG) algorithm, termed Tukey's biweight M-estimate CG (TbMCG), for system identification in impulsive noise environments.
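A minimal sketch of the Tukey biweight weight function that such M-estimate adaptive algorithms build on; the threshold c = 4.685 is the value common in robust statistics, not necessarily the paper's setting:

```python
import numpy as np

def tukey_biweight_weight(e, c=4.685):
    """Tukey biweight weight w(e) = (1 - (e/c)^2)^2 for |e| <= c, else 0.
    Large (impulsive) errors receive zero weight, which is what makes
    M-estimate adaptive filters robust in impulsive noise environments."""
    w = np.zeros_like(e, dtype=float)
    inside = np.abs(e) <= c
    w[inside] = (1.0 - (e[inside] / c) ** 2) ** 2
    return w

e = np.array([0.0, 1.0, 10.0])
w = tukey_biweight_weight(e)   # small errors keep full weight, outliers get zero
```

In the adaptive-filter update, this weight rescales the error before the conjugate-gradient step, so a single impulsive sample cannot derail the weight vector.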

MIONet: Learning multiple-input operators via tensor product

2 code implementations · 12 Feb 2022 · Pengzhan Jin, Shuai Meng, Lu Lu

Based on our theory and a low-rank approximation, we propose a novel neural operator, MIONet, to learn multiple-input operators.
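The low-rank tensor-product combination can be illustrated with random linear maps standing in for trained subnetworks; this is a sketch of the combination rule only, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 16   # latent dimension shared by all subnetworks

# Stand-ins for trained branch nets (one per input function) and a trunk net.
# Each "net" here is just a random linear map for illustration.
W1, W2, Wt = (rng.normal(size=(p, 8)) for _ in range(3))

def branch1(u): return W1 @ u   # encodes input function 1
def branch2(v): return W2 @ v   # encodes input function 2
def trunk(y):   return Wt @ y   # encodes the output location

def mionet(u, v, y, b0=0.0):
    # Low-rank tensor product: elementwise product of all p-dimensional
    # embeddings, summed over the latent index, plus a bias.
    return np.sum(branch1(u) * branch2(v) * trunk(y)) + b0

out = mionet(rng.normal(size=8), rng.normal(size=8), rng.normal(size=8))
```

Each additional input function contributes one more factor in the elementwise product, which is what makes the construction extend naturally to multiple-input operators.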

Systems Biology: Identifiability analysis and parameter identification via systems-biology informed neural networks

2 code implementations · 3 Feb 2022 · Mitchell Daneker, Zhen Zhang, George Em Karniadakis, Lu Lu

The dynamics of systems biological processes are usually modeled by a system of ordinary differential equations (ODEs) with many unknown parameters that need to be inferred from noisy and sparse measurements.

Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems

2 code implementations · 1 Nov 2021 · Jeremy Yu, Lu Lu, Xuhui Meng, George Em Karniadakis

We tested gPINNs extensively and demonstrated the effectiveness of gPINNs in both forward and inverse PDE problems.
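The gradient-enhanced idea, loosely, is to penalize the derivative of the PDE residual alongside the residual itself. A toy sketch using finite differences in place of automatic differentiation; the weight `w_g` is an illustrative choice, not the paper's value:

```python
import numpy as np

def gpinn_loss(residual_fn, x, w_g=0.01, h=1e-4):
    """Gradient-enhanced PINN loss (sketch): penalize both the PDE residual
    and its spatial derivative. A real gPINN obtains the derivative by
    automatic differentiation; central differences stand in here."""
    r = residual_fn(x)                                    # residual at points x
    dr = (residual_fn(x + h) - residual_fn(x - h)) / (2 * h)  # d(residual)/dx
    return np.mean(r ** 2) + w_g * np.mean(dr ** 2)

x = np.linspace(0.0, 1.0, 101)
loss = gpinn_loss(np.sin, x)   # np.sin is a toy residual function
```

The extra term vanishes exactly when the residual is identically zero, so the minimizer of the plain PINN loss is unchanged while its neighborhood is shaped more sharply.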

Active noise control techniques for nonlinear systems

no code implementations · 19 Oct 2021 · Lu Lu, Kai-Li Yin, Rodrigo C. de Lamare, Zongsheng Zheng, Yi Yu, Xiaomin Yang, Badong Chen

Most of the literature focuses on the development of linear active noise control (ANC) techniques.

A survey on active noise control techniques -- Part I: Linear systems

no code implementations · 1 Oct 2021 · Lu Lu, Kai-Li Yin, Rodrigo C. de Lamare, Zongsheng Zheng, Yi Yu, Xiaomin Yang, Badong Chen

Active noise control (ANC) is an effective way for reducing the noise level in electroacoustic or electromechanical systems.
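As a rough illustration of the adaptive-filtering machinery behind linear ANC, a plain LMS canceller in numpy; practical ANC systems use the filtered-x LMS variant, which additionally models the secondary (loudspeaker-to-microphone) path, and all signals and the step size here are illustrative:

```python
import numpy as np

def lms_anc(noise_ref, primary, L=16, mu=0.01):
    """Plain LMS adaptive filter (sketch): estimate the primary noise from
    a reference signal and subtract the estimate, leaving a residual error
    that shrinks as the filter adapts to the unknown noise path."""
    w = np.zeros(L)                        # adaptive filter taps
    e = np.zeros(len(primary))             # residual (error-microphone) signal
    for n in range(L, len(primary)):
        x = noise_ref[n - L + 1:n + 1][::-1]   # most recent L reference samples
        y = w @ x                               # anti-noise estimate
        e[n] = primary[n] - y                   # residual after cancellation
        w += mu * e[n] * x                      # stochastic-gradient update
    return e, w

rng = np.random.default_rng(0)
ref = rng.normal(size=4000)
# primary noise = reference passed through an unknown 3-tap path
prim = np.convolve(ref, [0.5, -0.3, 0.2], mode="full")[:4000]
e, w = lms_anc(ref, prim)   # residual power decays as the filter converges
```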

Study of Proximal Normalized Subband Adaptive Algorithm for Acoustic Echo Cancellation

no code implementations · 14 Aug 2021 · Gang Guo, Yi Yu, Rodrigo C. de Lamare, Zongsheng Zheng, Lu Lu, Qiangming Cai

In addition, an adaptive approach for the choice of the thresholding parameter in the proximal step is also proposed based on the minimization of the mean square deviation.


One-shot learning for solution operators of partial differential equations

no code implementations · 6 Apr 2021 · Anran Jiao, Haiyang He, Rishikesh Ranade, Jay Pathak, Lu Lu

Discovering governing equations of a physical system, represented by partial differential equations (PDEs), from data is a central challenge in a variety of areas of science and engineering.


Physics-informed neural networks with hard constraints for inverse design

3 code implementations · 9 Feb 2021 · Lu Lu, Raphael Pestourie, Wenjie Yao, Zhicheng Wang, Francesc Verdugo, Steven G. Johnson

We achieve the same objective as conventional PDE-constrained optimization methods based on adjoint methods and numerical PDE solvers, but find that the design obtained from hPINN is often simpler and smoother for problems whose solution is not unique.
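A hard constraint can be imposed by construction rather than by a penalty term. A minimal sketch for Dirichlet boundary conditions on [0, 1], with `np.cos` standing in for a trained network (the specific ansatz is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def hard_constrained_u(nn, x, a=0.0, b=0.0):
    """Ansatz u(x) = a(1-x) + b x + x(1-x) * NN(x) on [0, 1]:
    the Dirichlet conditions u(0)=a and u(1)=b hold exactly for ANY
    network NN, so the optimizer never has to trade boundary accuracy
    off against the PDE loss."""
    return a * (1 - x) + b * x + x * (1 - x) * nn(x)

x = np.linspace(0.0, 1.0, 5)
u = hard_constrained_u(np.cos, x, a=1.0, b=2.0)   # np.cos stands in for a NN
```

The multiplier x(1-x) vanishes exactly at both endpoints, which is what guarantees the boundary values independently of the network's output.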

Mixed graphs with smallest eigenvalue greater than $-\frac{\sqrt{5}+1}{2}$

no code implementations · 24 Dec 2020 · Lu Lu, ZhenZhen Lou

The classical problem of characterizing the graphs with bounded eigenvalues may date back to the work of Smith in 1970.


Operator learning for predicting multiscale bubble growth dynamics

no code implementations · 23 Dec 2020 · Chensen Lin, Zhen Li, Lu Lu, Shengze Cai, Martin Maxey, George Em Karniadakis

Simulating and predicting multiscale problems that couple multiple physics and dynamics across many orders of spatiotemporal scales is a great challenge that has not been investigated systematically by deep neural networks (DNNs).


Point Adversarial Self Mining: A Simple Method for Facial Expression Recognition

no code implementations · 26 Aug 2020 · Ping Liu, Yuewei Lin, Zibo Meng, Lu Lu, Weihong Deng, Joey Tianyi Zhou, Yi Yang

In this paper, we propose a simple yet effective approach, named Point Adversarial Self Mining (PASM), to improve the recognition accuracy in facial expression recognition.


Physics-informed neural networks for inverse problems in nano-optics and metamaterials

1 code implementation · 2 Dec 2019 · Yuyao Chen, Lu Lu, George Em Karniadakis, Luca Dal Negro

In this paper we employ the emerging paradigm of physics-informed neural networks (PINNs) for the solution of representative inverse scattering problems in photonic metamaterials and nano-optics technologies.


DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators

2 code implementations · 8 Oct 2019 · Lu Lu, Pengzhan Jin, George Em Karniadakis

This universal approximation theorem is suggestive of the potential application of neural networks in learning nonlinear operators from data.
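The construction behind this operator-learning approach is a dot product between a branch embedding of the input function and a trunk embedding of the query location. A sketch with random linear maps standing in for the trained branch and trunk networks (the sensor count and latent dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 100, 32   # number of input-function sensors; latent dimension

# Random linear stand-ins for the trained branch and trunk networks.
Wb = rng.normal(size=(p, m))
Wt = rng.normal(size=(p, 1))

def deeponet(u_sensors, y, b0=0.0):
    """G(u)(y) ~ sum_k b_k(u) t_k(y) + b0: the branch net encodes the input
    function u sampled at m sensor points, the trunk net encodes the query
    point y, and their dot product gives the operator's output value."""
    b = Wb @ u_sensors            # branch embedding of the input function
    t = Wt @ np.atleast_1d(y)     # trunk embedding of the evaluation point
    return float(b @ t + b0)

u = np.sin(np.linspace(0, np.pi, m))   # an input function on the sensor grid
value = deeponet(u, 0.3)
```

With linear stand-ins and zero bias the output is linear in the input function, which makes the branch/trunk factorization easy to sanity-check; a trained DeepONet replaces both maps with deep networks.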

DeepXDE: A deep learning library for solving differential equations

5 code implementations · 10 Jul 2019 · Lu Lu, Xuhui Meng, Zhiping Mao, George Em Karniadakis

We also present a Python library for PINNs, DeepXDE, which is designed to serve both as an education tool to be used in the classroom as well as a research tool for solving problems in computational science and engineering.

Gated Multiple Feedback Network for Image Super-Resolution

1 code implementation · 9 Jul 2019 · Qilei Li, Zhen Li, Lu Lu, Gwanggil Jeon, Kai Liu, Xiaomin Yang

The rapid development of deep learning (DL) has driven single image super-resolution (SR) into a new era.


Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness

1 code implementation · 27 May 2019 · Pengzhan Jin, Lu Lu, Yifa Tang, George Em Karniadakis

To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of data distribution and neural network smoothness.

Dying ReLU and Initialization: Theory and Numerical Examples

no code implementations · 15 Mar 2019 · Lu Lu, Yeonjong Shin, Yanhui Su, George Em Karniadakis

Numerical examples are provided to demonstrate the effectiveness of the new initialization procedure.

How to Host a Data Competition: Statistical Advice for Design and Analysis of a Data Competition

no code implementations · 16 Jan 2019 · Christine M. Anderson-Cook, Kary L. Myers, Lu Lu, Michael L. Fugate, Kevin R. Quinlan, Norma Pawley

It also describes a post-competition analysis that enables robust and efficient assessment of the strengths and weaknesses of solutions from different competitors, as well as greater understanding of the regions of the input space that are well-solved.

Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems

no code implementations · 21 Sep 2018 · Dongkun Zhang, Lu Lu, Ling Guo, George Em Karniadakis

Here, we propose a new method with the objective of endowing the DNN with uncertainty quantification for both sources of uncertainty, i.e., the parametric uncertainty and the approximation uncertainty.


Collapse of Deep and Narrow Neural Nets

1 code implementation · ICLR 2019 · Lu Lu, Yanhui Su, George Em Karniadakis

However, here we show that even for such activation, deep and narrow neural networks (NNs) will converge to erroneous mean or median states of the target function depending on the loss with high probability.
