6 code implementations • 10 Jul 2019 • Lu Lu, Xuhui Meng, Zhiping Mao, George E. Karniadakis
We also present DeepXDE, a Python library for PINNs designed to serve both as an education tool for use in the classroom and as a research tool for solving problems in computational science and engineering.
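As a minimal sketch of the PINN idea that the library implements (in plain NumPy, not DeepXDE's actual API), the training objective combines a mean-squared PDE residual over collocation points with a boundary-condition misfit. The toy ODE u''(x) = -sin(x), its exact solution u = sin(x), and the candidate functions below are illustrative assumptions:

```python
import numpy as np

def pinn_style_loss(u, d2u, x, f, bc_target):
    """Toy PINN-style loss: mean-squared PDE residual plus boundary misfit
    for the ODE u''(x) = f(x) on [0, pi], with u and d2u given as callables."""
    residual = d2u(x) - f(x)                          # PDE residual at collocation points
    bc = np.array([u(0.0), u(np.pi)]) - bc_target     # boundary-condition misfit
    return np.mean(residual**2) + np.mean(bc**2)

x = np.linspace(0.0, np.pi, 101)
f = lambda t: -np.sin(t)

# Exact solution u = sin(x): residual and boundary misfit both vanish.
loss_exact = pinn_style_loss(np.sin, lambda t: -np.sin(t), x, f, np.array([0.0, 0.0]))
# Wrong candidate u = cos(x): both loss terms are nonzero.
loss_wrong = pinn_style_loss(np.cos, lambda t: -np.cos(t), x, f, np.array([0.0, 0.0]))
```

In DeepXDE itself, the residual and boundary terms are assembled automatically from a user-supplied PDE and geometry, with derivatives obtained by automatic differentiation rather than analytically as here.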
1 code implementation • 2 Dec 2019 • Yuyao Chen, Lu Lu, George Em Karniadakis, Luca Dal Negro
In this paper we employ the emerging paradigm of physics-informed neural networks (PINNs) for the solution of representative inverse scattering problems in photonic metamaterials and nano-optics technologies.
Computational Physics • Optics
4 code implementations • 9 Feb 2021 • Lu Lu, Raphael Pestourie, Wenjie Yao, Zhicheng Wang, Francesc Verdugo, Steven G. Johnson
We achieve the same objective as conventional PDE-constrained optimization methods based on adjoint methods and numerical PDE solvers, but find that the design obtained from hPINN is often simpler and smoother for problems whose solution is not unique.
2 code implementations • 1 Nov 2021 • Jeremy Yu, Lu Lu, Xuhui Meng, George Em Karniadakis
We tested gPINNs extensively and demonstrated the effectiveness of gPINNs in both forward and inverse PDE problems.
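The gradient-enhanced idea behind gPINN is that when the PDE residual is zero everywhere, its gradient is zero too, so the gradient of the residual can be added as an extra loss term. A minimal NumPy sketch; the toy residual values, their derivative, and the weight w_g are illustrative assumptions:

```python
import numpy as np

def gpinn_loss(residual, dresidual_dx, w_g=0.1):
    """gPINN-style loss: residual MSE plus a weighted MSE of the
    residual's spatial derivative, enforcing residual flatness as well."""
    return np.mean(residual**2) + w_g * np.mean(dresidual_dx**2)

x = np.linspace(0.0, 1.0, 50)
r = x**2 - x          # hypothetical residual values at collocation points
dr = 2 * x - 1        # its derivative with respect to x
loss = gpinn_loss(r, dr)
```

In an actual PINN both terms would be computed via automatic differentiation of the network output; only the extra gradient term distinguishes the gPINN loss from the plain PINN loss.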
2 code implementations • 12 Feb 2022 • Pengzhan Jin, Shuai Meng, Lu Lu
Based on our theory and a low-rank approximation, we propose a novel neural operator, MIONet, to learn multiple-input operators.
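The low-rank construction can be sketched as follows: each input function gets its own branch net, the branch embeddings are merged by a Hadamard (element-wise) product, and the result is contracted against a trunk net evaluated at the output location. The layer sizes, random weights, and single-linear-layer "nets" below are placeholder assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 100, 16                               # sensor count and latent rank (illustrative)

def branch(v, W):
    """Placeholder one-layer branch net encoding a sampled input function."""
    return np.tanh(W @ v)

W1, W2 = rng.normal(size=(p, m)), rng.normal(size=(p, m))
Wt = rng.normal(size=(p, 1))                 # placeholder trunk net weights

v1 = np.sin(np.linspace(0, 1, m))            # samples of the first input function
v2 = np.cos(np.linspace(0, 1, m))            # samples of the second input function
y = np.array([0.5])                          # output location

b = branch(v1, W1) * branch(v2, W2)          # Hadamard product of branch embeddings
t = np.tanh(Wt @ y)                          # trunk embedding of the location
G_vy = float(b @ t)                          # predicted operator output G(v1, v2)(y)
```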
2 code implementations • 14 Apr 2022 • Lu Lu, Raphael Pestourie, Steven G. Johnson, Giuseppe Romano
Deep neural operators can learn operators mapping between infinite-dimensional function spaces via deep neural networks and have become an emerging paradigm of scientific machine learning.
2 code implementations • 21 Jul 2022 • Chenxi Wu, Min Zhu, Qinyang Tan, Yadhu Kartha, Lu Lu
Hence, we considered a total of 10 different sampling methods: six non-adaptive uniform sampling methods, uniform sampling with resampling, two proposed adaptive sampling methods, and an existing adaptive sampling method.
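A residual-based adaptive strategy of this kind draws new collocation points with probability proportional to a power of the PDE residual plus a uniform floor. The sketch below (1-D domain, made-up residual function, exponent k and floor c as free parameters) is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

def adaptive_resample(residual_fn, n_new, n_candidates=10_000, k=1.0, c=1.0, seed=0):
    """Draw n_new collocation points from [0, 1] with sampling density
    proportional to |residual|^k / mean(|residual|^k) + c."""
    rng = np.random.default_rng(seed)
    cand = rng.uniform(0.0, 1.0, n_candidates)
    r = np.abs(residual_fn(cand))**k
    prob = r / r.mean() + c          # uniform floor c keeps all regions reachable
    prob /= prob.sum()
    return rng.choice(cand, size=n_new, replace=False, p=prob)

# Hypothetical residual peaked near x = 0.8: new points should cluster there.
pts = adaptive_resample(lambda x: np.exp(-200 * (x - 0.8)**2), n_new=500)
```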
1 code implementation • 13 Dec 2022 • Min Zhu, Handi Zhang, Anran Jiao, George Em Karniadakis, Lu Lu
Deep neural operators can learn nonlinear mappings between infinite-dimensional function spaces via deep neural networks.
1 code implementation • 8 Mar 2023 • Zhongyi Jiang, Min Zhu, Dongzhuo Li, Qiuzi Li, Yanhua O. Yuan, Lu Lu
Here, we develop a Fourier-enhanced multiple-input neural operator (Fourier-MIONet) to learn the solution operator of the problem of multiphase flow in porous media.
1 code implementation • 26 May 2023 • Min Zhu, Shihang Feng, Youzuo Lin, Lu Lu
Here, we develop a Fourier-enhanced deep operator network (Fourier-DeepONet) for FWI with the generalization of seismic sources, including the frequencies and locations of sources.
1 code implementation • 20 Oct 2023 • Changli Tang, Wenyi Yu, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Chao Zhang
Hearing is arguably an essential ability of artificial intelligence (AI) agents in the physical world, which refers to the perception and understanding of general auditory information consisting of at least three types of sounds: speech, audio events, and music.
4 code implementations • 8 Oct 2019 • Lu Lu, Pengzhan Jin, George Em Karniadakis
This universal approximation theorem is suggestive of the potential application of neural networks in learning nonlinear operators from data.
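The architecture built on this theorem, DeepONet, outputs a dot product between a branch net applied to samples of the input function and a trunk net applied to the query location. A minimal NumPy forward pass; the single-linear-layer "nets", random weights, and sizes are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 100, 20                       # number of sensors and latent width (illustrative)
Wb = rng.normal(size=(p, m))         # stand-in weights for the branch net
Wt = rng.normal(size=(p, 1))         # stand-in weights for the trunk net

def deeponet(u_sensors, y):
    """G(u)(y) ~ <branch(u), trunk(y)>: the DeepONet inner-product form."""
    b = np.tanh(Wb @ u_sensors)              # branch embedding of the input function
    t = np.tanh(Wt @ np.atleast_1d(y))       # trunk embedding of the query location
    return float(b @ t)

u = np.sin(np.linspace(0, 2 * np.pi, m))     # input function sampled at the sensors
pred = deeponet(u, 0.3)
```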
1 code implementation • 15 Jun 2023 • Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu
In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry.
1 code implementation • 9 Jul 2019 • Qilei Li, Zhen Li, Lu Lu, Gwanggil Jeon, Kai Liu, Xiaomin Yang
The rapid development of deep learning (DL) has driven single image super-resolution (SR) into a new era.
Ranked #18 on Image Super-Resolution on BSD100 - 4x upscaling
1 code implementation • 26 Apr 2023 • Bing Wang, Xinnian Liang, Jian Yang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, Zhoujun Li
Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information.
2 code implementations • 3 Feb 2022 • Mitchell Daneker, Zhen Zhang, George Em Karniadakis, Lu Lu
The dynamics of systems biological processes are usually modeled by a system of ordinary differential equations (ODEs) with many unknown parameters that need to be inferred from noisy and sparse measurements.
1 code implementation • 9 May 2022 • Xin-Yang Liu, Min Zhu, Lu Lu, Hao Sun, Jian-Xun Wang
Traditional data-driven deep learning models often struggle with high training costs, error accumulation, and poor generalizability in complex physical processes.
1 code implementation • 8 Aug 2023 • Honghui Wang, Lu Lu, Shiji Song, Gao Huang
To avoid the inefficient manual selection and to alleviate the optimization difficulty of PINNs, we introduce adaptive activation functions to search for the optimal function when solving different problems.
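One simple example of an adaptive activation from the broader PINN literature (not necessarily the search mechanism of this particular paper) makes the activation's slope a trainable parameter, sigma(n·a·x); the scaling factor n and values of a below are illustrative:

```python
import numpy as np

def adaptive_tanh(x, a, n=10):
    """Adaptive activation: a trainable slope a rescales the input before tanh.
    In a PINN, a is optimized jointly with the network weights."""
    return np.tanh(n * a * x)

x = np.linspace(-1.0, 1.0, 5)
steep = adaptive_tanh(x, a=1.0)    # large slope: near-saturating activation
flat = adaptive_tanh(x, a=0.01)    # small slope: nearly linear activation
```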
1 code implementation • 14 Jun 2023 • Xinghua Qu, Hongyang Liu, Zhu Sun, Xiang Yin, Yew Soon Ong, Lu Lu, Zejun Ma
Conversational recommender systems (CRSs) have become a crucial emerging research topic in the field of recommender systems (RSs), thanks to their natural advantage of explicitly acquiring user preferences via interactive conversations and revealing the reasons behind recommendations.
2 code implementations • 9 Oct 2023 • Guangzhi Sun, Wenyi Yu, Changli Tang, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Chao Zhang
Audio-visual large language models (LLMs) have drawn significant attention, yet the fine-grained combination of both input streams is rather under-explored, which is challenging but necessary for LLMs to understand general video inputs.
1 code implementation • 18 May 2023 • Shunyuan Mao, Ruobing Dong, Lu Lu, Kwang Moo Yi, Sifan Wang, Paris Perdikaris
We develop a tool, which we name Protoplanetary Disk Operator Network (PPDONet), that can predict the solution of disk-planet interactions in protoplanetary disks in real-time.
1 code implementation • 27 May 2019 • Pengzhan Jin, Lu Lu, Yifa Tang, George Em Karniadakis
To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of data distribution and neural network smoothness.
1 code implementation • ICLR 2019 • Lu Lu, Yanhui Su, George Em Karniadakis
However, here we show that even for such activation, deep and narrow neural networks (NNs) will converge to erroneous mean or median states of the target function depending on the loss with high probability.
no code implementations • 21 Sep 2018 • Dongkun Zhang, Lu Lu, Ling Guo, George Em Karniadakis
Here, we propose a new method with the objective of endowing the DNN with uncertainty quantification for both sources of uncertainty, i.e., the parametric uncertainty and the approximation uncertainty.
no code implementations • 16 Jan 2019 • Christine M. Anderson-Cook, Kary L. Myers, Lu Lu, Michael L. Fugate, Kevin R. Quinlan, Norma Pawley
It also describes a post-competition analysis that enables robust and efficient assessment of the strengths and weaknesses of solutions from different competitors, as well as greater understanding of the regions of the input space that are well-solved.
no code implementations • 15 Mar 2019 • Lu Lu, Yeonjong Shin, Yanhui Su, George Em Karniadakis
Numerical examples are provided to demonstrate the effectiveness of the new initialization procedure.
1 code implementation • 27 Oct 2019 • Andreagiovanni Reina, Viktor Ioannou, Junjin Chen, Lu Lu, Charles Kent, James A. R. Marshall
Will the Third World War be fought by robots?
no code implementations • 26 Aug 2020 • Ping Liu, Yuewei Lin, Zibo Meng, Lu Lu, Weihong Deng, Joey Tianyi Zhou, Yi Yang
In this paper, we propose a simple yet effective approach, named Point Adversarial Self Mining (PASM), to improve the recognition accuracy in facial expression recognition.
no code implementations • 23 Dec 2020 • Chensen Lin, Zhen Li, Lu Lu, Shengze Cai, Martin Maxey, George Em Karniadakis
Simulating and predicting multiscale problems that couple multiple physics and dynamics across many orders of spatiotemporal scales is a great challenge that has not been investigated systematically by deep neural networks (DNNs).
Computational Physics
no code implementations • 24 Dec 2020 • Lu Lu, ZhenZhen Lou
The classical problem of characterizing the graphs with bounded eigenvalues may date back to the work of Smith in 1970.
Combinatorics 05C50
no code implementations • 6 Apr 2021 • Anran Jiao, Haiyang He, Rishikesh Ranade, Jay Pathak, Lu Lu
Discovering governing equations of a physical system, represented by partial differential equations (PDEs), from data is a central challenge in a variety of areas of science and engineering.
no code implementations • 14 Aug 2021 • Gang Guo, Yi Yu, Rodrigo C. de Lamare, Zongsheng Zheng, Lu Lu, Qiangming Cai
In addition, an adaptive approach for the choice of the thresholding parameter in the proximal step is also proposed based on the minimization of the mean square deviation.
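In sparsity-aware adaptive filtering, the proximal step with a thresholding parameter is commonly the soft-thresholding operator (the proximal map of the l1 norm); whether this matches the paper's exact proximal map is an assumption, but it illustrates the role of the parameter the adaptive rule selects:

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_1: shrink entries toward zero
    and zero out any entry with magnitude below tau."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

w = np.array([0.8, -0.05, 0.3, -0.6, 0.02])   # illustrative filter coefficients
w_sparse = soft_threshold(w, tau=0.1)          # small entries are zeroed out
```

A larger tau yields a sparser estimate at the cost of more shrinkage bias, which is why choosing it adaptively (here, by minimizing the mean square deviation) matters.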
no code implementations • 1 Oct 2021 • Lu Lu, Kai-Li Yin, Rodrigo C. de Lamare, Zongsheng Zheng, Yi Yu, Xiaomin Yang, Badong Chen
Active noise control (ANC) is an effective way for reducing the noise level in electroacoustic or electromechanical systems.
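At the core of many ANC algorithms is an LMS-style adaptive filter that updates its weights from the residual error. The sketch below shows plain LMS identifying an unknown FIR path; the path coefficients and step size are illustrative, and practical ANC systems use the filtered-x (FxLMS) variant to account for the secondary path:

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Plain LMS: adapt FIR weights w so that w . x[n] tracks the desired d[n]."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # most recent samples first
        e = d[n] - w @ xn                    # residual error
        w += mu * e * xn                     # stochastic-gradient weight update
    return w

rng = np.random.default_rng(2)
x = rng.normal(size=5000)                    # reference noise signal
h = np.array([0.5, -0.3, 0.2, 0.1])          # unknown path (illustrative)
d = np.convolve(x, h)[:len(x)]               # noise observed at the error sensor
w = lms(x, d)                                # w converges toward h
```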
no code implementations • 19 Oct 2021 • Lu Lu, Kai-Li Yin, Rodrigo C. de Lamare, Zongsheng Zheng, Yi Yu, Xiaomin Yang, Badong Chen
Most of the literature focuses on the development of the linear active noise control (ANC) techniques.
no code implementations • 19 Mar 2022 • Lu Lu, Yi Yu, Rodrigo C. de Lamare, Xiaomin Yang
We propose a novel M-estimate conjugate gradient (CG) algorithm, termed Tukey's biweight M-estimate CG (TbMCG), for system identification in impulsive noise environments.
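Tukey's biweight M-estimate is robust to impulsive noise because its weight function decays smoothly and cuts off entirely beyond a threshold, so outliers contribute nothing to the update. A NumPy sketch of the weight function; the threshold c = 4.685 is the value commonly used for Gaussian efficiency, taken here as an assumption:

```python
import numpy as np

def tukey_biweight_weight(e, c=4.685):
    """Tukey's biweight weight: (1 - (e/c)^2)^2 for |e| <= c, zero outside."""
    w = np.zeros_like(e, dtype=float)
    inside = np.abs(e) <= c
    w[inside] = (1.0 - (e[inside] / c)**2)**2
    return w

errors = np.array([0.0, 1.0, 4.0, 10.0])   # the last value is an impulsive outlier
weights = tukey_biweight_weight(errors)     # the outlier receives zero weight
```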
no code implementations • 19 Apr 2022 • Christopher Hazard, Akshay Bhagat, Balarama Raju Buddharaju, Zhongtao Liu, Yunming Shao, Lu Lu, Sammy Omari, Henggang Cui
Trajectory prediction is an important task in autonomous driving.
no code implementations • CSRNLP (LREC) 2022 • Lu Lu, Jinghang Gu, Chu-Ren Huang
Inclusion, as one of the foundations in the diversity, equity, and inclusion initiative, concerns the degree of being treated as an ingroup member in a workplace.
no code implementations • 28 Oct 2022 • Yist Y. Lin, Tao Han, HaiHua Xu, Van Tung Pham, Yerbolat Khassanov, Tze Yuang Chong, Yi He, Lu Lu, Zejun Ma
One limitation of the end-to-end automatic speech recognition (ASR) framework is that its performance can be compromised if train and test utterance lengths are mismatched.
no code implementations • 29 Mar 2023 • Lu Lu, Yi Yu, Zongsheng Zheng, Guangya Zhu, Xiaomin Yang
Two Andrew's sine estimator (ASE)-based robust adaptive filtering algorithms are proposed in this brief.
no code implementations • 12 May 2023 • Jie Xu, Lu Lu, Sen Yang, Bilin Liang, Xinwei Peng, Jiali Pang, Jinru Ding, Xiaoming Shi, Lingrui Yang, Huan Song, Kang Li, Xin Sun, Shaoting Zhang
The responses generated by chatbots based on LLMs are recorded for blind evaluations by five licensed medical experts.
no code implementations • 5 May 2023 • Benjamin Fan, Edward Qiao, Anran Jiao, Zhouzhou Gu, Wenhao Li, Lu Lu
We develop a methodology that utilizes deep learning to simultaneously solve and estimate canonical continuous-time general equilibrium models in financial economics.
no code implementations • 27 May 2023 • Linhao Dong, Zhecheng An, Peihao Wu, Jun Zhang, Lu Lu, Zejun Ma
We also observe the cross-modal representation extracted by CIF-PT obtains better performance than other neural interfaces for the tasks of SLU, including the dominant speech representation learned from self-supervised pre-training.
no code implementations • 5 Jun 2023 • Qianqian Dong, Zhiying Huang, Qiao Tian, Chen Xu, Tom Ko, Yunlong Zhao, Siyuan Feng, Tang Li, Kexin Wang, Xuxin Cheng, Fengpeng Yue, Ye Bai, Xi Chen, Lu Lu, Zejun Ma, Yuping Wang, Mingxuan Wang, Yuxuan Wang
For the speech synthesis part, we adopt the existing VALL-E X approach and build a unit-based audio language model.
no code implementations • 7 Jun 2023 • Lu Huang, Boyu Li, Jun Zhang, Lu Lu, Zejun Ma
Domain adaptation using a text-only corpus is challenging in end-to-end (E2E) speech recognition.
no code implementations • 15 Aug 2023 • Xiaoming Shi, Jie Xu, Jinru Ding, Jiali Pang, Sichen Liu, Shuqing Luo, Xingwei Peng, Lu Lu, Haihong Yang, Mingtao Hu, Tong Ruan, Shaoting Zhang
Despite their alluring technological potential, there is no unified and comprehensive evaluation criterion, leading to the inability to evaluate the quality and potential risks of medical LLMs, further hindering the application of LLMs in medical treatment scenarios.
no code implementations • 25 Sep 2023 • Wenyi Yu, Changli Tang, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Chao Zhang
Q-Former-based LLMs can generalise well to out-of-domain datasets, where 12% relative WER reductions over the Whisper baseline ASR model were achieved on the Eval2000 test set without using any in-domain training data from Switchboard.
Automatic Speech Recognition (ASR) +3
no code implementations • 29 Oct 2023 • Zecheng Zhang, Christian Moya, Lu Lu, Guang Lin, Hayden Schaeffer
Neural operators have been applied in various scientific fields, such as solving parametric partial differential equations, dynamical systems with control, and inverse problems.
no code implementations • 30 Oct 2023 • Huiyao Shu, Ang Wang, Ziji Shi, Hanyu Zhao, Yong Li, Lu Lu
However, a memory-efficient execution plan that includes a reasonable operator execution order and tensor memory layout can significantly increase the models' memory efficiency and reduce overheads from high-level techniques.
no code implementations • 15 Nov 2023 • Jin Qiu, Lu Huang, Boyu Li, Jun Zhang, Lu Lu, Zejun Ma
Deep biasing for the Transducer can improve the recognition performance of rare words or contextual entities, which is essential in practical applications, especially for streaming Automatic Speech Recognition (ASR).
Automatic Speech Recognition (ASR) +1
no code implementations • 30 Nov 2023 • Simin Zheng, Lu Lu, Yili Hong, Jian Liu
This paper aims to fill in this gap by developing statistical methods for planning AV reliability assurance tests based on recurrent events data.
no code implementations • 5 Jan 2024 • Dongdi Zhao, Jianbo Ma, Lu Lu, Jinke Li, Xuan Ji, Lei Zhu, Fuming Fang, Ming Liu, Feijun Jiang
Far-field speech recognition is a challenging task that conventionally uses signal-processing beamforming to combat noise and interference.
no code implementations • 22 Jan 2024 • Xianghu Yue, Xiaohai Tian, Lu Lu, Malu Zhang, Zhizheng Wu, Haizhou Li
To bridge the gap between modalities, CoAVT employs a query encoder, which contains a set of learnable query embeddings, and extracts the most informative audiovisual features of the corresponding text.
no code implementations • 30 Jan 2024 • Joel Hayford, Jacob Goldman-Wetzler, Eric Wang, Lu Lu
Scientific machine learning (SciML) has emerged as a versatile approach to address complex computational science and engineering problems.
no code implementations • 2 Feb 2024 • Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, Madeleine Udell
This paper explores challenges in training Physics-Informed Neural Networks (PINNs), emphasizing the role of the loss landscape in the training process.
no code implementations • 11 Feb 2024 • Minglang Yin, Nicolas Charon, Ryan Brody, Lu Lu, Natalia Trayanova, Mauro Maggioni
DIMON is based on transporting a given problem (initial/boundary conditions and domain $\Omega_{\theta}$) to a problem on a reference domain $\Omega_{0}$, where training data from multiple problems is used to learn the map to the solution on $\Omega_{0}$, which is then re-mapped to the original domain $\Omega_{\theta}$.
no code implementations • 23 Feb 2024 • Christian Moya, Amirhossein Mollaali, Zecheng Zhang, Lu Lu, Guang Lin
In this paper, we adopt conformal prediction, a distribution-free uncertainty quantification (UQ) framework, to obtain confidence prediction intervals with coverage guarantees for Deep Operator Network (DeepONet) regression.
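Split conformal prediction wraps any trained regressor: compute absolute residuals on a held-out calibration set, take a finite-sample-corrected quantile, and widen every new prediction by that margin, giving distribution-free marginal coverage. A NumPy sketch with synthetic data standing in for DeepONet predictions:

```python
import numpy as np

def conformal_interval(pred_cal, y_cal, pred_test, alpha=0.1):
    """Split conformal prediction: symmetric intervals with ~(1 - alpha)
    marginal coverage, built from calibration-set nonconformity scores."""
    scores = np.abs(y_cal - pred_cal)                 # calibration residuals
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n      # finite-sample correction
    q = np.quantile(scores, min(q_level, 1.0), method="higher")
    return pred_test - q, pred_test + q

rng = np.random.default_rng(3)
y_cal = rng.normal(size=1000)                         # toy calibration targets
pred_cal = y_cal + 0.2 * rng.normal(size=1000)        # imperfect model predictions
lo, hi = conformal_interval(pred_cal, y_cal, pred_test=np.array([0.0]))
```

The guarantee is distribution-free but marginal: roughly 90% of test points fall inside their intervals on average, with no per-point statement.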