no code implementations • 18 Feb 2024 • Quanjun Lang, Jianfeng Lu
We introduce a novel approach for learning memory kernels in Generalized Langevin Equations.
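The snippet only summarizes the setting; as illustration, here is a minimal simulation sketch of a one-dimensional Generalized Langevin Equation with an *assumed* exponential memory kernel K(t) = e^{-t} (the paper concerns learning the kernel from data; the kernel, discretization, and noise level below are illustrative choices, not the authors' method).

```python
import numpy as np

def simulate_gle(T=5.0, dt=0.01, v0=1.0, seed=0):
    """Explicit Euler simulation of dv/dt = -∫₀ᵗ K(t-s) v(s) ds + noise,
    with an assumed memory kernel K(t) = exp(-t)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    K = np.exp(-t)  # assumed (hypothetical) memory kernel
    v = np.zeros(n)
    v[0] = v0
    for i in range(1, n):
        # memory term: sum over past states, K(t_i - t_j) v(t_j) dt
        mem = np.dot(K[1:i + 1][::-1], v[:i]) * dt
        noise = 0.1 * rng.standard_normal() * np.sqrt(dt)
        v[i] = v[i - 1] - dt * mem + noise
    return t, v
```

Learning the kernel would amount to the inverse task: recovering K from observed trajectories (t, v) rather than prescribing it.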
no code implementations • 13 Feb 2024 • Quanjun Lang, Xiong Wang, Fei Lu, Mauro Maggioni
Modeling multi-agent systems on networks is a fundamental challenge in a wide variety of disciplines.
no code implementations • 18 May 2023 • Quanjun Lang, Fei Lu
We establish a small noise analysis framework to assess the effects of norms in Tikhonov and RKHS regularizations, in the context of ill-posed linear inverse problems with Gaussian noise.
no code implementations • 29 Dec 2022 • Neil K. Chada, Quanjun Lang, Fei Lu, Xiong Wang
However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator.
no code implementations • 8 Mar 2022 • Fei Lu, Quanjun Lang, Qingci An
We present DARTR: a Data Adaptive RKHS Tikhonov Regularization method for the linear inverse problem of nonparametric learning of function parameters in operators.
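For context, a plain Tikhonov regularizer for a linear inverse problem Ax ≈ b looks as follows; this uses the standard L2 penalty, not the data-adaptive RKHS norm that distinguishes DARTR, and the ill-conditioned matrix and noise level are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Standard Tikhonov solution x_λ = argmin ||Ax - b||² + λ||x||²,
    computed via the normal equations (AᵀA + λI) x = Aᵀb."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Illustrative ill-conditioned problem with small observation noise.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 50), 8, increasing=True)  # ill-conditioned
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-3 * rng.standard_normal(50)
x_hat = tikhonov(A, b, lam=1e-6)
```

DARTR replaces the identity-weighted penalty ||x||² with a norm adapted to the data, which is what the abstract refers to.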
no code implementations • 10 Jun 2021 • Quanjun Lang, Fei Lu
This study examines the identifiability of interaction kernels in mean-field equations of interacting particles or agents, an area of growing interest across various scientific and engineering fields.
no code implementations • 29 Oct 2020 • Quanjun Lang, Fei Lu
We introduce a nonparametric algorithm to learn interaction kernels of mean-field equations for first-order systems of interacting particles.
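The forward model behind such learning problems can be sketched as below: a first-order system dx_i/dt = (1/N) Σ_j φ(|x_j − x_i|)(x_j − x_i) simulated with an *assumed* true kernel φ(r) = e^{-r}. The nonparametric estimator itself (regularized least squares over a hypothesis space) is more involved and is not reproduced here.

```python
import numpy as np

def simulate(N=20, T=1.0, dt=0.01, seed=1):
    """Euler simulation of a 1D first-order interacting particle system
    with an assumed attractive kernel φ(r) = exp(-r)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(N)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        diff = x[None, :] - x[:, None]          # pairwise x_j - x_i
        phi = np.exp(-np.abs(diff))             # assumed kernel φ(|x_j - x_i|)
        x = x + dt * (phi * diff).mean(axis=1)  # mean-field drift
        traj.append(x.copy())
    return np.array(traj)
```

With an attractive kernel the particles contract toward consensus; the inverse problem is to recover φ nonparametrically from such trajectory data.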