no code implementations • 11 Nov 2024 • Zecheng Zhang, Christian Moya, Lu Lu, Guang Lin, Hayden Schaeffer
We propose a novel fine-tuning method to achieve multi-operator learning through training a distributed neural operator with diverse function data and then zero-shot fine-tuning the neural network using physics-informed losses for downstream tasks.
no code implementations • 10 Nov 2024 • Jiahao Zhang, Christian Moya, Guang Lin
Optimizing the learning rate remains a critical challenge in machine learning, essential for achieving model stability and efficient convergence.
1 code implementation • 4 Nov 2024 • Haoyang Zheng, Guang Lin
It also effectively handles unbounded growth functions and accumulated numerical errors in the Laplace domain, thereby overcoming challenges in the identification process.
no code implementations • 31 Oct 2024 • Amirhossein Mollaali, Gabriel Zufferey, Gonzalo Constante-Flores, Christian Moya, Can Li, Guang Lin, Meng Yue
This paper proposes a new data-driven methodology for predicting intervals of post-fault voltage trajectories in power systems.
no code implementations • 20 Oct 2024 • Gavin Ruan, Ziqi Guo, Guang Lin
In this work, we introduce a novel two-level optimization framework, which utilizes the K-Medoids clustering algorithm in conjunction with the Open-Source Routing Machine engine, to optimize food bank and pantry locations based on real road distances to houses and house blocks.
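The clustering half of this two-level framework can be sketched as follows. This is a toy illustration with a synthetic distance matrix; in the paper the pairwise distances come from real road networks via the Open-Source Routing Machine, and the second optimization level is omitted here.

```python
import numpy as np

def k_medoids(dist, k, n_iter=100, seed=0):
    """Alternating K-Medoids on a precomputed pairwise distance matrix."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        # Assign each point to its nearest medoid.
        labels = np.argmin(dist[:, medoids], axis=1)
        # Move each medoid to the member minimizing total in-cluster distance.
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Toy example: six sites on a line form two obvious clusters.
pts = np.array([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])
dist = np.abs(pts[:, None] - pts[None, :])
medoids, labels = k_medoids(dist, k=2)
```

Because K-Medoids only needs the distance matrix, any road-distance engine can be plugged in without changing the clustering code.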
no code implementations • 9 Oct 2024 • Rajdeep Haldar, Yue Xing, Qifan Song, Guang Lin
We argue that clean training suffers from poor convergence in the off-manifold direction, caused by ill-conditioning in widely used first-order optimizers such as gradient descent.
no code implementations • 9 Jul 2024 • Jiajun Liang, Qian Zhang, Wei Deng, Qifan Song, Guang Lin
This work introduces a novel and efficient Bayesian federated learning algorithm, namely, the Federated Averaging stochastic Hamiltonian Monte Carlo (FA-HMC), for parameter estimation and uncertainty quantification.
no code implementations • 24 May 2024 • Guang Lin, Qibin Zhao
In this paper, we introduce a novel defense technique named Large LAnguage MOdel Sentinel (LLAMOS), which is designed to enhance the adversarial robustness of LLMs by purifying the adversarial textual examples before feeding them into the target LLM.
1 code implementation • 13 May 2024 • Haoyang Zheng, Hengrong Du, Qi Feng, Wei Deng, Guang Lin
Replica exchange stochastic gradient Langevin dynamics (reSGLD) is an effective sampler for non-convex learning in large-scale datasets.
no code implementations • 24 Mar 2024 • Guang Lin, Zerui Tao, Jianhai Zhang, Toshihisa Tanaka, Qibin Zhao
We propose a novel robust reverse process with adversarial guidance, which is independent of given pre-trained DMs and avoids retraining or fine-tuning the DMs.
no code implementations • 23 Feb 2024 • Christian Moya, Amirhossein Mollaali, Zecheng Zhang, Lu Lu, Guang Lin
In this paper, we adopt conformal prediction, a distribution-free uncertainty quantification (UQ) framework, to obtain confidence prediction intervals with coverage guarantees for Deep Operator Network (DeepONet) regression.
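Split conformal prediction wraps any point predictor and yields finite-sample marginal coverage. Here is a minimal sketch with a polynomial least-squares fit standing in for a DeepONet (that substitution is my assumption; the coverage guarantee holds regardless of the underlying model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data; a trained DeepONet would replace the fit below.
x = rng.uniform(-1, 1, size=(600, 1))
y = np.sin(3 * x[:, 0]) + 0.1 * rng.normal(size=600)

# Split the data: fit on one half, calibrate on the other.
x_fit, y_fit = x[:300], y[:300]
x_cal, y_cal = x[300:], y[300:]

coef = np.polyfit(x_fit[:, 0], y_fit, deg=5)
predict = lambda z: np.polyval(coef, z[:, 0])

# Calibration: a quantile of absolute residuals gives the interval half-width.
alpha = 0.1  # target 90% coverage
scores = np.abs(y_cal - predict(x_cal))
n = len(scores)
level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
q = np.quantile(scores, level, method="higher")

# Intervals [pred - q, pred + q] have marginal coverage >= 1 - alpha.
x_test = rng.uniform(-1, 1, size=(500, 1))
y_test = np.sin(3 * x_test[:, 0]) + 0.1 * rng.normal(size=500)
pred = predict(x_test)
coverage = np.mean((y_test >= pred - q) & (y_test <= pred + q))
```

The guarantee is distribution-free: only exchangeability of calibration and test points is required, which is what makes the framework attractive for operator-learning surrogates.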
1 code implementation • 29 Jan 2024 • Guang Lin, Chao Li, Jianhai Zhang, Toshihisa Tanaka, Qibin Zhao
Deep neural networks are known to be vulnerable to well-designed adversarial attacks.
1 code implementation • 22 Jan 2024 • Haoyang Zheng, Wei Deng, Christian Moya, Guang Lin
Approximate Thompson sampling with Langevin Monte Carlo broadens its reach from Gaussian posterior sampling to encompass more general smooth posteriors.
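The core idea can be sketched on a conjugate Gaussian bandit, where the exact posterior is known, so unadjusted Langevin steps can stand in for exact posterior sampling. The step-size schedule and step counts below are my assumptions for a stable toy demo, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.8])  # toy setup: arm 2 is best
K, T = 3, 2000
counts = np.zeros(K)
sums = np.zeros(K)
theta = np.zeros(K)  # persistent Langevin chain state, one per arm

def langevin_sample(arm, steps=30):
    """Unadjusted Langevin steps on one arm's Gaussian posterior
    (N(0,1) prior, unit-variance rewards)."""
    x = theta[arm]
    eta = 0.5 / (counts[arm] + 1.0)  # shrink the step as data accumulates
    for _ in range(steps):
        grad = sums[arm] - (counts[arm] + 1.0) * x  # grad of log posterior
        x = x + eta * grad + np.sqrt(2 * eta) * rng.normal()
    theta[arm] = x
    return x

for t in range(T):
    # Thompson sampling: draw one approximate posterior sample per arm,
    # then play the arm whose sample is largest.
    samples = [langevin_sample(a) for a in range(K)]
    a = int(np.argmax(samples))
    r = true_means[a] + rng.normal()
    counts[a] += 1
    sums[a] += r
```

After enough rounds the chain concentrates on the best arm while the injected Langevin noise keeps a vanishing amount of exploration alive.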
no code implementations • 14 Dec 2023 • YiKai Liu, Tushar K. Ghosh, Guang Lin, Ming Chen
Biased enhanced sampling methods utilizing collective variables (CVs) are powerful tools for sampling conformational ensembles.
no code implementations • 28 Nov 2023 • Zhihao Kong, Amirhossein Mollaali, Christian Moya, Na Lu, Guang Lin
This work redesigns MIONet, integrating Long Short-Term Memory (LSTM) to learn neural operators from time-dependent data.
1 code implementation • 10 Nov 2023 • Jinwon Sohn, Qifan Song, Guang Lin
As data-driven decision processes become dominant in industrial applications, fairness-aware machine learning has attracted great attention in various areas.
no code implementations • 7 Nov 2023 • Amirhossein Mollaali, Izzet Sahin, Iqrar Raza, Christian Moya, Guillermo Paniagua, Guang Lin
To address this challenge, this paper proposes a deep operator learning-based framework that requires a limited high-fidelity dataset for training.
no code implementations • 29 Oct 2023 • Zecheng Zhang, Christian Moya, Lu Lu, Guang Lin, Hayden Schaeffer
Neural operators have been applied in various scientific fields, such as solving parametric partial differential equations, dynamical systems with control, and inverse problems.
no code implementations • 3 Oct 2023 • YiKai Liu, Ming Chen, Guang Lin
Despite recent progress in data-driven backmapping approaches, devising a backmapping method that can be universally applied across various CG models and proteins remains unresolved.
no code implementations • 7 Sep 2023 • Shiheng Zhang, Jiahao Zhang, Jie Shen, Guang Lin
We present a novel optimization algorithm, element-wise relaxed scalar auxiliary variable (E-RSAV), that satisfies an unconditional energy dissipation law and exhibits improved alignment between the modified and the original energy.
no code implementations • 9 Jun 2023 • Jiahao Zhang, Shiheng Zhang, Jie Shen, Guang Lin
For an objective operator G, the Branch net encodes different input functions u at the same number of sensors, and the Trunk net evaluates the output function at any location.
1 code implementation • 6 Apr 2023 • Haoyang Zheng, Yao Huang, Ziyang Huang, Wenrui Hao, Guang Lin
Due to the complex behavior arising from non-uniqueness, symmetry, and bifurcations in the solution space, solving inverse problems of nonlinear differential equations (DEs) with multiple solutions is a challenging task.
no code implementations • 3 Mar 2023 • Binghang Lu, Christian B. Moya, Guang Lin
This paper presents NSGA-PINN, a multi-objective optimization framework for effective training of Physics-Informed Neural Networks (PINNs).
no code implementations • 29 Jan 2023 • Christian Moya, Guang Lin, Tianqiao Zhao, Meng Yue
This paper designs an Operator Learning framework to approximate the dynamic response of synchronous generators.
no code implementations • 20 Nov 2022 • Wei Deng, Qian Zhang, Qi Feng, Faming Liang, Guang Lin
Notably, in big data scenarios, we obtain an appealing communication cost $O(P\log P)$ based on the optimal window size.
no code implementations • 21 Sep 2022 • Yixuan Sun, Christian Moya, Guang Lin, Meng Yue
This paper develops a Deep Graph Operator Network (DeepGraphONet) framework that learns to approximate the dynamics of a complex system (e.g., the power grid or traffic) with an underlying sub-graph structure.
1 code implementation • 1 Sep 2022 • Yan Xiang, Yu-Hang Tang, Zheng Gong, Hongyi Liu, Liang Wu, Guang Lin, Huai Sun
We introduce an explorative active learning (AL) algorithm based on Gaussian process regression and marginalized graph kernel (GPR-MGK) to explore chemical space with minimum cost.
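The explorative acquisition loop can be sketched with a plain RBF-kernel Gaussian process on a 1-D toy function; the marginalized graph kernel of GPR-MGK would replace the RBF kernel when the inputs are molecular graphs (that swap, and all hyperparameters here, are my assumptions).

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_cand, noise=1e-4):
    """GP posterior mean and variance at candidate points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_cand)
    sol = np.linalg.solve(K, Ks)
    mean = sol.T @ y_train
    var = np.diag(rbf(x_cand, x_cand)) - np.sum(Ks * sol, axis=0)
    return mean, var

# Explorative AL: always query the candidate with the largest posterior variance.
x_train = np.array([0.0, 1.0])
y_train = np.sin(x_train)
cands = np.linspace(0.0, 4.0, 81)
for _ in range(5):
    _, var = gp_posterior(x_train, y_train, cands)
    pick = cands[np.argmax(var)]
    x_train = np.append(x_train, pick)
    y_train = np.append(y_train, np.sin(pick))  # "experiment" = evaluate sin

_, var_final = gp_posterior(x_train, y_train, cands)
```

The first query lands at the candidate farthest from the data (x = 4 here), and the maximum posterior variance shrinks monotonically as the space is covered, which is exactly the minimum-cost exploration behavior the algorithm targets.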
no code implementations • 24 Jul 2022 • Yating Wang, Wing Tat Leung, Guang Lin
In this work, we propose an adaptive sparse learning algorithm that can be applied to learn the physical processes and obtain a sparse representation of the solution given a large snapshot space.
no code implementations • 30 May 2022 • Wenjie Li, Qifan Song, Jean Honorio, Guang Lin
This work establishes the first framework of federated $\mathcal{X}$-armed bandit, where different clients face heterogeneous local objective functions defined on the same domain and are required to collaboratively figure out the global optimum.
no code implementations • 10 May 2022 • Sheng Zhang, Guang Lin, Samy Tindel
We introduce a proper notion of 2-dimensional signature for images.
no code implementations • 11 Apr 2022 • Jiahao Zhang, Shiqi Zhang, Guang Lin
This paper proposes a new dimension reduction framework based on rotated multi-fidelity Gaussian process regression and a Bayesian active learning scheme when the available precise observations are insufficient.
no code implementations • 7 Apr 2022 • Jiahao Zhang, Shiqi Zhang, Guang Lin
We propose a new multi-resolution autoencoder DeepONet model, referred to as MultiAuto-DeepONet, to deal with this difficulty with the aid of a convolutional autoencoder.
no code implementations • 6 Apr 2022 • Jiahao Zhang, Shiqi Zhang, Guang Lin
We introduce three different models: continuous time, discrete time and hybrid models.
no code implementations • 27 Feb 2022 • Chi-Hua Wang, Wenjie Li, Guang Cheng, Guang Lin
This paper presents a novel federated linear contextual bandits model, where individual clients face different K-armed stochastic bandits with high-dimensional decision context and coupled through common global parameters.
1 code implementation • ICLR 2022 • Wei Deng, Siqi Liang, Botao Hao, Guang Lin, Faming Liang
We propose an interacting contour stochastic gradient Langevin dynamics (ICSGLD) sampler, an embarrassingly parallel multiple-chain contour stochastic gradient Langevin dynamics (CSGLD) sampler with efficient interactions.
no code implementations • 15 Feb 2022 • Christian Moya, Shiqi Zhang, Meng Yue, Guang Lin
This paper proposes a new data-driven method for the reliable prediction of power system post-fault trajectories.
no code implementations • 22 Jan 2022 • Yunling Zheng, Carson Hu, Guang Lin, Meng Yue, Bao Wang, Jack Xin
Due to the sparsified queries, GLassoformer is more computationally efficient than the standard transformers.
no code implementations • 9 Dec 2021 • Wei Deng, Qian Zhang, Yi-An Ma, Zhao Song, Guang Lin
We develop theoretical guarantees for FA-LD for strongly log-concave distributions with non-i.i.d. data and study how the injected noise and the stochastic-gradient noise, the heterogeneity of data, and the varying learning rates affect the convergence.
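A minimal sketch of the federated averaging Langevin idea on a conjugate toy problem, where the exact posterior is known and the sampler can be checked. The gradient scaling and the sqrt(P) injected-noise convention below are my assumptions, chosen so the averaged iterate follows plain unadjusted Langevin dynamics on the full posterior; the paper's general algorithm covers much broader settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# P clients each hold a shard of unit-variance Gaussian observations of one
# shared mean; with a flat prior the posterior is N(sample mean, 1/N).
P = 4
shards = [2.0 + rng.normal(size=200) for _ in range(P)]
N = sum(len(s) for s in shards)
post_mean = sum(s.sum() for s in shards) / N

def local_grad(theta, data):
    # Client's unbiased estimate of the full-data gradient of the
    # negative log posterior: P * (n_k * theta - sum(data)).
    return P * (len(data) * theta - data.sum())

eta = 1e-4        # Langevin step size
local_steps = 5   # local updates between FedAvg synchronizations
thetas = np.zeros(P)
trace = []
for rnd in range(3000):
    for k in range(P):
        for _ in range(local_steps):
            # Noise is scaled by sqrt(P) so the *averaged* iterate sees
            # variance 2*eta per step, as single-machine Langevin would.
            thetas[k] += -eta * local_grad(thetas[k], shards[k]) \
                         + np.sqrt(2 * eta * P) * rng.normal()
    thetas[:] = thetas.mean()   # FedAvg synchronization step
    trace.append(thetas[0])

samples = np.array(trace[1000:])
```

On this conjugate problem the chain's sample mean and standard deviation can be compared against the closed-form posterior N(post_mean, 1/N), which is how one would sanity-check an implementation before moving to non-conjugate models.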
no code implementations • 22 Nov 2021 • Liyao Gao, Guang Lin, Wei Zhu
Incorporating group symmetry directly into the learning process has proved to be an effective guideline for model design.
no code implementations • 3 Nov 2021 • Guang Lin, Christian Moya, Zecheng Zhang
To enable DeepONets training with noisy data, we propose using the Bayesian framework of replica-exchange Langevin diffusion.
no code implementations • 29 Sep 2021 • Wei Deng, Qian Zhang, Qi Feng, Faming Liang, Guang Lin
Parallel tempering (PT), also known as replica exchange, is the go-to workhorse for simulations of multi-modal distributions.
no code implementations • 18 Sep 2021 • Haoyang Zheng, Ziyang Huang, Guang Lin
To predict the order parameters, which locate the individual phases at future times, a neural network (NN) is applied to quickly infer the dynamics of the phases by encoding observations.
no code implementations • 9 Sep 2021 • Christian Moya, Guang Lin
Deep learning-based surrogate modeling is becoming a promising approach for learning and simulating dynamical systems.
no code implementations • 2 Feb 2021 • Aoxue Chen, Yifan Du, Liyao Mars Gao, Guang Lin
In this work, we propose an advanced Bayesian sparse learning algorithm for PDE discovery with variable coefficients, predominantly when the coefficients are spatially or temporally dependent.
no code implementations • 12 Jan 2021 • Ziyang Huang, Guang Lin, Arezoo M. Ardekani
Numerical tests indicate that the proposed model and scheme are effective and robust to study various challenging multiphase and multicomponent flows.
Computational Physics • Numerical Analysis • Fluid Dynamics
no code implementations • 3 Nov 2020 • Guang Lin, Jianhai Zhang, Yuxi Liu, Tianyang Gao, Wanzeng Kong, Xu Lei, Tao Qiu
Owing to its high temporal and spatial resolution, the technology of simultaneous electroencephalogram-functional magnetic resonance imaging (EEG-fMRI) acquisition and analysis has attracted much attention and has been widely used in various research fields of brain science.
2 code implementations • NeurIPS 2020 • Wei Deng, Guang Lin, Faming Liang
We propose an adaptively weighted stochastic gradient Langevin dynamics (SGLD) algorithm, termed contour stochastic gradient Langevin dynamics (CSGLD), for Bayesian learning in big-data statistics.
1 code implementation • 7 Oct 2020 • Yixuan Sun, Imad Hanhan, Michael D. Sangid, Guang Lin
Evaluating the mechanical response of fiber-reinforced composites can be extremely time-consuming and expensive.
no code implementations • 4 Oct 2020 • Lang Zhao, Tyler Tallman, Guang Lin
These results are an important first step in translating the combination of self-sensing materials and EIT to real-world SHM and NDE.
no code implementations • 3 Oct 2020 • Yating Wang, Wei Deng, Guang Lin
The bias introduced by stochastic approximation is controllable and can be analyzed theoretically.
1 code implementation • ICLR 2021 • Wei Deng, Qi Feng, Georgios Karagiannis, Guang Lin, Faming Liang
Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating the convergence in non-convex learning; however, an excessively large correction for avoiding biases from noisy energy estimators has limited the potential of the acceleration.
no code implementations • 3 Sep 2020 • Sheng Zhang, Xiu Yang, Samy Tindel, Guang Lin
We prove that under certain conditions, the observable and its derivatives of any order are governed by a single Gaussian random field, which is the aforementioned AGRF.
Statistics Theory • Probability
2 code implementations • ICML 2020 • Wei Deng, Qi Feng, Liyao Gao, Faming Liang, Guang Lin
Replica exchange Monte Carlo (reMC), also known as parallel tempering, is an important technique for accelerating the convergence of the conventional Markov Chain Monte Carlo (MCMC) algorithms.
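The mechanism is easy to see on a 1-D double-well target: a cold chain alone gets trapped in one mode, but swapping states with a hot replica lets it hop between modes. The potential, temperatures, and proposal scale below are toy choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Double-well potential: modes near x = -2 and x = +2,
    # separated by a barrier of height 16 at x = 0.
    return (x * x - 4.0) ** 2

temps = [1.0, 10.0]        # cold target chain plus one hot helper replica
x = np.array([2.0, 2.0])   # both replicas start in the right-hand well
cold_trace = []
for _ in range(20000):
    # Random-walk Metropolis step within each replica.
    for i, T in enumerate(temps):
        prop = x[i] + 0.5 * rng.normal()
        log_a = -(energy(prop) - energy(x[i])) / T
        if log_a >= 0 or rng.random() < np.exp(log_a):
            x[i] = prop
    # Replica-exchange swap with the standard tempering acceptance ratio.
    log_s = (1.0 / temps[0] - 1.0 / temps[1]) * (energy(x[0]) - energy(x[1]))
    if log_s >= 0 or rng.random() < np.exp(log_s):
        x[0], x[1] = x[1], x[0]
    cold_trace.append(x[0])

cold = np.array(cold_trace[2000:])
frac_left = float(np.mean(cold < 0))
```

Without the swap move the cold chain would essentially never cross the barrier (the direct crossing probability is of order exp(-16)); with it, the cold trace spends roughly half its time in each well.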
Ranked #75 on Image Classification on CIFAR-100 (using extra training data)
no code implementations • 3 Aug 2020 • Yixiang Deng, Guang Lin, Xiu Yang
We compare this method with the conventional multi-fidelity Cokriging method that does not use gradient information, and the results suggest that GE-Cokriging performs better in predicting both the QoI and its gradients.
no code implementations • 23 May 2020 • Moonseop Kim, Guang Lin
Case studies were performed to classify the images using CNNs and determine the PMB, LBS, and VES models' suitability.
no code implementations • 11 May 2020 • Moonseop Kim, Huayi Yin, Guang Lin
In material modeling, calculations using empirical potentials are fast compared to first-principles calculations, but the results are not as accurate as those of first-principles calculations.
no code implementations • 28 Apr 2020 • Liyao Gao, Yifan Du, Hongshan Li, Guang Lin
Rotation symmetry is a general property for most symmetric fluid systems.
2 code implementations • 17 Feb 2020 • Wei Deng, Junwei Pan, Tian Zhou, Deguang Kong, Aaron Flores, Guang Lin
To address the issue of significantly increased serving delay and high memory usage for ad serving in production, this paper presents \emph{DeepLight}: a framework to accelerate the CTR predictions in three aspects: 1) accelerate the model inference via explicitly searching informative feature interactions in the shallow component; 2) prune redundant layers and parameters at the intra-layer and inter-layer levels in the DNN component; 3) promote the sparsity of the embedding layer to preserve the most discriminant signals.
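The parameter-pruning ingredient can be illustrated generically. This is a plain magnitude-pruning sketch of my own, not DeepLight's structural scheme, which additionally prunes whole layers and sparsifies embeddings.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude entries of w until the given
    fraction of entries are zero (a generic unstructured pruning step)."""
    k = int(np.floor(sparsity * w.size))
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    out = w.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))        # stand-in for one DNN weight matrix
pruned = magnitude_prune(w, 0.9)     # keep only the largest 10% of weights
achieved = float(np.mean(pruned == 0.0))
```

In a serving system the pruned matrices are then stored in a sparse format, which is where the memory and latency savings come from.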
Ranked #8 on Click-Through Rate Prediction on Avazu
1 code implementation • NeurIPS 2019 • Wei Deng, Xiao Zhang, Faming Liang, Guang Lin
We propose a novel adaptive empirical Bayesian method for sparse deep learning, where the sparsity is ensured via a class of self-adaptive spike-and-slab priors.
1 code implementation • 22 Jul 2019 • Yating Wang, Guang Lin
In particular, for the flow problem, we design a network with convolutional and locally connected layers to perform model reductions.
Numerical Analysis
no code implementations • 17 Jul 2019 • Sheng Zhang, Guang Lin
We demonstrate how to use our algorithm step by step and compare our algorithm with threshold sparse Bayesian regression (TSBR) for the discovery of differential equations.
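For context, the simpler non-Bayesian baseline that threshold-based discovery methods build on is sequential-threshold least squares: fit, zero out small coefficients, and refit on the survivors. The sketch below is that baseline (my stand-in), not the Bayesian TSBR algorithm itself.

```python
import numpy as np

def stlsq(A, b, thresh=0.1, n_iter=10):
    """Sequential-threshold least squares: repeatedly zero out small
    coefficients and refit on the surviving columns of the library A."""
    xi = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < thresh
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(A[:, big], b, rcond=None)[0]
    return xi

# Toy library regression: only columns 0 and 2 truly enter the dynamics.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))           # stand-in for a library of candidate terms
true = np.array([2.0, 0.0, -3.0, 0.0, 0.0])
b = A @ true + 0.01 * rng.normal(size=200)
xi = stlsq(A, b)
```

A Bayesian variant replaces the hard threshold with sparsity-inducing priors, which also yields uncertainty estimates on the recovered coefficients.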
no code implementations • 19 Jun 2019 • Xin Cai, Guang Lin, Jinglai Li
We consider supervised dimension reduction problems, namely to identify a low-dimensional projection of the predictors $\mathbf{x}$ which can retain the statistical relationship between $\mathbf{x}$ and the response variable $y$.
no code implementations • ICLR 2019 • Wei Deng, Xiao Zhang, Faming Liang, Guang Lin
We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables.
no code implementations • 12 Jul 2018 • Sangpil Kim, Nick Winovich, Guang Lin, Karthik Ramani
We propose a fully-convolutional conditional generative model, the latent transformation neural network (LTNN), capable of view synthesis using a light-weight neural network suited for real-time applications.