no code implementations • ICML 2020 • Zheng Wang, Xinqi Chu, Shandian Zhe
Tensor factorization is a fundamental framework to analyze high-order interactions in data.
no code implementations • 14 Mar 2025 • Da Long, Shandian Zhe, Samuel Williams, Leonid Oliker, Zhe Bai
Simulating the long-term dynamics of multi-scale and multi-physics systems poses a significant challenge in understanding complex phenomena across science and engineering.
no code implementations • 4 Feb 2025 • Keyan Chen, Yile Li, Da Long, Zhitong Xu, Wei Xing, Jacob Hochhalter, Shandian Zhe
Neural operators have shown great potential in surrogate modeling.
no code implementations • 17 Oct 2024 • Da Long, Zhitong Xu, Guang Yang, Akil Narayan, Shandian Zhe
ACMFD can perform a wide range of tasks within a single framework, including forward prediction, various inverse problems, and simulating data for entire systems or subsets of quantities conditioned on others.
no code implementations • 15 Oct 2024 • Zhitong Xu, Da Long, Yiming Xu, Guang Yang, Shandian Zhe, Houman Owhadi
In numerical experiments, we demonstrate the advantages of our method in solving several benchmark PDEs.
no code implementations • 4 Oct 2024 • Madison Cooley, Varun Shankar, Robert M. Kirby, Shandian Zhe
We find that strong BC PINNs can better learn the amplitudes of high-frequency components of the target solutions.
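As a concrete illustration of what "strong BC" means here, the sketch below hard-wires homogeneous Dirichlet conditions into a network by construction; the ansatz u(x) = x(1-x)·NN(x), the layer sizes, and the class name are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Minimal "strong BC" sketch on [0, 1]: the network output is multiplied by
# a distance-like factor x * (1 - x) that vanishes on the boundary, so
# u(0) = u(1) = 0 holds exactly regardless of the weights.
class StrongBCPINN(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return x * (1.0 - x) * self.net(x)

x = torch.linspace(0.0, 1.0, 5).reshape(-1, 1)
model = StrongBCPINN()
print(model(x)[[0, -1]])  # exactly zero at x = 0 and x = 1
```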
no code implementations • 4 Oct 2024 • Madison Cooley, Robert M. Kirby, Shandian Zhe, Varun Shankar
We present a new class of PINNs called HyResPINNs, which augment traditional PINNs with adaptive hybrid residual blocks that combine the outputs of a standard neural network and a radial basis function (RBF) network.
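A minimal sketch of a hybrid residual block in this spirit follows; the branch definitions, the sigmoid gate alpha, and all sizes are assumptions for illustration rather than the HyResPINNs implementation.

```python
import torch
import torch.nn as nn

class RBFLayer(nn.Module):
    """Gaussian RBF features with learnable centers and widths."""
    def __init__(self, in_dim, num_centers):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, in_dim))
        self.log_gamma = nn.Parameter(torch.zeros(num_centers))

    def forward(self, x):
        d2 = torch.cdist(x, self.centers) ** 2
        return torch.exp(-torch.exp(self.log_gamma) * d2)

class HybridResidualBlock(nn.Module):
    """Blend a small MLP branch and an RBF branch with a learnable gate."""
    def __init__(self, dim, num_centers=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.rbf = nn.Sequential(RBFLayer(dim, num_centers), nn.Linear(num_centers, dim))
        self.alpha = nn.Parameter(torch.tensor(0.0))  # sigmoid(0) = 0.5

    def forward(self, x):
        a = torch.sigmoid(self.alpha)
        return x + a * self.mlp(x) + (1 - a) * self.rbf(x)

x = torch.randn(8, 16)
print(HybridResidualBlock(16)(x).shape)  # torch.Size([8, 16])
```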
no code implementations • 30 Jun 2024 • Matthew Lowery, John Turnage, Zachary Morrow, John D. Jakeman, Akil Narayan, Shandian Zhe, Varun Shankar
This paper introduces the Kernel Neural Operator (KNO), a novel operator learning technique that uses deep kernel-based integral operators in conjunction with quadrature for function-space approximation of operators (maps from functions to functions).
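The sketch below illustrates the core ingredient, a kernel integral operator evaluated with Gauss-Legendre quadrature; the fixed Gaussian kernel and single-layer setup are simplifying assumptions, not the KNO itself, where the kernel is deep and learned.

```python
import numpy as np

# Kernel integral operator evaluated with quadrature:
#   (K u)(x_i) ≈ sum_j w_j * k(x_i, y_j) * u(y_j).
def kernel_integral_op(x, y, w, u, lengthscale=0.2):
    K = np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * lengthscale ** 2))
    return K @ (w * u)

y, w = np.polynomial.legendre.leggauss(32)  # quadrature nodes/weights on [-1, 1]
x = np.linspace(-1, 1, 100)                 # output query points
u = np.sin(np.pi * y)                       # input function sampled at the nodes
v = kernel_integral_op(x, y, w, u)
print(v.shape)  # (100,)
```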
no code implementations • 10 Jun 2024 • Zachary Bastiani, Robert M. Kirby, Jacob Hochhalter, Shandian Zhe
This paper proposes a novel deep symbolic regression approach to enhance the robustness and interpretability of data-driven mathematical expression discovery.
no code implementations • 4 Jun 2024 • Madison Cooley, Shandian Zhe, Robert M. Kirby, Varun Shankar
We present polynomial-augmented neural networks (PANNs), a novel machine learning architecture that combines deep neural networks (DNNs) with a polynomial approximant.
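A minimal sketch of the idea, combining an MLP with a learnable polynomial in one dimension, follows; the degree, widths, and monomial basis are illustrative assumptions rather than the PANN architecture.

```python
import torch
import torch.nn as nn

class PANN(nn.Module):
    """Sum of a small MLP and a learnable 1-D polynomial of fixed degree."""
    def __init__(self, degree=5, width=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))
        self.coeffs = nn.Parameter(torch.zeros(degree + 1))

    def forward(self, x):
        powers = torch.cat([x ** k for k in range(len(self.coeffs))], dim=1)
        return self.net(x) + powers @ self.coeffs.unsqueeze(1)

x = torch.linspace(-1, 1, 10).reshape(-1, 1)
print(PANN()(x).shape)  # torch.Size([10, 1])
```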
no code implementations • 23 May 2024 • Yutao Feng, Yintong Shang, Xiang Feng, Lei Lan, Shandian Zhe, Tianjia Shao, Hongzhi Wu, Kun Zhou, Hao Su, Chenfanfu Jiang, Yin Yang
We present ElastoGen, a knowledge-driven AI model that generates physically accurate 4D elastodynamics.
no code implementations • 18 Feb 2024 • Da Long, Shandian Zhe
In this paper, we propose an invertible Fourier Neural Operator (iFNO) that tackles both the forward and inverse problems.
1 code implementation • 5 Feb 2024 • Zhitong Xu, Haitao Wang, Jeff M Phillips, Shandian Zhe
Second, our theoretical analysis reveals that the SE kernel's failure primarily stems from improper initialization of the length-scale parameters; such initializations are common in practice but can cause vanishing gradients during training.
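The toy experiment below reproduces the flavor of this effect: with a very small length-scale, the SE kernel degenerates toward the identity matrix and the gradient with respect to the (log) length-scale nearly vanishes. The loss stand-in and dimensions are assumptions for illustration.

```python
import torch

# With a too-small length-scale the SE kernel is nearly an identity matrix,
# so the loss surface is flat and the gradient w.r.t. log length-scale ~ 0.
def grad_wrt_log_lengthscale(log_ell, X):
    log_ell = torch.tensor(log_ell, requires_grad=True)
    d2 = torch.cdist(X, X) ** 2
    K = torch.exp(-0.5 * d2 / torch.exp(log_ell) ** 2)
    loss = K.sum()  # stand-in for a marginal-likelihood term
    loss.backward()
    return log_ell.grad.item()

X = torch.randn(50, 10)  # moderately high-dimensional inputs
print(grad_wrt_log_lengthscale(-3.0, X))  # tiny gradient: training stalls
print(grad_wrt_log_lengthscale(2.0, X))   # healthy gradient
```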
no code implementations • 9 Nov 2023 • Zheng Wang, Shibo Li, Shikai Fang, Shandian Zhe
We propose a conditional score model to control the solution generation by the input parameters and the fidelity.
1 code implementation • 8 Nov 2023 • Shikai Fang, Xin Yu, Zheng Wang, Shibo Li, Mike Kirby, Shandian Zhe
To generalize Tucker decomposition to such scenarios, we propose Functional Bayesian Tucker Decomposition (FunBaT).
1 code implementation • 8 Nov 2023 • Shikai Fang, Madison Cooley, Da Long, Shibo Li, Robert Kirby, Shandian Zhe
Machine learning based solvers have garnered much attention in physical simulation and scientific computing, with a prominent example, physics-informed neural networks (PINNs).
1 code implementation • 9 Oct 2023 • Da Long, Wei W. Xing, Aditi S. Krishnapriyan, Robert M. Kirby, Shandian Zhe, Michael W. Mahoney
To overcome the computational challenge of kernel regression, we place the function values on a mesh to induce a Kronecker product construction, and we use tensor algebra to enable efficient computation and optimization.
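The sketch below shows the standard Kronecker identity that makes such mesh-based computations efficient: for K = K1 ⊗ K2, the matrix-vector product reduces to two small matrix products without ever forming the full matrix. The kernels and mesh sizes are illustrative.

```python
import numpy as np

# Kronecker trick on a 2-D mesh: (K1 ⊗ K2) vec(F) = vec(K2 @ F @ K1.T),
# so the full n1*n2 x n1*n2 matrix is never formed.
n1, n2 = 50, 60
x1, x2 = np.linspace(0, 1, n1), np.linspace(0, 1, n2)
k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / 0.02)
K1, K2 = k(x1, x1), k(x2, x2)

F = np.random.randn(n2, n1)                  # function values on the mesh
fast = (K2 @ F @ K1.T).ravel(order="F")      # O(n1*n2*(n1+n2)) work
slow = np.kron(K1, K2) @ F.ravel(order="F")  # O((n1*n2)^2) reference
print(np.allclose(fast, slow))  # True
```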
1 code implementation • 29 Sep 2023 • Shibo Li, Xin Yu, Wei Xing, Mike Kirby, Akil Narayan, Shandian Zhe
To overcome this problem, we propose Multi-Resolution Active learning of FNO (MRA-FNO), which can dynamically select the input functions and resolutions to lower the data cost as much as possible while optimizing the learning efficiency.
1 code implementation • 28 Aug 2023 • Shikai Fang, Qingsong Wen, Yingtao Luo, Shandian Zhe, Liang Sun
More importantly, almost all methods assume the observations are sampled at regular time stamps, and fail to handle the complex, irregularly sampled time series arising in many applications.
1 code implementation • 12 May 2023 • Yu Chen, Wei Deng, Shikai Fang, Fengpei Li, Nicole Tianjiao Yang, Yikai Zhang, Kashif Rasul, Shandian Zhe, Anderson Schneider, Yuriy Nevmyvaka
We show that optimizing the transport cost improves performance, and the proposed algorithm achieves state-of-the-art results on healthcare and environmental data while exhibiting the advantage of exploring both temporal and feature patterns in probabilistic time series imputation.
1 code implementation • 28 Feb 2023 • Michael Penwarden, Ameya D. Jagtap, Shandian Zhe, George Em Karniadakis, Robert M. Kirby
This problem also arises with, and is in some sense more difficult for, domain decomposition strategies such as temporal decomposition using XPINNs.
no code implementations • 7 Feb 2023 • Hongsup Oh, Roman Amici, Geoffrey Bomarito, Shandian Zhe, Robert Kirby, Jacob Hochhalter
In this paper, we present a machine learning method for the discovery of analytic solutions to differential equations.
no code implementations • 19 Jan 2023 • Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra
One surprising trait of neural networks is the extent to which their connections can be pruned with little to no effect on accuracy.
no code implementations • 23 Oct 2022 • Shibo Li, Jeff M. Phillips, Xin Yu, Robert M. Kirby, Shandian Zhe
However, this method queries only one fidelity-input pair at a time, and hence risks bringing in strongly correlated examples that reduce learning efficiency.
no code implementations • 23 Oct 2022 • Shibo Li, Michael Penwarden, Yiming Xu, Conor Tillinghast, Akil Narayan, Robert M. Kirby, Shandian Zhe
However, the performance of multi-domain PINNs is sensitive to the choice of the interface conditions.
no code implementations • 14 Oct 2022 • Da Long, Nicole Mrvaljevic, Shandian Zhe, Bamdad Hosseini
This article presents a three-step framework for learning and solving partial differential equations (PDEs) using kernel methods.
no code implementations • 8 Jul 2022 • Zheng Wang, Yiming Xu, Conor Tillinghast, Shibo Li, Akil Narayan, Shandian Zhe
High-order interaction events are common in real-world applications.
1 code implementation • 6 Jul 2022 • Zheng Wang, Shandian Zhe
In practice, tensor data is often accompanied by temporal information, namely the time points when the entry values were generated.
no code implementations • 1 Jul 2022 • Shibo Li, Zheng Wang, Robert M. Kirby, Shandian Zhe
Our model can interpolate and/or extrapolate the predictions to novel fidelities, which can be even higher than the fidelities of training data.
no code implementations • 7 Jun 2022 • Aidan Good, Jiaqi Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, Jerzy Wieczorek, Thiago Serra
In this work, we study such relative distortions in recall by hypothesizing an intensification effect that is inherent to the model.
1 code implementation • 9 Mar 2022 • Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
We propose a tractable heuristic for solving the combinatorial extension of OBS, in which we select weights for simultaneous removal, as well as a systematic update of the remaining weights.
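For context, the sketch below implements the classic single-weight OBS step that the combinatorial extension generalizes (selecting a set of weights for joint removal rather than one at a time); the inverse-Hessian stand-in is an assumption for illustration.

```python
import numpy as np

# Single-weight OBS step: removing weight q and optimally updating the rest
# gives delta_w = -(w[q] / Hinv[q, q]) * Hinv[:, q], with saliency
# w[q]^2 / (2 * Hinv[q, q]).
def obs_remove_one(w, Hinv):
    saliency = w ** 2 / (2 * np.diag(Hinv))
    q = int(np.argmin(saliency))                  # cheapest weight to remove
    w_new = w - (w[q] / Hinv[q, q]) * Hinv[:, q]  # compensating update
    w_new[q] = 0.0
    return q, w_new

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Hinv = np.linalg.inv(A @ A.T + np.eye(6))  # SPD inverse-Hessian stand-in
q, w_new = obs_remove_one(rng.standard_normal(6), Hinv)
print(q, w_new)
```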
1 code implementation • 24 Feb 2022 • Da Long, Zheng Wang, Aditi Krishnapriyan, Robert Kirby, Shandian Zhe, Michael Mahoney
Physical modeling is critical for many modern science and engineering applications.
no code implementations • NeurIPS 2021 • Zhimeng Pan, Zheng Wang, Jeff M. Phillips, Shandian Zhe
Specifically, we use an embedding to represent each event type and model the event influence as an unknown function of the embeddings and time span.
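A minimal sketch of this parameterization follows; the MLP form of the influence function and all sizes are assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

# Each event type gets an embedding; the influence of a past event of type j
# on a new event of type i after a time span dt is an unknown function of
# (emb_i, emb_j, dt), here parameterized by a small MLP.
class EmbeddingInfluence(nn.Module):
    def __init__(self, num_types, dim=16):
        super().__init__()
        self.emb = nn.Embedding(num_types, dim)
        self.f = nn.Sequential(nn.Linear(2 * dim + 1, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, i, j, dt):
        z = torch.cat([self.emb(i), self.emb(j), dt.unsqueeze(-1)], dim=-1)
        return self.f(z).squeeze(-1)

model = EmbeddingInfluence(num_types=10)
i = torch.tensor([0, 3]); j = torch.tensor([5, 5]); dt = torch.tensor([0.7, 2.1])
print(model(i, j, dt))  # influence scores for two (target, source, span) triples
```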
no code implementations • 26 Oct 2021 • Michael Penwarden, Shandian Zhe, Akil Narayan, Robert M. Kirby
Physics-informed neural networks (PINNs) as a means of discretizing partial differential equations (PDEs) are garnering much attention in the Computational Science and Engineering (CS&E) world.
no code implementations • 19 Oct 2021 • Conor Tillinghast, Zheng Wang, Shandian Zhe
Compared with existing works, our model not only leverages the structural information underlying the observed entry indices, but also provides extra interpretability and flexibility: it can simultaneously estimate a set of location factors that capture the intrinsic properties of the tensor nodes and another set of sociability factors that reflect their extroverted activity in interacting with others; users are free to choose a trade-off between the two types of factors.
no code implementations • 16 Oct 2021 • Shibo Li, Zheng Wang, Akil Narayan, Robert Kirby, Shandian Zhe
Given the initialization, we only need to run the standard ODE solver twice: once forward in time, evolving a long trajectory of gradient flow for the sampled task, and once backward, solving the adjoint ODE.
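The two-solve pattern can be illustrated on a toy linear ODE, where the adjoint gradient can be checked against a closed-form matrix exponential; the loss and the system here are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Toy linear ODE dx/dt = A x with loss L = 0.5 * ||x(T)||^2: one forward
# solve for the trajectory, one backward solve of the adjoint
# da/dt = -A^T a from a(T) = x(T); then dL/dx(0) = a(0).
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
T, x0 = 5.0, np.array([1.0, 0.0])

fwd = solve_ivp(lambda t, x: A @ x, (0.0, T), x0, rtol=1e-9, atol=1e-9)
xT = fwd.y[:, -1]

bwd = solve_ivp(lambda t, a: -A.T @ a, (T, 0.0), xT, rtol=1e-9, atol=1e-9)
grad_x0 = bwd.y[:, -1]

print(np.allclose(grad_x0, expm(A * T).T @ xT, atol=1e-6))  # True
```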
1 code implementation • NeurIPS 2021 • Aditi S. Krishnapriyan, Amir Gholami, Shandian Zhe, Robert M. Kirby, Michael W. Mahoney
We provide evidence that the soft regularization in PINNs, which involves PDE-based differential operators, can introduce a number of subtle problems, including making the problem more ill-conditioned.
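For reference, the sketch below assembles a standard soft-constrained PINN loss for the 1-D heat equation u_t = nu * u_xx, where the PDE residual enters as a penalty term computed with autograd; the equation, weight lam, and placeholder boundary data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Soft-constrained PINN loss: data/boundary misfit plus a weighted PDE
# residual obtained by differentiating the network with autograd.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
nu, lam = 0.01, 1.0

xt = torch.rand(256, 2, requires_grad=True)  # collocation points (x, t)
u = net(xt)
du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
u_x, u_t = du[:, :1], du[:, 1:]
u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
residual = u_t - nu * u_xx

xb = torch.zeros(32, 2); ub = torch.zeros(32, 1)  # placeholder boundary data
loss = ((net(xb) - ub) ** 2).mean() + lam * (residual ** 2).mean()
loss.backward()
print(loss.item())
```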
no code implementations • 25 Jun 2021 • Michael Penwarden, Shandian Zhe, Akil Narayan, Robert M. Kirby
Candidates for this approach are simulation methodologies for which there are fidelity differences connected with significant computational cost differences.
no code implementations • NeurIPS 2021 • Shibo Li, Robert M. Kirby, Shandian Zhe
Bayesian optimization (BO) is a powerful approach for optimizing black-box, expensive-to-evaluate functions.
no code implementations • 2 Dec 2020 • Shibo Li, Robert M. Kirby, Shandian Zhe
The training examples can be collected with different fidelities to allow a cost/accuracy trade-off.
no code implementations • 10 Oct 2020 • Jinmian Ye, Guangxi Li, Di Chen, Haiqin Yang, Shandian Zhe, Zenglin Xu
Deep neural networks (DNNs) have achieved outstanding performance in a wide range of applications, e.g., image classification, natural language processing, etc.
1 code implementation • 14 Jul 2020 • Shikai Fang, Zheng Wang, Zhimeng Pan, Ji Liu, Shandian Zhe
Our algorithm provides responsive incremental updates to the posterior of the latent factors and NN weights upon receiving new tensor entries, while selecting and inhibiting redundant or useless weights.
no code implementations • NeurIPS 2020 • Shibo Li, Wei Xing, Mike Kirby, Shandian Zhe
In many applications, the objective function can be evaluated at multiple fidelities to enable a trade-off between the cost and accuracy.
no code implementations • 8 Jun 2020 • Zheng Wang, Wei Xing, Robert Kirby, Shandian Zhe
Deep kernel learning is a promising combination of deep neural networks and nonparametric function learning.
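A minimal deep kernel sketch follows: inputs are warped by a neural network before a standard SE kernel is applied in the learned feature space. The feature map, sizes, and jitter are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

# Deep kernel: k(x, x') = exp(-||phi(x) - phi(x')||^2 / (2 * ell^2)),
# where phi is a neural feature map learned jointly with ell.
phi = nn.Sequential(nn.Linear(5, 32), nn.Tanh(), nn.Linear(32, 8))
log_ell = nn.Parameter(torch.zeros(()))

def deep_kernel(X1, X2):
    Z1, Z2 = phi(X1), phi(X2)
    d2 = torch.cdist(Z1, Z2) ** 2
    return torch.exp(-0.5 * d2 / torch.exp(log_ell) ** 2)

X = torch.randn(20, 5)
K = deep_kernel(X, X) + 1e-6 * torch.eye(20)  # jitter for numerical stability
L = torch.linalg.cholesky(K)                  # usable in a GP marginal likelihood
print(K.shape, L.shape)
```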
1 code implementation • 8 Jun 2020 • Zheng Wang, Wei Xing, Robert Kirby, Shandian Zhe
To address these issues, we propose Multi-Fidelity High-Order Gaussian Process (MFHoGP) that can capture complex correlations both between the outputs and between the fidelities to enhance solution estimation, and scale to large numbers of outputs.
2 code implementations • 25 Mar 2020 • Shibo Li, Wei Xing, Mike Kirby, Shandian Zhe
Gaussian process regression networks (GPRN) are powerful Bayesian models for multi-output regression, but their inference is intractable.
no code implementations • 6 Feb 2020 • Yun Yuan, Xianfeng Terry Yang, Zhao Zhang, Shandian Zhe
To address this issue, this study presents a new modeling framework, named physics regularized machine learning (PRML), to encode classical traffic flow models (referred to as physical models) into the ML architecture and to regularize the ML training process.
1 code implementation • 27 Nov 2019 • Yimin Zheng, Shandian Zhe
Tensor decomposition is an essential tool to analyze high-order interactions in multiway data.
no code implementations • 27 Oct 2019 • Zheng Wang, Shandian Zhe
Expectation propagation (EP) is a powerful approximate inference algorithm.
1 code implementation • 31 Dec 2018 • Yishuai Du, Yimin Zheng, Kuang-Chih Lee, Shandian Zhe
Tensor decomposition is a fundamental tool for multiway data analysis.
no code implementations • NeurIPS 2018 • Shandian Zhe, Yishuai Du
Tensor decompositions are fundamental tools for multiway data analysis.
no code implementations • CVPR 2018 • Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, Zenglin Xu
On three challenging tasks, including Action Recognition in Videos, Image Captioning and Image Generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of both prediction accuracy and convergence rate.
no code implementations • ICML 2017 • Hao Peng, Shandian Zhe, Yuan Qi
Gaussian processes (GPs) are powerful non-parametric function estimators.
no code implementations • NeurIPS 2016 • Shandian Zhe, Kai Zhang, Pengyuan Wang, Kuang-Chih Lee, Zenglin Xu, Yuan Qi, Zoubin Ghahramani
Tensor factorization is a powerful tool to analyse multi-way data.
no code implementations • 12 Nov 2013 • Shandian Zhe, Yuan Qi, Youngja Park, Ian Molloy, Suresh Chari
To overcome this limitation, we present Distributed Infinite Tucker (DinTucker), a large-scale nonlinear tensor decomposition algorithm on MapReduce.
no code implementations • 26 Apr 2013 • Shandian Zhe, Zenglin Xu, Yuan Qi
To unify these two tasks, we present a new sparse Bayesian approach for joint association study and disease diagnosis.