no code implementations • 10 Jun 2024 • Zachary Bastiani, Robert M. Kirby, Jacob Hochhalter, Shandian Zhe

This paper proposes a novel deep symbolic regression approach to enhance the robustness and interpretability of data-driven mathematical expression discovery.

no code implementations • 4 Jun 2024 • Madison Cooley, Shandian Zhe, Robert M. Kirby, Varun Shankar

We present polynomial-augmented neural networks (PANNs), a novel machine learning architecture that combines deep neural networks (DNNs) with a polynomial approximant.
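The idea of augmenting a network with a polynomial approximant can be sketched in a few lines. This is an illustrative stand-in, not the paper's PANN architecture: a frozen random tanh layer plays the role of the network features, a Vandermonde basis supplies the polynomial term, and a linear least-squares fit combines them. Since adding columns to a least-squares problem can never increase the residual, the augmented model fits at least as well as the polynomial alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_features(x, width=16):
    """One random tanh hidden layer (untrained; illustrative only)."""
    W = rng.standard_normal((1, width))
    b = rng.standard_normal(width)
    return np.tanh(x[:, None] * W + b)

def poly_features(x, degree=3):
    """Polynomial basis 1, x, x^2, ..., x^degree."""
    return np.vander(x, degree + 1, increasing=True)

x = np.linspace(-1, 1, 50)
y = np.sin(np.pi * x) + 0.5 * x**2        # hypothetical target function

Phi_poly = poly_features(x)
Phi_aug = np.hstack([mlp_features(x), Phi_poly])  # network + polynomial

def lstsq_resid(Phi):
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.linalg.norm(Phi @ coef - y)

r_poly, r_aug = lstsq_resid(Phi_poly), lstsq_resid(Phi_aug)
print(r_aug <= r_poly + 1e-9)  # extra features can only reduce LS error
```

In the actual PANN both components are trained jointly by gradient descent; the least-squares fit here is only to keep the sketch dependency-free.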

no code implementations • 16 Feb 2024 • Michael Penwarden, Houman Owhadi, Robert M. Kirby

This topic encompasses a broad array of methods and models aimed at solving either a single PDE problem or a collection of PDE problems, the latter setting being called multitask learning.

1 code implementation • 9 Oct 2023 • Da Long, Wei W. Xing, Aditi S. Krishnapriyan, Robert M. Kirby, Shandian Zhe, Michael W. Mahoney

To overcome the computational challenge of kernel regression, we place the function values on a mesh and induce a Kronecker product construction, and we use tensor algebra to enable efficient computation and optimization.
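The Kronecker construction can be demonstrated concretely. For function values on a tensor-product mesh, the kernel matrix factors as K = K1 ⊗ K2, and the identity (K1 ⊗ K2) vec(V) = vec(K1 V K2ᵀ) (for row-major vec) lets a matrix-vector product be computed from the small factors without ever forming the full matrix. Grid sizes and the RBF kernel below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

x1 = np.linspace(0, 1, 20)        # mesh coordinates along each axis
x2 = np.linspace(0, 1, 30)
K1, K2 = rbf(x1, x1), rbf(x2, x2)

v = np.random.default_rng(1).standard_normal(20 * 30)

# Naive: build the full 600 x 600 Kronecker product explicitly.
full = np.kron(K1, K2) @ v

# Tensor-algebra version: reshape v to a 20 x 30 array and apply each
# small factor along its own axis -- no 600 x 600 matrix is formed.
V = v.reshape(20, 30)
fast = (K1 @ V @ K2.T).ravel()

print(np.allclose(full, fast))    # the two routes agree
```

The fast route costs O(n1²n2 + n1n2²) instead of O(n1²n2²), which is what makes kernel regression on large meshes tractable.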

1 code implementation • 6 Apr 2023 • Haocheng Dai, Michael Penwarden, Robert M. Kirby, Sarang Joshi

Neural operator learning as a means of mapping between complex function spaces has garnered significant attention in the field of computational science and engineering (CS&E).

1 code implementation • 28 Feb 2023 • Michael Penwarden, Ameya D. Jagtap, Shandian Zhe, George Em Karniadakis, Robert M. Kirby

This problem also arises with, and is in some sense more difficult for, domain decomposition strategies such as temporal decomposition using XPINNs.

no code implementations • 2 Feb 2023 • Khemraj Shukla, Vivek Oommen, Ahmad Peyvan, Michael Penwarden, Luis Bravo, Anindya Ghoshal, Robert M. Kirby, George Em Karniadakis

Deep neural operators, such as DeepONets, have changed the paradigm in high-dimensional nonlinear regression from function regression to (differential) operator regression, paving the way for significant changes in computational engineering applications.
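The structural idea behind a DeepONet-style operator network can be wired up in a few lines: a branch net encodes the input function (sampled at m sensor points), a trunk net encodes the query location, and their inner product gives the operator output. The weights below are random and untrained; this sketch only shows the architecture, not a trained operator.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 32, 8                       # sensor count, latent width (illustrative)

Wb = rng.standard_normal((m, p))   # branch weights (one linear layer)
Wt = rng.standard_normal((1, p))   # trunk weights
bt = rng.standard_normal(p)

def deeponet(u_sensors, y):
    """G(u)(y) approximated as an inner product of two encodings."""
    branch = np.tanh(u_sensors @ Wb)          # encode the input function
    trunk = np.tanh(y[:, None] * Wt + bt)     # encode the query locations
    return trunk @ branch                     # one scalar per query point

u = np.sin(np.linspace(0, np.pi, m))          # an input function on the sensors
y = np.linspace(0, 1, 5)                      # query points
print(deeponet(u, y).shape)                   # one output per query point
```

Because the branch and trunk are decoupled, a trained network of this form maps whole functions to whole functions, which is the shift from function regression to operator regression described above.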

no code implementations • 23 Oct 2022 • Shibo Li, Jeff M. Phillips, Xin Yu, Robert M. Kirby, Shandian Zhe

However, this method queries only one fidelity-input pair at a time, and hence risks introducing strongly correlated examples that reduce learning efficiency.

no code implementations • 23 Oct 2022 • Shibo Li, Michael Penwarden, Yiming Xu, Conor Tillinghast, Akil Narayan, Robert M. Kirby, Shandian Zhe

However, the performance of multi-domain PINNs is sensitive to the choice of the interface conditions.

no code implementations • 1 Aug 2022 • Tan Nguyen, Richard G. Baraniuk, Robert M. Kirby, Stanley J. Osher, Bao Wang

Transformers have achieved remarkable success in sequence modeling and beyond but suffer from quadratic computational and memory complexities with respect to the length of the input sequence.
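The quadratic cost comes from the n × n attention score matrix: doubling the sequence length quadruples the scores that must be stored and normalized. A bare-bones scaled dot-product attention (single head, no masking) makes this visible:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention; the (n, n) score matrix is the
    source of the quadratic memory and compute cost."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, scores

rng = np.random.default_rng(0)
for n in (64, 128):
    Q = rng.standard_normal((n, 16))
    out, scores = attention(Q, Q, Q)
    print(n, scores.size)   # 64 -> 4096, 128 -> 16384: O(n^2) growth
```

Linear-attention and kernel-based variants, such as the one this paper pursues, avoid materializing that score matrix.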

1 code implementation • 8 Jul 2022 • Shashank Subramanian, Robert M. Kirby, Michael W. Mahoney, Amir Gholami

We find that training vanilla PINNs for these problems can result in up to 70% prediction error in the solution, especially in the regime of few collocation points.

no code implementations • 1 Jul 2022 • Shibo Li, Zheng Wang, Robert M. Kirby, Shandian Zhe

Our model can interpolate and/or extrapolate the predictions to novel fidelities, including fidelities higher than those of the training data.

1 code implementation • 8 Apr 2022 • Jarom D. Hogue, Robert M. Kirby, Akil Narayan

Deep learning using neural networks is an effective technique for generating models of complex data.

no code implementations • 4 Feb 2022 • Martha D'Elia, Hang Deng, Cedric Fraces, Krishna Garikipati, Lori Graham-Brady, Amanda Howard, George Karniadakis, Vahid Keshavarzzadeh, Robert M. Kirby, Nathan Kutz, Chunhui Li, Xing Liu, Hannah Lu, Pania Newell, Daniel O'Malley, Masa Prodanovic, Gowri Srinivasan, Alexandre Tartakovsky, Daniel M. Tartakovsky, Hamdi Tchelepi, Bozo Vazic, Hari Viswanathan, Hongkyu Yoon, Piotr Zarzycki

The "Workshop on Machine learning in heterogeneous porous materials" brought together international scientific communities of applied mathematics, porous media, and material sciences with experts in the areas of heterogeneous materials, machine learning (ML) and applied mathematics to identify how ML can advance materials research.

no code implementations • 26 Oct 2021 • Michael Penwarden, Shandian Zhe, Akil Narayan, Robert M. Kirby

Physics-informed neural networks (PINNs) as a means of discretizing partial differential equations (PDEs) are garnering much attention in the Computational Science and Engineering (CS&E) world.

BIG-bench Machine Learning · Physics-informed machine learning

2 code implementations • NeurIPS 2021 • Aditi S. Krishnapriyan, Amir Gholami, Shandian Zhe, Robert M. Kirby, Michael W. Mahoney

We provide evidence that the soft regularization in PINNs, which involves PDE-based differential operators, can introduce a number of subtle problems, including making the problem more ill-conditioned.
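The soft-regularization formulation can be written out for a 1D Poisson problem u''(x) = f(x): the loss is a boundary/data term plus a penalty on the PDE residual. In this dependency-free sketch, finite differences stand in for the automatic differentiation a real PINN uses, and a vector of nodal values stands in for the network, which is enough to show how the differential operator enters the loss:

```python
import numpy as np

n = 101
x = np.linspace(0, 1, n)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)      # chosen so u*(x) = sin(pi x)

def pinn_loss(u, lam=1.0):
    """Soft-constraint loss: boundary misfit + weighted PDE residual."""
    boundary = u[0]**2 + u[-1]**2      # enforce u(0) = u(1) = 0 softly
    # Central-difference approximation of u'' minus the forcing term.
    residual = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
    return boundary + lam * np.mean(residual**2)

u_true = np.sin(np.pi * x)
print(pinn_loss(u_true) < 1e-2)        # exact solution has near-zero loss
```

The differential operator inside the residual is what the paper identifies as a source of ill-conditioning: its high-frequency amplification (the 1/h² factor here) can make the loss landscape much harder for gradient-based training.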

no code implementations • 25 Jun 2021 • Michael Penwarden, Shandian Zhe, Akil Narayan, Robert M. Kirby

Candidates for this approach are simulation methodologies whose fidelity differences are tied to significant differences in computational cost.

no code implementations • NeurIPS 2021 • Shibo Li, Robert M. Kirby, Shandian Zhe

Bayesian optimization (BO) is a powerful approach for optimizing black-box, expensive-to-evaluate functions.

no code implementations • 29 Mar 2021 • Yiming Xu, Vahid Keshavarzzadeh, Robert M. Kirby, Akil Narayan

Multifidelity approximation is an important technique in scientific computation and simulation.

no code implementations • 2 Dec 2020 • Shibo Li, Robert M. Kirby, Shandian Zhe

The training examples can be collected with different fidelities to allow a cost/accuracy trade-off.

no code implementations • 8 Jun 2019 • David Moxey, Chris D. Cantwell, Yan Bao, Andrea Cassinelli, Giacomo Castiglioni, Sehun Chun, Emilia Juda, Ehsan Kazemi, Kilian Lackhove, Julian Marcon, Gianmarco Mengaldo, Douglas Serson, Michael Turner, Hui Xu, Joaquim Peiró, Robert M. Kirby, Spencer J. Sherwin

Nektar++ is an open-source framework that provides a flexible, high-performance and scalable platform for the development of solvers for partial differential equations using the high-order spectral/$hp$ element method.

Mathematical Software · Numerical Analysis · Fluid Dynamics
