no code implementations • 26 Aug 2024 • Sheel Nidhan, Haoliang Jiang, Lalit Ghule, Clancy Umphrey, Rishikesh Ranade, Jay Pathak
In this paper, we propose a domain-decomposition-based deep learning (DL) framework, named transient-CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs).
no code implementations • 20 May 2024 • Zongren Zou, Adar Kahana, Enrui Zhang, Eli Turkel, Rishikesh Ranade, Jay Pathak, George Em Karniadakis
We extend a recently proposed machine-learning-based iterative solver, i.e., the hybrid iterative transferable solver (HINTS), to solve the scattering problem described by the Helmholtz equation in an exterior domain with a complex absorbing boundary condition.
no code implementations • 23 Feb 2024 • Priyesh Kakka, Sheel Nidhan, Rishikesh Ranade, Jonathan F. MacArt
In this study, we introduce a domain-decomposition-based distributed training and inference approach for message-passing neural networks (MPNNs).
no code implementations • 19 Jun 2023 • Rucha Apte, Sheel Nidhan, Rishikesh Ranade, Jay Pathak
In a preliminary attempt to address the problem of data scarcity in physics-based machine learning, we introduce a novel methodology for data generation in physics-based simulations.
no code implementations • 4 Nov 2022 • Lalit Ghule, Rishikesh Ranade, Jay Pathak
In recent years, machine learning (ML) techniques developed for Natural Language Processing (NLP) have permeated the development of better computer vision algorithms.
no code implementations • 11 Oct 2022 • Rishikesh Ranade, Chris Hill, Lalit Ghule, Jay Pathak
The numerical experiments show that our approach outperforms ML baselines in terms of 1) accuracy across quantitative metrics and 2) generalization to out-of-distribution conditions as well as domain sizes.
no code implementations • 10 Sep 2022 • Rishikesh Ranade, Haiyang He, Jay Pathak, Norman Chang, Akhilesh Kumar, Jimin Wen
Thermal analysis provides deeper insights into the behavior of electronic chips under different temperature scenarios and enables faster design exploration.
no code implementations • 28 Aug 2022 • Enrui Zhang, Adar Kahana, Alena Kopaničáková, Eli Turkel, Rishikesh Ranade, Jay Pathak, George Em Karniadakis
Neural networks suffer from spectral bias, having difficulty representing the high-frequency components of a function, while relaxation methods can resolve high frequencies efficiently but stall at moderate to low frequencies.
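The complementarity claimed here is easy to demonstrate: a classical relaxation sweep damps oscillatory error modes quickly but leaves smooth modes almost untouched. A minimal sketch, assuming a 1-D Poisson model problem with weighted Jacobi relaxation (an illustrative stand-in, not code from the paper):

```python
import math

def jacobi_mode_decay(k, n=63, sweeps=10, omega=2.0 / 3.0):
    # Weighted Jacobi on the 1-D Poisson system A e = 0 (Dirichlet ends),
    # starting from the error mode e_i = sin(k*pi*x_i); since the exact
    # solution is zero, the iterate itself is the remaining error.
    e = [math.sin(k * math.pi * (i + 1) / (n + 1)) for i in range(n)]
    for _ in range(sweeps):
        e = [(1 - omega) * e[i]
             + omega * ((e[i - 1] if i > 0 else 0.0)
                        + (e[i + 1] if i < n - 1 else 0.0)) / 2.0
             for i in range(n)]
    return max(abs(v) for v in e)

low = jacobi_mode_decay(k=1)    # smooth mode: barely damped
high = jacobi_mode_decay(k=40)  # oscillatory mode: damped almost to zero
```

After ten sweeps the oscillatory mode has shrunk by many orders of magnitude while the smooth mode keeps over 90% of its amplitude, which is exactly the regime where a hybrid scheme can hand the smooth, low-frequency content to a neural component.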
no code implementations • 7 Oct 2021 • Rishikesh Ranade, Chris Hill, Haiyang He, Amir Maleki, Norman Chang, Jay Pathak
Numerical simulations for engineering applications solve partial differential equations (PDE) to model various physical processes.
no code implementations • 29 Sep 2021 • Rishikesh Ranade, Derek Christopher Hill, Haiyang He, Amir Maleki, Norman Chang, Jay Pathak
Numerical simulations for engineering applications solve partial differential equations (PDE) to model various physical processes.
1 code implementation • ICLR Workshop GTRL 2021 • Amir Maleki, Jan Heyse, Rishikesh Ranade, Haiyang He, Priya Kasimbeg, Jay Pathak
We present a notion of geometry encoding suitable for machine learning-based numerical simulation.
no code implementations • 6 Apr 2021 • Rishikesh Ranade, Chris Hill, Haiyang He, Amir Maleki, Jay Pathak
In this work, we propose a hybrid solver to solve partial differential equations (PDEs) in the latent space.
no code implementations • 6 Apr 2021 • Anran Jiao, Haiyang He, Rishikesh Ranade, Jay Pathak, Lu Lu
Learning and solving governing equations of a physical system, represented by partial differential equations (PDEs), from data is a central challenge in a variety of areas of science and engineering.
no code implementations • 5 Apr 2021 • Rishikesh Ranade, Kevin Gitushi, Tarek Echekki
The DeepONet is a machine learning model that takes as inputs the unconditional means of the PCs at a given spatial location together with discrete PC coordinates, and predicts the joint probability density value for the corresponding PC coordinate.
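The branch/trunk structure this implies can be sketched generically: a branch net embeds the conditioning inputs (the unconditional PC means), a trunk net embeds the query PC coordinate, and their dot product gives the scalar output (a density estimate once trained). All layer sizes and inputs below are hypothetical, not the trained model from the paper:

```python
import math
import random

def mlp_forward(params, x):
    # Tiny fully connected net: tanh hidden layers, linear output layer.
    for W, b in params[:-1]:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    W, b = params[-1]
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def make_params(sizes, seed=0):
    # Random untrained weights (hypothetical sizes), zero biases.
    rnd = random.Random(seed)
    return [([[rnd.uniform(-1, 1) for _ in range(m)] for _ in range(n)],
             [0.0] * n) for m, n in zip(sizes, sizes[1:])]

def deeponet(branch, trunk, pc_means, y):
    # DeepONet output: dot product of branch features (conditioned on the
    # unconditional PC means) and trunk features (evaluated at coordinate y).
    b = mlp_forward(branch, pc_means)
    t = mlp_forward(trunk, y)
    return sum(bi * ti for bi, ti in zip(b, t))

branch = make_params([3, 8, 4])   # assumes 3 unconditional PC means
trunk = make_params([2, 8, 4])    # assumes a 2-D PC coordinate
value = deeponet(branch, trunk, [0.1, -0.2, 0.3], [0.5, 0.5])
```

With random weights the output is an arbitrary scalar; training would fit it to the joint probability density at the queried PC coordinate.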
no code implementations • 21 Mar 2021 • Rishikesh Ranade, Jay Pathak
In machine learning applications, 3-D surfaces are most suitably represented with point clouds or meshes, and learning representations of interacting geometries from point-based representations is challenging.
no code implementations • 9 Dec 2020 • Jaydeep Rade, Aditya Balu, Ethan Herron, Jay Pathak, Rishikesh Ranade, Soumik Sarkar, Adarsh Krishnamurthy
We achieve this by training multiple networks, each learning a different step of the overall topology optimization methodology, making the framework more consistent with the topology optimization algorithm.
no code implementations • 18 May 2020 • Rishikesh Ranade, Genong Li, Shaoping Li, Tarek Echekki
In this work, we address these issues by introducing an adaptive training algorithm that relies on multi-layer perceptron (MLP) neural networks for regression and self-organizing maps (SOMs) to cluster the data for tabulation using different networks.
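As a toy illustration of the clustering half of this pipeline, the sketch below trains a minimal 1-D self-organizing map on bimodal data; the per-cluster MLP regressors are omitted, and the node count, schedules, and data are hypothetical rather than taken from the paper:

```python
import math
import random

def train_som_1d(data, n_nodes=4, epochs=50, lr=0.5):
    # Minimal 1-D self-organizing map: for each sample, the closest node
    # (the "winner") and its neighbors move toward the sample, with a
    # neighborhood radius and learning rate that shrink over the epochs.
    random.seed(0)
    nodes = [random.random() for _ in range(n_nodes)]
    for epoch in range(epochs):
        radius = max(1.0 - epoch / epochs, 0.01) * n_nodes / 2
        rate = lr * (1.0 - epoch / epochs) + 0.01
        for sample in data:
            winner = min(range(n_nodes), key=lambda j: abs(nodes[j] - sample))
            for j in range(n_nodes):
                influence = math.exp(-((j - winner) ** 2) / (2 * radius ** 2))
                nodes[j] += rate * influence * (sample - nodes[j])
    return sorted(nodes)

# Bimodal data: two well-separated clusters the map should specialize to,
# so that a separate regression network could then be fit per cluster.
data = [0.05, 0.10, 0.15] * 10 + [0.85, 0.90, 0.95] * 10
nodes = train_som_1d(data)
```

After training, the extreme nodes settle inside the two clusters, giving the partition over which separate regression networks would be tabulated.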
no code implementations • 17 May 2020 • Rishikesh Ranade, Chris Hill, Jay Pathak
The two solver characteristics that have been adopted in this work are: 1) the use of discretization-based schemes to approximate spatio-temporal partial derivatives and 2) the use of iterative algorithms to solve linearized PDEs in their discrete form.
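Both characteristics can be sketched together on a toy problem: a second-order central-difference discretization of the 1-D Poisson equation, solved with Gauss-Seidel sweeps. This is an illustrative stand-in under standard textbook choices, not the authors' solver:

```python
import math

n, h = 31, 1.0 / 32                      # interior grid points and spacing
x = [(i + 1) * h for i in range(n)]
f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]  # forcing; exact u = sin(pi x)

# Characteristic 1: central differences turn -u'' = f into the discrete
# system (-u[i-1] + 2*u[i] - u[i+1]) / h^2 = f[i] with zero boundary values.
# Characteristic 2: Gauss-Seidel sweeps solve that linear system iteratively.
u = [0.0] * n
for _ in range(1000):
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u[i] = (left + right + h * h * f[i]) / 2.0

err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n))
```

The iterate converges to the discrete solution, which in turn approximates sin(pi x) up to the second-order discretization error.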