Search Results for author: Hanwen Wang

Found 8 papers, 5 papers with code

From virtual patients to digital twins in immuno-oncology: lessons learned from mechanistic quantitative systems pharmacology modeling

no code implementations · 5 Mar 2024 · Hanwen Wang, Theinmozhi Arulraj, Alberto Ippolito, Aleksander S. Popel

Virtual patients and digital patients/twins are two related concepts gaining increasing attention in health care, with the goals of accelerating drug development and improving patients' survival, though each comes with its own limitations.

A Human-Machine Joint Learning Framework to Boost Endogenous BCI Training

no code implementations · 25 Aug 2023 · Hanwen Wang, Yu Qi, Lin Yao, Yueming Wang, Dario Farina, Gang Pan

A human-machine joint learning framework is then proposed: 1) on the human side, we model the learning process as a sequential trial-and-error scenario and propose a novel "copy/new" feedback paradigm to shape the subject's signal generation toward the optimal distribution; 2) on the machine side, we propose a novel adaptive learning algorithm that learns an optimal signal distribution alongside the subject's learning process.

EEG Motor Imagery

An Expert's Guide to Training Physics-informed Neural Networks

1 code implementation · 16 Aug 2023 · Sifan Wang, Shyam Sankaran, Hanwen Wang, Paris Perdikaris

Physics-informed neural networks (PINNs) have been popularized as a deep learning framework that can seamlessly synthesize observational data and partial differential equation (PDE) constraints.
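
The core idea lends itself to a compact sketch: a network u(x) is trained on a loss that sums a data misfit and a PDE residual obtained by automatic differentiation. Below is a minimal JAX illustration for a 1D Poisson problem u''(x) = f(x); the architecture, forcing term, and equal loss weighting are assumptions chosen for brevity, not the authors' setup.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # Simple random initialization for a tanh MLP with layer sizes `sizes`.
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (d_in, d_out)) * jnp.sqrt(2.0 / d_in),
                       jnp.zeros(d_out)))
    return params

def u(params, x):
    # Scalar network output u(x) for a scalar input x.
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def pde_residual(params, x):
    # Residual of u''(x) = f(x), with u'' from nested autodiff.
    u_xx = jax.grad(jax.grad(lambda z: u(params, z)))(x)
    return u_xx - jnp.sin(jnp.pi * x)  # assumed forcing term f(x)

def pinn_loss(params, x_data, u_data, x_colloc):
    # Data misfit plus mean-squared PDE residual at collocation points.
    data_loss = jnp.mean((jax.vmap(lambda x: u(params, x))(x_data) - u_data) ** 2)
    phys_loss = jnp.mean(jax.vmap(lambda x: pde_residual(params, x))(x_colloc) ** 2)
    return data_loss + phys_loss
```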

Random Weight Factorization Improves the Training of Continuous Neural Representations

1 code implementation · 3 Oct 2022 · Sifan Wang, Hanwen Wang, Jacob H. Seidman, Paris Perdikaris

Continuous neural representations have recently emerged as a powerful and flexible alternative to classical discretized representations of signals.

Inverse Rendering
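
As an illustration of the factorization itself: each dense kernel W is rewritten as W = V · diag(exp(s)), so every output neuron learns a scale exp(s_k) along with a direction. The JAX sketch below is a toy version; the initialization constants mu and sigma in particular are placeholders, not the paper's recommended values.

```python
import jax
import jax.numpy as jnp

def init_factorized_dense(key, d_in, d_out, mu=0.5, sigma=0.1):
    # Factorize the kernel into a per-neuron log-scale s and a direction
    # matrix V; mu and sigma here are placeholder choices.
    k1, k2 = jax.random.split(key)
    W = jax.random.normal(k1, (d_in, d_out)) * jnp.sqrt(2.0 / d_in)
    s = mu + sigma * jax.random.normal(k2, (d_out,))  # random log-scales
    V = W / jnp.exp(s)  # so that exp(s) * V recovers W at initialization
    return {"s": s, "V": V, "b": jnp.zeros(d_out)}

def factorized_dense(p, x):
    # Recompose the kernel from its scale and direction factors.
    return x @ (jnp.exp(p["s"]) * p["V"]) + p["b"]
```

Optimizing (s, V) in place of W leaves the function class unchanged but alters the geometry of gradient descent, which is the effect the paper studies.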

Improved architectures and training algorithms for deep operator networks

1 code implementation · 4 Oct 2021 · Sifan Wang, Hanwen Wang, Paris Perdikaris

In this work we analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel (NTK) theory, and reveal a bias that favors the approximation of functions with larger magnitudes.

Operator learning
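
For readers unfamiliar with the architecture: a DeepONet pairs a branch network, which encodes the input function sampled at fixed sensor locations, with a trunk network, which encodes the query coordinate, and takes their inner product. A minimal JAX sketch, with sizes and initialization as illustrative assumptions rather than the paper's improved architecture:

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (d_in, d_out)) * jnp.sqrt(2.0 / d_in),
                       jnp.zeros(d_out)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def deeponet(branch, trunk, u_sensors, y):
    # u_sensors: (m,) input function sampled at m fixed sensor points
    # y: (d,) query coordinate; output is the scalar G(u)(y)
    return jnp.dot(mlp(branch, u_sensors), mlp(trunk, y))
```

Both subnetworks must share the same output width so the inner product is defined.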

Enhancing the trainability and expressivity of deep MLPs with globally orthogonal initialization

no code implementations · NeurIPS Workshop DLDE 2021 · Hanwen Wang, Isabelle Crawford-Eng, Paris Perdikaris

Multilayer Perceptrons (MLPs) define a fundamental model class that forms the backbone of many modern deep learning architectures.
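
The paper's globally orthogonal scheme is its own construction, but the underlying idea can be illustrated with standard layer-wise orthogonal initialization, which JAX ships with. The snippet below shows only that baseline, not the method proposed in the paper.

```python
import jax
import jax.numpy as jnp

def init_orthogonal_mlp(key, sizes):
    # Layer-wise orthogonal initialization: each kernel is orthonormal along
    # its smaller dimension, preserving signal norms at initialization.
    init = jax.nn.initializers.orthogonal()
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((init(sub, (d_in, d_out)), jnp.zeros(d_out)))
    return params
```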

Learning the solution operator of parametric partial differential equations with physics-informed DeepOnets

2 code implementations · 19 Mar 2021 · Sifan Wang, Hanwen Wang, Paris Perdikaris

Deep operator networks (DeepONets) are receiving increased attention thanks to their demonstrated capability to approximate nonlinear operators between infinite-dimensional Banach spaces.
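
The "physics-informed" part means the DeepONet output is differentiated with respect to its trunk input and penalized against the governing equation, so no paired input/output solution data are required. A JAX sketch for a simple antiderivative-operator example, ds/dy = u(y) with s(0) = 0; the network shapes and parameter initialization are assumed as in the DeepONet sketch above.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def pi_deeponet_loss(branch, trunk, u_sensors, y_colloc, u_colloc):
    # Enforce ds/dy = u(y) at collocation points y_colloc, plus s(0) = 0.
    def G(y):
        # Scalar DeepONet output G(u)(y) for a scalar query y.
        return jnp.dot(mlp(branch, u_sensors), mlp(trunk, jnp.atleast_1d(y)))
    ds_dy = jax.vmap(jax.grad(G))(y_colloc)      # autodiff w.r.t. the query y
    physics = jnp.mean((ds_dy - u_colloc) ** 2)  # ODE residual
    ic = G(0.0) ** 2                             # initial condition s(0) = 0
    return physics + ic
```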

On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks

1 code implementation · 18 Dec 2020 · Sifan Wang, Hanwen Wang, Paris Perdikaris

Physics-informed neural networks (PINNs) are demonstrating remarkable promise in integrating physical models with gappy and noisy observational data, but they still struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
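
The remedy studied in the paper starts from random Fourier feature embeddings: lifting the inputs through sinusoids whose frequencies are drawn at several bandwidths sigma biases the network toward the corresponding frequency bands. A JAX sketch of the embedding, where the sigmas and feature count are illustrative assumptions:

```python
import jax
import jax.numpy as jnp

def init_fourier(key, d_in, sigmas=(1.0, 10.0), num_features=64):
    # One fixed (non-trainable) frequency matrix per bandwidth sigma.
    Bs = []
    for sigma in sigmas:
        key, sub = jax.random.split(key)
        Bs.append(sigma * jax.random.normal(sub, (num_features, d_in)))
    return Bs

def fourier_features(Bs, x):
    # gamma_i(x) = [sin(2*pi*B_i x), cos(2*pi*B_i x)] for each bandwidth i.
    return [jnp.concatenate([jnp.sin(2 * jnp.pi * (B @ x)),
                             jnp.cos(2 * jnp.pi * (B @ x))]) for B in Bs]
```

In the paper's multi-scale architecture these per-bandwidth embeddings are, roughly, passed through a shared network and merged before the output layer.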
