no code implementations • 5 Sep 2024 • Zheyuan Hu, Nazanin Ahmadi Daryakenari, Qianli Shen, Kenji Kawaguchi, George Em Karniadakis
We demonstrate Mamba's superior performance in both interpolation and challenging extrapolation tasks.
no code implementations • 17 Jun 2024 • Zheyuan Hu, Zhongqiang Zhang, George Em Karniadakis, Kenji Kawaguchi
We introduce an innovative approach for solving high-dimensional Fokker-Planck-Lévy (FPL) equations in modeling non-Brownian processes across disciplines such as physics, finance, and ecology.
no code implementations • 17 Jun 2024 • Zheyuan Hu, Kenji Kawaguchi, Zhongqiang Zhang, George Em Karniadakis
We validate our methods on various forward and inverse problems of fractional and tempered fractional PDEs, scaling up to 100,000 dimensions.
no code implementations • 19 Mar 2024 • Lucy Xiaoyang Shi, Zheyuan Hu, Tony Z. Zhao, Archit Sharma, Karl Pertsch, Jianlan Luo, Sergey Levine, Chelsea Finn
In this paper, we make the following observation: high-level policies that index into sufficiently rich and expressive low-level language-conditioned skills can be readily supervised with human feedback in the form of language corrections.
no code implementations • 12 Feb 2024 • Zheyuan Hu, Zhongqiang Zhang, George Em Karniadakis, Kenji Kawaguchi
The score function, defined as the gradient of the log-likelihood (LL), plays a fundamental role in inferring the LL and the probability density function (PDF) and enables fast SDE sampling.
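For context, the score of a density p(x) is the gradient of log p(x). A minimal stand-alone sketch (not the paper's method; the function names are illustrative) verifying the analytic score of a 1-D Gaussian against a finite-difference approximation:

```python
import math

def log_pdf(x, mu=0.0, sigma=1.0):
    # log-density of a 1-D Gaussian N(mu, sigma^2)
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def score_analytic(x, mu=0.0, sigma=1.0):
    # score = d/dx log p(x) = -(x - mu) / sigma^2
    return -(x - mu) / sigma ** 2

def score_fd(x, h=1e-5):
    # central finite-difference approximation of the score
    return (log_pdf(x + h) - log_pdf(x - h)) / (2 * h)

print(abs(score_analytic(0.7) - score_fd(0.7)) < 1e-6)  # True
```

In the high-dimensional setting of the paper, the gradient would be taken by automatic differentiation rather than finite differences; the point here is only the definition.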
no code implementations • 29 Jan 2024 • Jianlan Luo, Zheyuan Hu, Charles Xu, You Liang Tan, Jacob Berg, Archit Sharma, Stefan Schaal, Chelsea Finn, Abhishek Gupta, Sergey Levine
We posit that a significant challenge to widespread adoption of robotic RL, as well as further development of robotic RL methods, is the comparative inaccessibility of such methods.
1 code implementation • 22 Dec 2023 • Zheyuan Hu, Zekun Shi, George Em Karniadakis, Kenji Kawaguchi
We further showcase HTE's convergence to the original PINN loss and its unbiased behavior under specific conditions.
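HTE here refers to Hutchinson trace estimation, which approximates tr(A) as the expectation of v^T A v over random vectors v with E[v v^T] = I. The following stand-alone sketch (not the paper's integration with PINN losses; names and the matrix are illustrative) shows the estimator concentrating around the exact trace:

```python
import random

def hutchinson_trace(A, n_samples=20000, rng=random.Random(0)):
    # Estimate tr(A) as the average of v^T A v over Rademacher vectors v.
    d = len(A)
    total = 0.0
    for _ in range(n_samples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(d)]
        Av = [sum(A[i][j] * v[j] for j in range(d)) for i in range(d)]
        total += sum(v[i] * Av[i] for i in range(d))
    return total / n_samples

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
est = hutchinson_trace(A)
print(abs(est - 9.0) < 0.2)  # estimate concentrates around tr(A) = 9
```

Each sample touches A only through a matrix-vector product, which is what makes the estimator attractive for high-order derivative operators in high dimensions.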
no code implementations • 26 Nov 2023 • Zheyuan Hu, Zhouhao Yang, Yezhen Wang, George Em Karniadakis, Kenji Kawaguchi
To optimize the bias-variance trade-off, we combine the two approaches in a hybrid method that balances the rapid convergence of the biased version with the high accuracy of the unbiased version.
no code implementations • 6 Sep 2023 • Zheyuan Hu, Aaron Rovinsky, Jianlan Luo, Vikash Kumar, Abhishek Gupta, Sergey Levine
We demonstrate the benefits of reusing past data as replay buffer initialization for new tasks, such as the fast acquisition of intricate manipulation skills on a real-world four-fingered robotic hand.
1 code implementation • 23 Jul 2023 • Zheyuan Hu, Khemraj Shukla, George Em Karniadakis, Kenji Kawaguchi
We demonstrate in diverse tests that the proposed method can solve many notoriously hard high-dimensional PDEs, including the Hamilton-Jacobi-Bellman (HJB) and the Schrödinger equations in tens of thousands of dimensions, very fast on a single GPU using the PINNs mesh-free approach.
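As a toy illustration of the mesh-free PINN idea (assumptions: a 1-D ODE u' = u with u(0) = 1 stands in for the PDE, finite differences stand in for automatic differentiation, and all names are illustrative), the residual loss is evaluated at random collocation points rather than on a grid:

```python
import math
import random

def pinn_style_loss(u, n_points=100, h=1e-5, rng=random.Random(1)):
    # Mean squared residual of u'(x) - u(x) = 0 at random collocation
    # points in [0, 1], plus the boundary penalty (u(0) - 1)^2.
    residual = 0.0
    for _ in range(n_points):
        x = rng.random()
        du = (u(x + h) - u(x - h)) / (2 * h)  # stand-in for autodiff
        residual += (du - u(x)) ** 2
    return residual / n_points + (u(0.0) - 1.0) ** 2

exact = math.exp                      # true solution u(x) = e^x
wrong = lambda x: 1.0 + x             # a poor candidate solution
print(pinn_style_loss(exact) < 1e-6)  # near-zero loss for the true solution
print(pinn_style_loss(exact) < pinn_style_loss(wrong))
```

In an actual PINN the candidate u is a neural network trained to minimize this loss; because collocation points are sampled rather than gridded, the cost does not grow with a mesh, which is what enables very high-dimensional problems.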
no code implementations • 1 Mar 2023 • Tianbo Li, Min Lin, Zheyuan Hu, Kunhao Zheng, Giovanni Vignale, Kenji Kawaguchi, A. H. Castro Neto, Kostya S. Novoselov, Shuicheng Yan
Kohn-Sham Density Functional Theory (KS-DFT) has been traditionally solved by the Self-Consistent Field (SCF) method.
no code implementations • 19 Dec 2022 • Kelvin Xu, Zheyuan Hu, Ria Doshi, Aaron Rovinsky, Vikash Kumar, Abhishek Gupta, Sergey Levine
In this paper, we describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks and enable robots with complex multi-fingered hands to learn to perform them through interaction.
1 code implementation • 16 Nov 2022 • Zheyuan Hu, Ameya D. Jagtap, George Em Karniadakis, Kenji Kawaguchi
We also show cases where XPINN already outperforms PINN, in which APINN can still slightly improve on XPINN.
1 code implementation • 19 Jul 2022 • Elliot Dang, Zheyuan Hu, Tong Li
We build the recommenders on the Amazon US Reviews dataset, and fine-tune the pretrained BERT and RoBERTa models with both the traditional fine-tuning paradigm and the new prompt-based learning paradigm.
no code implementations • NeurIPS 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, yet models optimized with empirical risk minimization usually fail on non-i.i.d. test data.
1 code implementation • 24 Oct 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, yet models optimized with empirical risk minimization usually fail on non-i.i.d. test data.
no code implementations • 20 Sep 2021 • Zheyuan Hu, Ameya D. Jagtap, George Em Karniadakis, Kenji Kawaguchi
Specifically, for general multi-layer PINNs and XPINNs, we first provide a prior generalization bound via the complexity of the target functions in the PDE problem, and a posterior generalization bound via the posterior matrix norms of the networks after optimization.
1 code implementation • 9 May 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
In this paper, we propose the Heterogeneous Risk Minimization (HRM) framework to achieve joint learning of the latent heterogeneity among the data and the invariant relationship, which leads to stable prediction despite distributional shifts.
1 code implementation • 7 Jul 2020 • Zhongkai Hao, Chengqiang Lu, Zheyuan Hu, Hao Wang, Zhenya Huang, Qi Liu, Enhong Chen, Cheekong Lee
Here we propose a novel framework called Active Semi-supervised Graph Neural Network (ASGN) that incorporates both labeled and unlabeled molecules.