no code implementations • 1 Oct 2024 • Gareth Hardwick, Senwei Liang, Haizhao Yang
In this paper, we introduce a new finite expression method (FEX) to solve high-dimensional partial integro-differential equations (PIDEs).
no code implementations • 13 Oct 2023 • Hardeep Bassi, Yuanran Zhu, Senwei Liang, Jia Yin, Cian C. Reeves, Vojtech Vlcek, Chao Yang
In this paper, we propose using long short-term memory recurrent neural networks (LSTM-RNNs) to learn and represent the nonlinear integral operators that appear in nonlinear integro-differential equations (IDEs).
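To make the target of the learning problem concrete: at each time step, an IDE solver must evaluate a memory term that integrates over the solution's entire history. A minimal trapezoidal-rule sketch of such a term is below (the quantity an LSTM-RNN would learn to represent recursively); the kernel `K`, nonlinearity `g`, and history `u_hist` are illustrative placeholders, not taken from the paper.

```python
def memory_integral(K, g, u_hist, dt):
    """Approximate I(t) = integral_0^t K(t - s) * g(u(s)) ds
    by the trapezoid rule, where u_hist holds the samples
    u(0), u(dt), ..., u(t)."""
    n = len(u_hist) - 1
    if n == 0:
        return 0.0
    t = n * dt
    vals = [K(t - i * dt) * g(u_hist[i]) for i in range(n + 1)]
    # Trapezoid rule: half weight on the endpoints.
    return dt * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
```

Evaluating this sum directly costs O(t/dt) work per step and grows with the history length, which is one motivation for replacing it with a fixed-size recurrent state.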
no code implementations • 27 May 2023 • Senwei Liang, Aditya N. Singh, Yuanran Zhu, David T. Limmer, Chao Yang
We propose a reinforcement learning-based method to identify important configurations that connect reactant and product states along chemical reaction paths.
no code implementations • 27 Oct 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Liang Lin
The self-attention mechanism has emerged as a critical component for improving the performance of various backbone neural networks.
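For readers unfamiliar with the mechanism this entry refers to, here is a minimal pure-Python sketch of scaled dot-product self-attention, the building block that such backbone modules refine; the names, shapes, and weight layout are illustrative, not taken from the paper.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X, Wq, Wk, Wv):
    """X: list of token vectors; Wq/Wk/Wv: square projection
    matrices given as lists of rows. Returns one attended
    vector per token."""
    def matvec(W, v):
        return [sum(w * x for w, x in zip(row, v)) for row in W]
    Q = [matvec(Wq, x) for x in X]
    K = [matvec(Wk, x) for x in X]
    V = [matvec(Wv, x) for x in X]
    d = len(X[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        # Output is a convex combination of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(d)])
    return out
```

Each output token is a weighted average of all value vectors, with weights determined by query-key similarity; plug-and-play attention modules typically modulate feature maps with weights computed in an analogous way.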
1 code implementation • 7 Aug 2022 • Zhongzhan Huang, Senwei Liang, Hong Zhang, Haizhao Yang, Liang Lin
The large-scale simulation of dynamical systems is critical in numerous scientific and engineering disciplines.
no code implementations • 16 Jul 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang, Liang Lin
Recently, many plug-and-play self-attention modules (SAMs) have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).
1 code implementation • 21 Jun 2022 • Senwei Liang, Haizhao Yang
Designing efficient and accurate numerical solvers for high-dimensional partial differential equations (PDEs) remains a challenging and important topic in computational science and engineering, mainly due to the "curse of dimensionality" in designing numerical schemes that scale in dimension.
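The "curse of dimensionality" mentioned here can be made concrete with a toy count: a tensor-product grid with N nodes per axis contains N**d points, so classical mesh-based discretizations grow exponentially in the dimension d. A small illustrative snippet (not from the paper):

```python
def grid_points(n_per_axis, dim):
    """Number of nodes in a tensor-product grid with
    n_per_axis nodes along each of dim axes."""
    return n_per_axis ** dim

# With only 10 nodes per axis, the grid size explodes with dimension.
for d in (1, 3, 10, 100):
    print(d, grid_points(10, d))
```

Already at d = 10 the grid holds ten billion points, and at d = 100 it exceeds the number of atoms in the observable universe, which is why high-dimensional PDE solvers turn to mesh-free representations such as neural networks.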
no code implementations • ICLR 2022 • Senwei Liang, Zhongzhan Huang, Hong Zhang
We propose stiffness-aware neural network (SANN), a new method for learning Hamiltonian dynamical systems from data.
no code implementations • 9 Sep 2021 • Yiqi Gu, John Harlim, Senwei Liang, Haizhao Yang
In this paper, we consider the density estimation problem associated with the stationary measure of ergodic Itô diffusions from a discrete time series that approximates the solutions of the underlying stochastic differential equations.
no code implementations • 13 Jul 2021 • Zhongzhan Huang, Mingfu Liang, Senwei Liang, Wei He
Deep neural networks suffer from catastrophic forgetting when learning multiple tasks sequentially, and a growing number of approaches have been proposed to mitigate this problem.
no code implementations • 11 Jul 2021 • Wei He, Zhongzhan Huang, Mingfu Liang, Senwei Liang, Haizhao Yang
A filter may be important under one criterion yet unnecessary under another, which indicates that each criterion captures only a partial view of a filter's comprehensive "importance".
1 code implementation • 12 Jun 2021 • Senwei Liang, Shixiao W. Jiang, John Harlim, Haizhao Yang
In a well-posed elliptic PDE setting, when the hypothesis space consists of neural networks with either infinite width or depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data.
no code implementations • 13 Jan 2021 • Senwei Liang, Liyao Lyu, Chunmei Wang, Haizhao Yang
We propose reproducing activation functions (RAFs) to improve deep learning accuracy for various applications ranging from computer vision to scientific computing.
1 code implementation • 1 Jan 2021 • Jiawei Xue, Nan Jiang, Senwei Liang, Qiyuan Pang, Takahiro Yabe, Satish V. Ukkusuri, Jianzhu Ma
We apply the method to 11,790 urban road networks across 30 cities worldwide to measure the spatial homogeneity of road networks within each city and across different cities.
1 code implementation • 28 Nov 2020 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang
Recently, many plug-and-play self-attention modules have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).
no code implementations • 13 Oct 2019 • John Harlim, Shixiao W. Jiang, Senwei Liang, Haizhao Yang
This article presents a general framework for recovering missing dynamical systems using available data and machine learning techniques.
2 code implementations • 12 Aug 2019 • Senwei Liang, Zhongzhan Huang, Mingfu Liang, Haizhao Yang
Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image using the statistics of a batch of images, and hence introduces noise into the gradient of the training loss.
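The batch-dependence the entry describes is visible in the standard BN computation: each feature is standardized by the mean and variance of whatever images happen to share the batch, so the normalized value of one sample depends on the others. A minimal pure-Python sketch (illustrative, not the paper's method):

```python
import math

def batch_norm(batch, eps=1e-5):
    """Standardize each feature of a batch of feature vectors
    by the batch mean and (biased) batch variance."""
    n = len(batch)
    d = len(batch[0])
    means = [sum(x[j] for x in batch) / n for j in range(d)]
    vars_ = [sum((x[j] - means[j]) ** 2 for x in batch) / n
             for j in range(d)]
    # Each output depends on the whole batch through means/vars_,
    # which is the source of the batch-sampling noise in the gradient.
    return [[(x[j] - means[j]) / math.sqrt(vars_[j] + eps)
             for j in range(d)]
            for x in batch]
```

Because `means` and `vars_` change with the random batch composition, the same image is normalized differently across iterations, injecting noise into training.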
3 code implementations • 25 May 2019 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Haizhao Yang
Attention networks have successfully boosted the performance in various vision problems.
Ranked #142 on Image Classification on CIFAR-100 (using extra training data)
2 code implementations • 14 Nov 2018 • Senwei Liang, Yuehaw Khoo, Haizhao Yang
Overfitting frequently occurs in deep learning.
Ranked #9 on Image Classification on SVHN