2 code implementations • 7 Nov 2019 • Shanshan Tang, Bo Li, Haijun Yu
As spectral accuracy is hard to obtain by direct training of deep neural networks, ChebNets provide a practical way to achieve it, and they are expected to be useful in real applications that require efficient approximations of smooth functions.
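A minimal sketch of the spectral accuracy referred to here, using plain Chebyshev interpolation from NumPy rather than the ChebNet construction itself (the target function, node choice, and degrees below are illustrative assumptions):

```python
# Not the ChebNet code: a toy Chebyshev interpolation showing spectral accuracy.
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(np.sin(np.pi * x))          # smooth target on [-1, 1] (assumed example)

x_test = np.linspace(-1.0, 1.0, 2001)
for degree in (4, 8, 16, 32):
    nodes = np.cos(np.pi * np.arange(degree + 1) / degree)   # Chebyshev-Lobatto nodes
    coeffs = C.chebfit(nodes, f(nodes), degree)              # interpolating Chebyshev expansion
    err = np.max(np.abs(C.chebval(x_test, coeffs) - f(x_test)))
    print(f"degree {degree:3d}: max error {err:.2e}")        # error decays faster than any fixed power of 1/degree
```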
1 code implementation • 6 Sep 2020 • Haijun Yu, Xinyuan Tian, Weinan E, Qianxiao Li
We further apply this method to study Rayleigh-Bénard convection and learn Lorenz-like, low-dimensional autonomous reduced-order models that capture both qualitative and quantitative properties of the underlying dynamics.
1 code implementation • 8 Aug 2023 • Xiaoli Chen, Beatrice W. Soh, Zi-En Ooi, Eleonore Vissol-Gaudin, Haijun Yu, Kostya S. Novoselov, Kedar Hippalgaonkar, Qianxiao Li
Specifically, we learn three interpretable thermodynamic coordinates and build a dynamical landscape of polymer stretching, including the identification of stable and transition states and the control of the stretching rate.
no code implementations • 4 Aug 2018 • Shanshan Tang, Haijun Yu
While denoising is often considered unnecessary in big data applications, we show in this paper that it is helpful in urban traffic analysis by applying bounded total variation denoising to the urban road traffic prediction and clustering problem.
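As a hedged illustration of the denoising step (not the paper's bounded-TV formulation or its data), scikit-image's Chambolle TV solver applied to a synthetic piecewise-constant speed series:

```python
# Stand-in sketch: TV denoising of a synthetic traffic-speed series
# (the signal shape, noise level, and weight are illustrative assumptions).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
t = np.arange(288)                                    # 5-minute samples over one day
speed = np.where((t > 90) & (t < 120), 25.0, 55.0)    # piecewise-constant "rush hour" dip, km/h
noisy = speed + 6.0 * rng.standard_normal(t.size)

denoised = denoise_tv_chambolle(noisy, weight=5.0)    # larger weight -> stronger smoothing
print(np.abs(noisy - speed).mean(), np.abs(denoised - speed).mean())
```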
no code implementations • 6 May 2019 • Weiwen Wu, Haijun Yu, Peijun Chen, Fulin Luo, Fenglin Liu, Qian Wang, Yining Zhu, Yanbo Zhang, Jian Feng, Hengyong Yu
Second, we employ the direct inversion (DI) method to obtain initial material decomposition results, and a set of image patches is extracted from the mode-1 unfolding of the normalized material image tensor to train a unified dictionary with the K-SVD technique.
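A rough sketch of that step with toy shapes; K-SVD is not available in scikit-learn, so MiniBatchDictionaryLearning stands in for it here, and the tensor, patch size, and dictionary size are assumptions:

```python
# Illustrative only: mode-1 unfolding, patch extraction, and dictionary learning
# on a random placeholder tensor (MiniBatchDictionaryLearning replaces K-SVD).
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
materials = rng.random((3, 128, 128))                 # toy (material, y, x) decomposition tensor
materials /= materials.max()                          # crude normalization

# Mode-1 unfolding: one row per material image (C-order reshape variant).
unfolded = materials.reshape(materials.shape[0], -1)  # shape (3, 16384)

# Sample patches spanning all three material rows of the unfolded matrix.
patches = extract_patches_2d(unfolded, (3, 16), max_patches=5000, random_state=0)
patches = patches.reshape(len(patches), -1)           # (n_patches, 48)

dico = MiniBatchDictionaryLearning(n_components=96, batch_size=256, random_state=0)
atoms = dico.fit(patches).components_                 # learned dictionary atoms, shape (96, 48)
```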
no code implementations • 9 Sep 2019 • Bo Li, Shanshan Tang, Haijun Yu
In this paper, we construct deep neural networks with rectified power units (RePU), which can give better approximations for smooth functions.
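The activation itself is simple; a minimal sketch, where s is the power and s = 1 recovers ReLU:

```python
# Rectified power unit: sigma_s(x) = max(0, x)**s; s = 1 is ReLU, s = 2 a rectified quadratic unit.
import numpy as np

def repu(x, s=2):
    """Rectified power unit of degree s."""
    return np.maximum(0.0, x) ** s

print(repu(np.linspace(-2.0, 2.0, 5), s=2))   # [0. 0. 0. 1. 4.]
```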
no code implementations • 14 Mar 2019 • Bo Li, Shanshan Tang, Haijun Yu
Compared with ReLU networks, the sizes of the RePU networks required by our constructive proofs to approximate functions in Sobolev and Korobov spaces with an error tolerance $\varepsilon$ are in general $\mathcal{O}(\log \frac{1}{\varepsilon})$ times smaller than the sizes of the corresponding ReLU networks.
no code implementations • 30 May 2023 • Zhisheng Wang, Haijun Yu, Yixing Huang, Shunli Wang, Song Ni, Zongfeng Li, Fenglin Liu, Junning Cui
Micro-computed tomography (micro-CT) is a widely used, state-of-the-art instrument for studying the morphological structures of objects in various fields.
no code implementations • 21 Sep 2023 • Zhisheng Wang, Zihan Deng, Fenglin Liu, Yixing Huang, Haijun Yu, Junning Cui
The second uses multiple networks to train different directional Hilbert filtering models for the DBP images of multiple linear scans, respectively, and then overlays the reconstructed results, i.e., Multiple Networks Overlaying (MNetO).
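For orientation, the classical (non-learned) operation these networks target is a Hilbert transform along a chosen direction; a hedged sketch with a placeholder array:

```python
# Placeholder DBP image and a plain Hilbert transform along one axis via SciPy;
# this is not MNetO's trained directional filters.
import numpy as np
from scipy.signal import hilbert

dbp = np.random.default_rng(0).standard_normal((128, 128))  # stand-in for a DBP image
analytic = hilbert(dbp, axis=0)                             # analytic signal along the filtering direction
hilbert_filtered = np.imag(analytic)                         # Hilbert transform of each column
```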
no code implementations • 17 Jan 2024 • Mareike Thies, Fabian Wagner, Noah Maul, Haijun Yu, Manuela Meier, Linda-Sophie Schneider, Mingxuan Gu, Siyuan Mei, Lukas Folle, Andreas Maier
The analytic Jacobian for the backprojection operation, which is at the core of the proposed method, is made publicly available.