no code implementations • 20 Feb 2023 • Xinquan Huang, Wenlei Shi, Qi Meng, Yue Wang, Xiaotian Gao, Jia Zhang, Tie-Yan Liu
Neural networks have shown great potential in accelerating the solution of partial differential equations (PDEs).
no code implementations • 25 Aug 2022 • Zhengyi Li, Cong Guo, Zhanda Zhu, Yangjie Zhou, Yuxian Qiu, Xiaotian Gao, Jingwen Leng, Minyi Guo
To deal with the runtime overhead, we use a coarse-grained version of the border function.
no code implementations • 19 Jun 2022 • Xinquan Huang, Wenlei Shi, Xiaotian Gao, Xinran Wei, Jia Zhang, Jiang Bian, Mao Yang, Tie-Yan Liu
We investigate the physical information captured by the MSR loss, which we call long-range entanglements, and identify a key challenge: the neural network must have the capacity to model long-range entanglements in the spatial domain of the PDE, and these entanglement patterns vary across different PDEs.
1 code implementation • 18 Feb 2022 • Di He, Shanda Li, Wenlei Shi, Xiaotian Gao, Jia Zhang, Jiang Bian, LiWei Wang, Tie-Yan Liu
In this work, we develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
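As background for this entry, the standard physics-informed loss combines a differential-equation residual with boundary terms. The sketch below is a hedged toy illustration for the ODE u'(x) = u(x), u(0) = 1, not the paper's acceleration method: it uses finite differences in place of the automatic differentiation a real PINN would use, and `pinn_loss` is a name introduced here for illustration.

```python
import numpy as np

def pinn_loss(u, xs, h=1e-5):
    """Toy physics-informed loss for the ODE u'(x) = u(x), u(0) = 1.

    `u` is any callable surrogate (e.g. a neural network). The derivative is
    approximated by central finite differences for illustration; a real PINN
    would compute it with automatic differentiation.
    """
    du = (u(xs + h) - u(xs - h)) / (2 * h)         # approximate u'(x)
    residual = np.mean((du - u(xs)) ** 2)           # equation residual term
    boundary = (u(np.array([0.0]))[0] - 1.0) ** 2   # boundary condition term
    return residual + boundary

xs = np.linspace(0.0, 1.0, 50)
print(pinn_loss(np.exp, xs))  # exact solution: loss near zero
print(pinn_loss(np.sin, xs))  # wrong function: large loss
```

Training a PINN amounts to minimizing this kind of loss over network parameters; the cost of repeated differentiation through the network is what acceleration methods target.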
1 code implementation • ICLR 2022 • Cong Guo, Yuxian Qiu, Jingwen Leng, Xiaotian Gao, Chen Zhang, Yunxin Liu, Fan Yang, Yuhao Zhu, Minyi Guo
This paper proposes an on-the-fly DFQ framework with sub-second quantization time, called SQuant, which can quantize networks on inference-only devices with low computation and memory requirements.
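For context, data-free quantization starts from a uniform rounding scheme applied directly to pretrained weights, with no calibration data. The sketch below shows a plain round-to-nearest symmetric baseline, not SQuant's refinement of individual rounding decisions; `quantize_weights` is a name introduced here for illustration.

```python
import numpy as np

def quantize_weights(w, n_bits=8):
    """Per-tensor symmetric uniform quantization (round-to-nearest baseline).

    Data-free methods start from a scheme like this and then refine how
    individual weights are rounded, without needing any input data.
    """
    qmax = 2 ** (n_bits - 1) - 1                        # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax                      # map max |w| onto the int range
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)   # integer codes
    return q * scale, scale                             # dequantized weights + scale

w = np.random.randn(64, 32).astype(np.float32)
w_hat, scale = quantize_weights(w)
print(np.abs(w - w_hat).max() <= scale / 2 + 1e-6)  # rounding error is bounded
```

Because the scheme needs only the weights themselves, it can run on inference-only devices; the quantization-time and accuracy gains claimed above come from how the rounding is refined beyond this baseline.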
1 code implementation • 6 Aug 2021 • Yuge Zhang, Quanlu Zhang, Li Lyna Zhang, Yaming Yang, Chenqian Yan, Xiaotian Gao, Yuqing Yang
One of the key challenges in Neural Architecture Search (NAS) is to efficiently rank the performances of architectures.
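Ranking quality in predictor-based NAS is commonly measured by rank correlation between a cheap predictor's scores and the true accuracies. The sketch below is a minimal Kendall-tau implementation offered as background; it is not the paper's method, and `kendall_tau` is a name introduced here for illustration.

```python
def kendall_tau(a, b):
    """Kendall rank correlation between two score lists of equal length.

    Counts pairs ordered the same way (concordant) vs. oppositely
    (discordant); +1 means identical rankings, -1 means fully reversed.
    """
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0: perfect ranking
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0: fully reversed
```

A predictor with high Kendall tau lets the search discard weak architectures without fully training them.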
no code implementations • 10 Jun 2020 • Xiaotian Gao, Cui Wei, Lintao Zhang, Mao Yang
The training and inference efficiency of deep neural networks depends heavily on the performance of tensor operators on hardware platforms.