no code implementations • 3 Nov 2022 • Xi-Ci Yang, Z. Y. Xie, Xiao-Tao Yang
One is a neural network called TaylorNet, which aims to approximate the general mapping from input data to output directly in terms of a Taylor series, without resorting to any nonlinear activation functions.
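The core idea, approximating a mapping by a truncated Taylor (polynomial) expansion with learnable coefficients instead of composing activation functions, can be illustrated with a minimal sketch. This is not the TaylorNet architecture from the paper; it is a hypothetical one-dimensional example in which the coefficients are fit by least squares rather than trained by gradient descent.

```python
import numpy as np

def taylor_layer(x, coeffs):
    """Evaluate f(x) ~= c0 + c1*x + c2*x^2 + ... with learned
    coefficients; no nonlinear activation function is involved."""
    return sum(c * x**k for k, c in enumerate(coeffs))

# Fit the coefficients for a scalar target mapping (illustration only).
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(x)                                    # target mapping
order = 5
A = np.stack([x**k for k in range(order + 1)], axis=1)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit

approx = taylor_layer(x, coeffs)
max_err = float(np.max(np.abs(approx - y)))
print(max_err)                                   # small residual on [-1, 1]
```

A degree-5 expansion already tracks a smooth target closely on a bounded interval, which is the regime where a series-based parametrization is plausible.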
no code implementations • 29 Sep 2021 • Ze-Feng Gao, Peiyu Liu, Xiao-Hui Zhang, Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen
Based on the MPS structure, we propose a new dataset compression method that filters out long-range correlation information in task-agnostic scenarios and uses dataset distillation to supplement that information in task-specific scenarios.
1 code implementation • ACL 2021 • Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen
This paper presents a novel pre-trained language model (PLM) compression approach based on the matrix product operator (MPO for short) from quantum many-body physics.
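The basic MPO construction, factoring a weight matrix into a chain of small local tensors connected by a truncated bond, can be sketched as follows. This is a generic two-site illustration built from a single SVD, with made-up shapes, not the decomposition procedure used in the paper.

```python
import numpy as np

def mpo_decompose(W, m_factors, n_factors, rank):
    """Factor W (m x n), with m = m1*m2 and n = n1*n2, into two local
    tensors A (m1, n1, rank) and B (rank, m2, n2) via a truncated SVD.
    Truncating `rank` below full rank is what yields compression."""
    (m1, m2), (n1, n2) = m_factors, n_factors
    # Reshape to (m1, m2, n1, n2), then group indices as (m1 n1) x (m2 n2).
    T = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, S, Vt = np.linalg.svd(T, full_matrices=False)
    A = (U[:, :rank] * S[:rank]).reshape(m1, n1, rank)
    B = Vt[:rank].reshape(rank, m2, n2)
    return A, B

rng = np.random.default_rng(1)
W = rng.standard_normal((12, 10))                # hypothetical weight matrix
A, B = mpo_decompose(W, (4, 3), (5, 2), rank=6)  # full rank: exact

# Contract the two local tensors back and undo the index regrouping.
T = A.reshape(4 * 5, 6) @ B.reshape(6, 3 * 2)
W_rec = T.reshape(4, 5, 3, 2).transpose(0, 2, 1, 3).reshape(12, 10)
print(float(np.max(np.abs(W_rec - W))))          # near machine precision
```

At full bond rank the reconstruction is exact; compressing a PLM corresponds to keeping only the dominant singular values on each bond, trading a small reconstruction error for far fewer parameters.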
no code implementations • 13 Jan 2021 • Rui Wang, Z. Y. Xie, Baigeng Wang, Tigran Sedrakyan
Namely, the Chern-Simons superconductor describes the planar Néel state, while the Chern-Simons exciton insulator corresponds to the non-uniform chiral spin-liquid.
Strongly Correlated Electrons
1 code implementation • 11 Apr 2019 • Ze-Feng Gao, Song Cheng, Rong-Qiang He, Z. Y. Xie, Hui-Hai Zhao, Zhong-Yi Lu, Tao Xiang
A deep neural network is a parametrization of a multilayer mapping of signals in terms of many alternately arranged linear and nonlinear transformations.
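The "alternating linear and nonlinear transformations" view can be written down directly. A minimal sketch, with a hypothetical layer layout and ReLU as the nonlinearity (the description above does not fix a particular activation):

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of a deep network: alternate an affine (linear)
    transform W @ x + b with an elementwise nonlinearity, ending
    with a final linear layer."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(W @ x + b, 0.0)   # linear map, then ReLU
    return weights[-1] @ x + biases[-1]  # final linear map

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]                     # input, two hidden, output widths
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

y = mlp_forward(rng.standard_normal(4), weights, biases)
print(y.shape)                           # (3,)
```

It is exactly the linear parts (the weight matrices) that a matrix-product-operator representation can compress, which is the setting of the paper above.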