Search Results for author: Qixuan Zhou

Found 4 papers, 0 papers with code

Demystifying Lazy Training of Neural Networks from a Macroscopic Viewpoint

no code implementations • 7 Apr 2024 • Yuqing Li, Tao Luo, Qixuan Zhou

While NTK typically assumes that $\lim_{m\to\infty}\frac{\log \kappa}{\log m}=\frac{1}{2}$ and requires each weight parameter to be scaled by the factor $\frac{1}{\sqrt{m}}$, in our $\theta$-lazy regime we discard this factor and relax the condition to $\lim_{m\to\infty}\frac{\log \kappa}{\log m}>0$.
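
As a rough illustration of the two scalings, the sketch below compares a $\frac{1}{\sqrt{m}}$-scaled two-layer network output with a more general $\kappa = m^{\gamma}$ scaling for some $\gamma > 0$; the ReLU activation, the one-dimensional input, and the particular exponent $\gamma = 0.8$ are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): contrast the NTK scaling,
# where the output is divided by kappa = sqrt(m), with a generic kappa = m**gamma
# scaling for some gamma > 0, matching the relaxed condition
# lim_{m->inf} log(kappa)/log(m) = gamma > 0.

def two_layer_output(x, m, gamma, rng):
    """f(x) = (1/kappa) * sum_k a_k * relu(w_k * x), with kappa = m**gamma."""
    kappa = m ** gamma
    w = rng.standard_normal(m)   # input weights, O(1) initialization
    a = rng.standard_normal(m)   # output weights, O(1) initialization
    return (a @ np.maximum(w * x, 0.0)) / kappa

rng = np.random.default_rng(0)
for m in (100, 10_000, 1_000_000):
    ntk = two_layer_output(1.0, m, gamma=0.5, rng=rng)   # NTK: log(kappa)/log(m) = 1/2
    lazy = two_layer_output(1.0, m, gamma=0.8, rng=rng)  # illustrative lazy choice: gamma > 0
    print(f"m={m:>9,d}  gamma=0.5 output={ntk:+.4f}  gamma=0.8 output={lazy:+.6f}")
```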

A priori Estimates for Deep Residual Network in Continuous-time Reinforcement Learning

no code implementations • 24 Feb 2024 • Shuyu Yin, Qixuan Zhou, Fei Wen, Tao Luo

However, existing performance analyses ignore the unique characteristics of continuous-time control problems, are unable to directly estimate the generalization error of the Bellman optimal loss, and require a boundedness assumption.

Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width

no code implementations • 24 May 2022 • Hanxu Zhou, Qixuan Zhou, Zhenyuan Jin, Tao Luo, Yaoyu Zhang, Zhi-Qin John Xu

Through experiments under the three-layer condition, our phase diagram suggests a complicated dynamical landscape for deep NNs, consisting of three possible regimes together with their mixtures, and provides guidance for studying deep NNs in different initialization regimes, revealing the possibility of completely different dynamics emerging within a deep NN across its different layers.
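
One common way to probe which regime a trained layer falls into is to track how far its weights move from initialization; the sketch below is a hypothetical diagnostic along those lines, not the paper's code. The layer size, the small perturbation standing in for training, and the helper name relative_distance are all assumptions.

```python
import numpy as np

# Hypothetical regime diagnostic (an assumption, not the paper's implementation):
# the relative change of a layer's weights from initialization. A small relative
# change suggests a linear (lazy/NTK-like) regime; a large change suggests a
# condensed (feature-learning) regime.

def relative_distance(theta_t: np.ndarray, theta_0: np.ndarray) -> float:
    """||theta(t) - theta(0)|| / ||theta(0)|| for one layer's weight matrix."""
    return np.linalg.norm(theta_t - theta_0) / np.linalg.norm(theta_0)

# Illustrative usage with a synthetic "trained" layer.
rng = np.random.default_rng(0)
theta_0 = rng.standard_normal((512, 512)) / np.sqrt(512)
theta_t = theta_0 + 1e-3 * rng.standard_normal((512, 512))  # stand-in for training
print(f"relative distance: {relative_distance(theta_t, theta_0):.4f}")
```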

Towards Understanding the Condensation of Neural Networks at Initial Training

no code implementations • 25 May 2021 • Hanxu Zhou, Qixuan Zhou, Tao Luo, Yaoyu Zhang, Zhi-Qin John Xu

Our theoretical analysis confirms the experiments in two cases: one is for activation functions of multiplicity one with arbitrary-dimensional input, which covers many common activation functions, and the other is for a layer with one-dimensional input and arbitrary multiplicity.
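
Condensation refers to the input weights of different neurons aligning along a small number of directions during early training; one simple way to visualize this is through pairwise cosine similarities of the weight vectors. The sketch below is a hypothetical illustration of that diagnostic, not the authors' code; the layer sizes, the two synthetic "condensed" directions, and the helper name cosine_similarity_matrix are assumptions.

```python
import numpy as np

# Hypothetical condensation diagnostic (an assumption, not the authors' code):
# if input weights condense, neurons align along a few directions, so pairwise
# cosine similarities of their weight vectors cluster near +/-1 rather than
# spreading out as they would at random initialization.

def cosine_similarity_matrix(W: np.ndarray) -> np.ndarray:
    """W has shape (num_neurons, input_dim); returns pairwise cosine similarities."""
    W_unit = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W_unit @ W_unit.T

# Synthetic example: weights drawn from two shared directions vs. random weights.
rng = np.random.default_rng(0)
directions = rng.standard_normal((2, 20))
condensed = directions[rng.integers(0, 2, size=100)] * rng.uniform(0.5, 2.0, (100, 1))
random_init = rng.standard_normal((100, 20))
print("condensed, mean |cos|:", np.abs(cosine_similarity_matrix(condensed)).mean().round(3))
print("random,    mean |cos|:", np.abs(cosine_similarity_matrix(random_init)).mean().round(3))
```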
