Search Results for author: Chaoyue Liu

Found 5 papers, 2 papers with code

On the linearity of large non-linear models: when and why the tangent kernel is constant

no code implementations • NeurIPS 2020 • Chaoyue Liu, Libin Zhu, Mikhail Belkin

We show that the transition to linearity of the model and, equivalently, constancy of the (neural) tangent kernel (NTK) result from the scaling properties of the norm of the Hessian matrix of the network as a function of the network width.
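The width dependence is easy to probe numerically. Below is a minimal NumPy sketch (an illustration written for this listing, not code from the paper): it builds a two-layer network f(x) = vᵀtanh(Wx)/√m, takes one gradient step on a single training point, and measures how much the empirical tangent kernel moves; the relative change shrinks as the width m grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
x1, x2 = rng.normal(size=d), rng.normal(size=d)   # probe inputs for the kernel
x_tr, y_tr = rng.normal(size=d), 1.0              # single training point

def param_grad(W, v, x):
    """Gradient of f(x) = v . tanh(W x) / sqrt(m) w.r.t. (W, v), flattened."""
    m = v.shape[0]
    h = np.tanh(W @ x)
    g_v = h / np.sqrt(m)
    g_W = ((v * (1.0 - h ** 2))[:, None] * x[None, :]) / np.sqrt(m)
    return np.concatenate([g_W.ravel(), g_v])

def gram(W, v):
    """2x2 empirical tangent kernel on the probe inputs x1, x2."""
    G = np.stack([param_grad(W, v, x1), param_grad(W, v, x2)])
    return G @ G.T

for m in (100, 1000, 10000):
    W, v = rng.normal(size=(m, d)), rng.normal(size=m)
    K0 = gram(W, v)
    f = v @ np.tanh(W @ x_tr) / np.sqrt(m)
    g = (f - y_tr) * param_grad(W, v, x_tr)       # grad of 0.5 * (f - y)^2
    W1 = W - g[: m * d].reshape(m, d)             # one GD step, lr = 1.0
    v1 = v - g[m * d :]
    K1 = gram(W1, v1)
    print(f"m={m:6d}  relative NTK change: "
          f"{np.linalg.norm(K1 - K0) / np.linalg.norm(K0):.2e}")
```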

Loss landscapes and optimization in over-parameterized non-linear systems and neural networks

no code implementations • 29 Feb 2020 • Chaoyue Liu, Libin Zhu, Mikhail Belkin

The success of deep learning is due, to a large extent, to the remarkable effectiveness of gradient-based optimization methods applied to large neural networks.

Accelerating SGD with momentum for over-parameterized learning

1 code implementation • ICLR 2020 • Chaoyue Liu, Mikhail Belkin

This is in contrast to the classical results in the deterministic scenario, where the same step size ensures accelerated convergence of Nesterov's method over optimal gradient descent.
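That deterministic baseline is the textbook one and is easy to reproduce. The NumPy sketch below is illustrative only (plain Nesterov on a quadratic, not the paper's stochastic algorithm): with the same step size 1/L, Nesterov's method far outruns gradient descent when the condition number is large.

```python
import numpy as np

rng = np.random.default_rng(0)
d, kappa = 50, 1000.0
eigs = np.linspace(1.0, kappa, d)   # spectrum of the quadratic: mu = 1, L = kappa
L = eigs.max()

def grad(x):
    return eigs * x                 # f(x) = 0.5 * sum_i eigs_i * x_i^2, minimum at 0

x0 = rng.normal(size=d)
x_gd = x0.copy()
x_nag, x_prev = x0.copy(), x0.copy()
beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)   # standard momentum parameter

for _ in range(500):
    x_gd = x_gd - grad(x_gd) / L                  # gradient descent, step 1/L
    y = x_nag + beta * (x_nag - x_prev)           # Nesterov lookahead
    x_prev, x_nag = x_nag, y - grad(y) / L        # same step size 1/L

print("GD       distance to optimum:", np.linalg.norm(x_gd))
print("Nesterov distance to optimum:", np.linalg.norm(x_nag))
```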

Parametrized Accelerated Methods Free of Condition Number

no code implementations • 28 Feb 2018 • Chaoyue Liu, Mikhail Belkin

Analyses of accelerated (momentum-based) gradient descent usually assume a bounded condition number to obtain exponential convergence rates.
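For context, the classical rates in question are the standard textbook ones (quoted here for orientation, not taken from this paper); both degrade as the condition number κ = L/μ grows, which motivates seeking parametrized methods whose guarantees do not hinge on a bound on κ.

```latex
% Standard rates for an L-smooth, mu-strongly convex f with kappa = L/mu
% (textbook results, quoted for context; C is an absolute constant):
\[
  \text{gradient descent:}\quad
  f(x_k) - f^\ast \le \left(1 - \tfrac{1}{\kappa}\right)^{k} \bigl(f(x_0) - f^\ast\bigr),
\]
\[
  \text{Nesterov's method:}\quad
  f(x_k) - f^\ast \le C \left(1 - \tfrac{1}{\sqrt{\kappa}}\right)^{k} \bigl(f(x_0) - f^\ast\bigr).
\]
```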

Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI

1 code implementation • Elsevier 2017 • Xin Yang, Chaoyue Liu, Zhiwei Wang, Jun Yang, Hung Le Min, Liang Wang, Kwang-Ting (Tim) Cheng

Each network is trained on images of a single modality in a weakly supervised manner: it is given a set of prostate images with image-level labels indicating only the presence of PCa, without any prior on lesion locations.

General Classification
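One common way to realize this kind of image-level supervision is multiple-instance-style max-pooling. The NumPy sketch below is a generic illustration of that recipe (an assumption made for exposition, not the authors' published pipeline): per-patch scores are pooled into a single image score, so the gradient of the image-level label flows through the highest-scoring patch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, feat_dim = 64, 128
w = np.zeros(feat_dim)          # linear patch scorer (stand-in for a CNN head)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(patches, y, w, lr=0.1):
    """One SGD step on BCE(image_label, sigmoid(max_i patch_score_i))."""
    scores = patches @ w                # one score per patch
    i = int(np.argmax(scores))          # max-pooling picks the top patch
    p = sigmoid(scores[i])              # predicted P(PCa present in image)
    grad_w = (p - y) * patches[i]       # chain rule through the max patch only
    return w - lr * grad_w

# toy image: random patch features, image-level label y=1 (PCa present)
patches = rng.normal(size=(n_patches, feat_dim))
w = step(patches, y=1.0, w=w)
```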
