Search Results for author: Alexander Shevchenko

Found 4 papers, 0 papers with code

Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth

no code implementations · 7 Feb 2024 · Kevin Kögler, Alexander Shevchenko, Hamed Hassani, Marco Mondelli

For the prototypical case of the 1-bit compression of sparse Gaussian data, we prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.

Data Compression · Denoising
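
To make the setting in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' code) of 1-bit compression of sparse Gaussian data: a shallow linear autoencoder with a sign bottleneck, trained by gradient descent through a straight-through estimator. All dimensions, the sparsity level, and the optimizer settings are illustrative assumptions.

```python
# Hypothetical sketch: 1-bit compression of sparse Gaussian data with a
# shallow autoencoder. A straight-through estimator stands in for the
# non-differentiable sign activation so plain gradient descent can train it.
import torch

torch.manual_seed(0)
d, m, n = 50, 20, 4096        # input dim, number of bits, samples (illustrative)
p = 0.1                       # sparsity level: P(coordinate is nonzero)

# Sparse Gaussian data: Bernoulli(p) mask times standard Gaussian values.
x = torch.randn(n, d) * (torch.rand(n, d) < p).float()

enc = torch.nn.Linear(d, m, bias=False)
dec = torch.nn.Linear(m, d, bias=False)
opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=0.05)

def sign_st(z):
    # sign() in the forward pass, identity gradient in the backward pass
    return z + (torch.sign(z) - z).detach()

for step in range(2000):
    opt.zero_grad()
    loss = ((dec(sign_st(enc(x))) - x) ** 2).mean()
    loss.backward()
    opt.step()

print(f"final reconstruction MSE: {loss.item():.4f}")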

Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks

no code implementations · 3 Nov 2021 · Alexander Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli

Understanding the properties of neural networks trained via stochastic gradient descent (SGD) is at the heart of the theory of deep learning.
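
The "piecewise linear solutions" in the title can be illustrated directly: a one-hidden-layer ReLU network on a scalar input is piecewise linear, with a kink wherever some neuron's pre-activation crosses zero. The sketch below (illustrative assumptions, not the paper's code) uses a mean-field 1/width output scaling and checks that the slope is constant between consecutive kinks.

```python
# Illustrative sketch: a wide one-hidden-layer ReLU network on a scalar input
# is piecewise linear, with a kink wherever w_i * x + b_i crosses zero.
import numpy as np

rng = np.random.default_rng(0)
width = 1000                                  # "wide" hidden layer
w = rng.normal(size=width)                    # first-layer weights
b = rng.normal(size=width)                    # first-layer biases
a = rng.normal(size=width) / width            # second layer, mean-field scaling

def f(x):
    # network output: sum_i a_i * ReLU(w_i * x + b_i)
    return np.maximum(w * x + b, 0.0) @ a

# Kinks sit at x = -b_i / w_i; between consecutive kinks f is exactly linear.
knots = np.sort(-b / w)
xs = np.linspace(knots[10], knots[11], 5)     # sample inside one linear piece
slopes = np.diff([f(x) for x in xs]) / np.diff(xs)
print("slopes within one piece:", slopes)     # all (numerically) equal
```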

Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks

no code implementations · ICML 2020 · Alexander Shevchenko, Marco Mondelli

In this paper, we shed light on this phenomenon: we show that the combination of stochastic gradient descent (SGD) and over-parameterization makes the landscape of multilayer neural networks approximately connected and thus more favorable to optimization.
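
A hedged sketch of the dropout-stability notion in the title: train a wide two-layer network with SGD, then zero out half of the hidden units and double the outgoing weights of the survivors; for a dropout-stable solution the loss should barely change. The architecture, toy data, and hyperparameters below are illustrative assumptions, not the paper's experiments.

```python
# Illustrative dropout-stability check for an over-parameterized network.
import torch

torch.manual_seed(0)
n, d, h = 512, 10, 2000                       # samples, input dim, wide hidden layer
x = torch.randn(n, d)
y = torch.sin(x.sum(dim=1, keepdim=True))     # toy regression target

model = torch.nn.Sequential(
    torch.nn.Linear(d, h), torch.nn.ReLU(), torch.nn.Linear(h, 1)
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for step in range(3000):
    idx = torch.randint(0, n, (64,))          # stochastic mini-batches
    opt.zero_grad()
    loss = ((model(x[idx]) - y[idx]) ** 2).mean()
    loss.backward()
    opt.step()

full_loss = ((model(x) - y) ** 2).mean().item()

# Drop half the hidden neurons and double the outgoing weights of the rest.
with torch.no_grad():
    keep = torch.rand(h) < 0.5
    model[2].weight[:, ~keep] = 0.0
    model[2].weight[:, keep] *= 2.0

dropped_loss = ((model(x) - y) ** 2).mean().item()
print(f"full loss: {full_loss:.4f}  dropout loss: {dropped_loss:.4f}")
```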
