Search Results for author: Brian Swenson

Found 4 papers, 0 papers with code

Distributed Gradient Methods for Nonconvex Optimization: Local and Global Convergence Guarantees

no code implementations23 Mar 2020 Brian Swenson, Soummya Kar, H. Vincent Poor, José M. F. Moura, Aaron Jaech

We discuss local minima convergence guarantees and explore the simple but critical role of the stable-manifold theorem in analyzing saddle-point avoidance.

Optimization and Control

Distributed Stochastic Gradient Descent: Nonconvexity, Nonsmoothness, and Convergence to Local Minima

no code implementations5 Mar 2020 Brian Swenson, Ryan Murray, Soummya Kar, H. Vincent Poor

In centralized settings, it is well known that stochastic gradient descent (SGD) avoids saddle points and converges to local minima in nonconvex problems.

Optimization and Control
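The saddle-avoidance claim in this abstract can be illustrated with a toy example (my own sketch, not code from the paper): on f(x, y) = x² − y², deterministic gradient descent started on the stable manifold {y = 0} converges to the saddle at the origin, while gradient noise pushes the iterate off that manifold and away along the unstable direction.

```python
import random

def grad(x, y):
    # f(x, y) = x**2 - y**2 has a saddle point at the origin;
    # its gradient is (2x, -2y).
    return 2 * x, -2 * y

def sgd(x, y, lr=0.1, noise=0.01, steps=500, seed=0):
    """Gradient descent with additive Gaussian gradient noise."""
    rng = random.Random(seed)
    for _ in range(steps):
        gx, gy = grad(x, y)
        # The noise perturbs the iterate off the stable manifold {y = 0},
        # after which the unstable y-direction repels it from the saddle.
        x -= lr * (gx + rng.gauss(0, noise))
        y -= lr * (gy + rng.gauss(0, noise))
    return x, y

# Started exactly on the stable manifold, noiseless descent would stay at
# the saddle; with noise, |y| grows geometrically and the iterate escapes.
```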

Clustering with Distributed Data

no code implementations1 Jan 2019 Soummya Kar, Brian Swenson

By appropriate choice of $\rho$, the set of generalized minima may be brought arbitrarily close to the set of Lloyd's minima.

Clustering
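The "Lloyd's minima" referenced in this abstract are fixed points of the classic Lloyd (k-means) iteration. A minimal one-dimensional sketch of that baseline iteration, with data and initial centers chosen purely for illustration (the paper's distributed, $\rho$-parametrized variant is not reproduced here):

```python
def lloyd(points, centers, iters=20):
    """One-dimensional Lloyd's (k-means) iteration: assign each point to
    its nearest center, then move each center to the mean of its cluster.
    A fixed point of this map is a 'Lloyd's minimum'."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda j: (p - centers[j]) ** 2)
            clusters[nearest].append(p)
        # Keep a center unchanged if its cluster is empty.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers
```

For example, `lloyd([0.0, 0.2, 10.0, 10.4], [0.1, 9.0])` converges to the cluster means 0.1 and 10.2.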
