Search Results for author: Ioannis Panageas

Found 15 papers, 0 papers with code

Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games

no code implementations 3 Jun 2021 Stefanos Leonardos, Will Overman, Ioannis Panageas, Georgios Piliouras

Counter-intuitively, insights from normal-form potential games do not carry over, as MPGs can include settings where the state games are zero-sum.
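
No code is listed, so the following is a minimal illustrative sketch (not the authors' implementation) of the independent projected policy gradient dynamics the paper analyzes, specialized to a single-state identical-interest game, the simplest potential game; the payoff matrix, step size, and starting strategies are arbitrary demo choices.

```python
# Minimal sketch: independent projected policy gradient ascent in a
# single-state potential game (identical-interest); all constants are
# illustrative, not from the paper.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.max(np.where(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0])
    return np.maximum(v + (1 - css[rho]) / (rho + 1), 0)

# Shared potential/payoff Phi(a1, a2): both players receive Phi.
Phi = np.array([[1.0, 0.0],
                [0.0, 2.0]])

x = np.array([0.6, 0.4])   # player 1's mixed strategy
y = np.array([0.7, 0.3])   # player 2's mixed strategy
eta = 0.1                  # step size

for _ in range(500):
    gx, gy = Phi @ y, Phi.T @ x        # gradients of the expected potential
    x = project_simplex(x + eta * gx)  # each player ascends independently
    y = project_simplex(y + eta * gy)

print(x, y)  # converges to a Nash equilibrium (here the pure profile (0,1),(0,1))
```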

Efficient Statistics for Sparse Graphical Models from Truncated Samples

no code implementations 17 Jun 2020 Arnab Bhattacharyya, Rathin Desai, Sai Ganesh Nagarajan, Ioannis Panageas

We show that $\mu$ and $\Sigma$ can be estimated with error $\epsilon$ in the Frobenius norm, using $\tilde{O}\left(\frac{\textrm{nz}(\Sigma^{-1})}{\epsilon^2}\right)$ samples from a truncated $\mathcal{N}(\mu,\Sigma)$ and having access to a membership oracle for $S$.
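
As a toy illustration of the estimation setting (not the paper's algorithm as stated), one can run stochastic gradient ascent on the truncated log-likelihood, calling the membership oracle inside a rejection sampler; everything below (one dimension, known unit variance, the choice of $S$, the step schedule) is an assumption for the demo.

```python
# Hedged sketch: SGD on the truncated Gaussian log-likelihood (1-D mean,
# known unit variance, truncation set S given only by a membership oracle).
import numpy as np

rng = np.random.default_rng(0)

def in_S(x):
    """Membership oracle for the truncation set S (here S = [0, inf))."""
    return x >= 0.0

def sample_truncated(mu):
    """Rejection-sample from N(mu, 1) conditioned on S via the oracle."""
    while True:
        t = rng.normal(mu, 1.0)
        if in_S(t):
            return t

true_mu = 1.0
# Truncated samples we observe: draw from N(true_mu, 1), keep those in S.
data = [x for x in rng.normal(true_mu, 1.0, 20000) if in_S(x)]

mu = 0.0    # initial estimate
eta = 0.05  # step size
for x in data:
    # Gradient of the truncated log-likelihood w.r.t. mu is x - E[t],
    # t ~ N(mu,1) conditioned on S; estimate E[t] with a single sample.
    mu += eta * (x - sample_truncated(mu))
    eta *= 0.9995  # simple decay

print(mu)  # should approach true_mu = 1.0 up to stochastic error
```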

Logistic-Regression with peer-group effects via inference in higher order Ising models

no code implementations 18 Mar 2020 Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas

In this work we study extensions of these to models with higher-order sufficient statistics, modeling behavior on a social network with peer-group effects.
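
A common route to fitting such models is maximum pseudo-likelihood; the sketch below is a hedged illustration (synthetic graph, hypothetical parameter names, not the paper's estimator) of logistic regression with a single pairwise peer effect, where each label's conditional law is logistic in its own covariates plus a neighborhood term.

```python
# Hedged sketch: pseudo-likelihood estimation for logistic regression with
# a peer-group (Ising) interaction: labels y_i in {-1,+1} depend on their
# covariates x_i and on their neighbors' labels. All names are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T           # symmetric adjacency, no self-loops

theta_true, beta_true = np.array([1.0, -0.5, 0.5]), 0.3

# Generate labels by Gibbs sampling from the model (short burn-in, demo only):
# P(y_i = 1 | rest) = sigmoid(2 * field_i), field_i = x_i.theta + beta*(A y)_i.
y = rng.choice([-1.0, 1.0], size=n)
for _ in range(2000):
    i = rng.integers(n)
    field = X[i] @ theta_true + beta_true * (A[i] @ y)
    y[i] = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * field)) else -1.0

# Maximize the log pseudo-likelihood sum_i log sigmoid(2 y_i field_i)
# by gradient ascent over (theta, beta).
theta, beta = np.zeros(d), 0.0
eta = 0.01
for _ in range(2000):
    field = X @ theta + beta * (A @ y)
    r = y * (1 - 1 / (1 + np.exp(-2 * y * field)))  # per-node residual weights
    theta += eta * (X.T @ (2 * r)) / n
    beta += eta * (2 * r @ (A @ y)) / n

print(theta, beta)  # rough estimates of theta_true and beta_true
```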

Better Depth-Width Trade-offs for Neural Networks through the lens of Dynamical Systems

no code implementations ICML 2020 Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas

The expressivity of neural networks as a function of their depth, width and type of activation units has been an important question in deep learning theory.

Convergence to Second-Order Stationarity for Non-negative Matrix Factorization: Provably and Concurrently

no code implementations 26 Feb 2020 Ioannis Panageas, Stratis Skoulakis, Antonios Varvitsiotis, Xiao Wang

Non-negative matrix factorization (NMF) is a fundamental non-convex optimization problem with numerous applications in Machine Learning (music analysis, document clustering, speech-source separation, etc.).
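
For concreteness, here is a hedged sketch of projected gradient dynamics on the NMF objective $\min_{W,H\ge 0}\|M-WH\|_F^2$; the dimensions, step size, and iteration count are arbitrary demo choices, not the paper's.

```python
# Minimal projected-gradient sketch for NMF, min_{W,H >= 0} ||M - W H||_F^2,
# the non-convex objective the paper studies (constants chosen ad hoc).
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 30, 20, 5
M = rng.random((m, r)) @ rng.random((r, n))  # a rank-r non-negative matrix

W, H = rng.random((m, r)), rng.random((r, n))
eta = 1e-3
for _ in range(5000):
    R = W @ H - M                             # residual
    W = np.maximum(W - eta * (R @ H.T), 0.0)  # projected gradient step on W
    H = np.maximum(H - eta * (W.T @ R), 0.0)  # projected gradient step on H

print(np.linalg.norm(M - W @ H))  # the residual should be close to zero
```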

Last iterate convergence in no-regret learning: constrained min-max optimization for convex-concave landscapes

no code implementations 17 Feb 2020 Qi Lei, Sai Ganesh Nagarajan, Ioannis Panageas, Xiao Wang

In a recent series of papers it has been established that variants of Gradient Descent/Ascent and Mirror Descent exhibit last iterate convergence in convex-concave zero-sum games.
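
As a concrete instance of such dynamics, here is a hedged sketch of Optimistic Multiplicative Weights Update on rock-paper-scissors, a constrained zero-sum game where plain MWU cycles but the optimistic last iterate is known to converge; the step size and horizon are demo choices.

```python
# Hedged sketch: Optimistic MWU on the zero-sum game min_x max_y x^T A y
# over probability simplices, the constrained setting of the paper.
import numpy as np

A = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])  # rock-paper-scissors payoff

def mwu_step(p, grad, eta):
    w = p * np.exp(eta * grad)
    return w / w.sum()

x = np.ones(3) / 3 + np.array([0.1, -0.05, -0.05])  # perturbed start
y = np.ones(3) / 3
eta = 0.1
gx_prev, gy_prev = -(A @ y), A.T @ x  # previous-iterate gradients
for _ in range(2000):
    gx, gy = -(A @ y), A.T @ x
    # optimistic step: use 2*(current gradient) - (previous gradient)
    x = mwu_step(x, 2 * gx - gx_prev, eta)
    y = mwu_step(y, 2 * gy - gy_prev, eta)
    gx_prev, gy_prev = gx, gy

print(x, y)  # the last iterate itself should approach (1/3, 1/3, 1/3)
```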

Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem

no code implementations ICLR 2020 Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas, Xiao Wang

Motivated by our observation that the triangle waves used in Telgarsky's work contain points of period 3, a period that is special in that it implies chaotic behavior by the celebrated Li-Yorke result, we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth.
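
The phenomenon is easy to see numerically: the tent (triangle) map is computed exactly by a width-2 ReLU layer, it has a genuine period-3 orbit, and its iterates (i.e., depth) exhibit sensitive dependence on initial conditions. A hedged sketch:

```python
# The tent map as a width-2 ReLU layer, its period-3 orbit, and chaos.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def tent(x):
    # the tent map on [0, 1], computed exactly by a width-2 ReLU layer
    return 2 * relu(x) - 4 * relu(x - 0.5)

# A genuine period-3 orbit of the tent map: 2/9 -> 4/9 -> 8/9 -> 2/9
x = 2 / 9
for _ in range(3):
    x = tent(x)
print(abs(x - 2 / 9))  # ~0 up to floating point: the orbit closes in 3 steps

# Sensitive dependence: two nearby inputs separate under deep composition.
a, b = 0.2, 0.2 + 1e-9
for _ in range(30):
    a, b = tent(a), tent(b)
print(abs(a - b))      # typically order one after ~30 compositions
```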

Regression from Dependent Observations

no code implementations 8 May 2019 Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas

The standard linear and logistic regression models assume that the response variables are independent, but share the same linear relationship to their corresponding vectors of covariates.

On the Analysis of EM for truncated mixtures of two Gaussians

no code implementations 19 Feb 2019 Sai Ganesh Nagarajan, Ioannis Panageas

Moreover, for $d>1$ we show EM almost surely converges to the true mean for any measurable set $S$ when the map of EM has only three fixed points, namely $-\vec{\mu}, \vec{0}, \vec{\mu}$ (the covariance matrix $\Sigma$ is known), and prove local convergence if there are more than three fixed points.
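
For intuition about the EM map in question, here is a sketch of the classical EM iteration for the balanced symmetric mixture $0.5\,\mathcal{N}(\mu,1)+0.5\,\mathcal{N}(-\mu,1)$ in one dimension, without truncation (the truncated version the paper analyzes adds a correction term that is omitted here); the population version of this map has exactly the fixed points $-\mu$, $0$, $\mu$.

```python
# Hedged sketch: EM for a balanced mixture 0.5 N(mu,1) + 0.5 N(-mu,1) in 1-D.
# Truncation (the paper's setting) would modify the M-step; omitted here.
import numpy as np

rng = np.random.default_rng(3)
true_mu, n = 1.5, 5000
signs = rng.choice([-1.0, 1.0], size=n)
x = signs * true_mu + rng.normal(size=n)  # mixture samples (full support)

mu = 0.5  # nonzero start: 0 is itself a fixed point of the EM map
for _ in range(100):
    # E-step: posterior weight of the +mu component is sigmoid(2 mu x);
    # M-step collapses to the classical update mu <- mean(x * tanh(mu x)).
    mu = np.mean(x * np.tanh(mu * x))

print(mu)  # approaches +true_mu (or -true_mu from a negative start)
```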

Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization

no code implementations 11 Jul 2018 Constantinos Daskalakis, Ioannis Panageas

Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent work of Daskalakis et al. [DISZ17] and follow-up work of Liang and Stokes [LiangS18] have established that a variant of the widely used Gradient Descent/Ascent procedure, called "Optimistic Gradient Descent/Ascent (OGDA)", exhibits last-iterate convergence to saddle points in unconstrained convex-concave min-max optimization problems.
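
The contrast is easy to reproduce on the bilinear toy problem $f(x,y) = xy$ (a standard example, not taken from the paper's experiments): plain GDA spirals away from the saddle point, while the OGDA last iterate converges to it.

```python
# GDA vs. Optimistic GDA on min_x max_y f(x,y) = x*y, saddle point (0,0).
import numpy as np

eta = 0.1

# Plain Gradient Descent/Ascent: spirals outward on bilinear problems.
x, y = 1.0, 1.0
for _ in range(200):
    x, y = x - eta * y, y + eta * x
print(abs(x), abs(y))  # grows: GDA diverges here

# Optimistic GDA: step with 2*(current gradient) - (previous gradient).
x, y = 1.0, 1.0
gx_prev, gy_prev = y, x  # grad_x f = y, grad_y f = x
for _ in range(200):
    gx, gy = y, x
    x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
print(abs(x), abs(y))  # shrinks toward the saddle point (0, 0)
```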

The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization

no code implementations NeurIPS 2018 Constantinos Daskalakis, Ioannis Panageas

Motivated by applications in Optimization, Game Theory, and the training of Generative Adversarial Networks, the convergence properties of first order methods in min-max problems have received extensive study.

Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos

no code implementations NeurIPS 2017 Gerasimos Palaiopanos, Ioannis Panageas, Georgios Piliouras

Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action $\gamma$ is multiplied by $(1-\epsilon)^{C(\gamma)}$, even for the simplest case of two-agent, two-strategy load balancing games, where such dynamics can provably lead to limit cycles or even chaotic behavior.
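
A hedged simulation of the dichotomy in the simplest instance the excerpt mentions (two agents, two identical links, linear cost equal to the load; all constants are demo choices): the linear MWU variant converges to the mixed equilibrium, while the exponential variant with an aggressive $\epsilon$ can settle into a period-2 limit cycle.

```python
# Two MWU variants in a two-agent, two-link load balancing game with
# linear cost c(load) = load. Constants are illustrative demo choices.
import numpy as np

def expected_costs(p_other):
    # an agent's expected cost on link i: itself (1) plus the probability
    # that the other agent also chooses link i
    return 1.0 + p_other

def linear(p, c, eps):       # multiply by (1 - eps*cost): convergent variant
    w = p * (1 - eps * c)
    return w / w.sum()

def exponential(p, c, eps):  # multiply by (1-eps)^cost: can cycle
    w = p * (1 - eps) ** c
    return w / w.sum()

def run(update, eps, steps=500):
    p = q = np.array([0.7, 0.3])  # symmetric start keeps p = q throughout
    for _ in range(steps):
        cp, cq = expected_costs(q), expected_costs(p)
        p, q = update(p, cp, eps), update(q, cq, eps)
    return p, update(p, expected_costs(q), eps)  # two consecutive iterates

print(run(linear, 0.4))        # both iterates near (0.5, 0.5): convergence
print(run(exponential, 0.99))  # two distinct points: a period-2 limit cycle
```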

First-order Methods Almost Always Avoid Saddle Points

no code implementations 20 Oct 2017 Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, Benjamin Recht

We establish that first-order methods avoid saddle points for almost all initializations.
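
A small illustration of the statement (the toy function is chosen for the demo, not taken from the paper): gradient descent on a strict-saddle function escapes the saddle from generic initializations, and only the measure-zero stable manifold converges to it.

```python
# Gradient descent on f(x,y) = (x^2-1)^2/4 + y^2/2: saddle at (0,0),
# minima at (+-1, 0). Generic starts escape the saddle; the measure-zero
# stable manifold {x = 0} converges to it.
import numpy as np

def grad(v):
    x, y = v
    return np.array([x**3 - x, y])

def gd(v0, eta=0.1, steps=1000):
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        v = v - eta * grad(v)
    return v

rng = np.random.default_rng(4)
print(gd(rng.normal(size=2)))  # lands at (+1, 0) or (-1, 0), a minimizer
print(gd([0.0, 0.7]))          # stable manifold: converges to the saddle (0,0)
```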

Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions

no code implementations 2 May 2016 Ioannis Panageas, Georgios Piliouras

Given a non-convex twice differentiable cost function $f$, we prove that the set of initial conditions so that gradient descent converges to saddle points where $\nabla^2 f$ has at least one strictly negative eigenvalue has (Lebesgue) measure zero, even for cost functions $f$ with non-isolated critical points, answering an open question in [Lee, Simchowitz, Jordan, Recht, COLT 2016].
