1 code implementation • 11 Oct 2022 • Vaggos Chatziafratis, Ioannis Panageas, Clayton Sanford, Stelios Andrew Stavroulakis

Recurrent Neural Networks (RNNs) frequently exhibit complicated dynamics, and their sensitivity to the initialization process often renders them notoriously hard to train.

no code implementations • 3 Aug 2022 • Fivos Kalogiannis, Ioannis Anagnostides, Ioannis Panageas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Vaggos Chatziafratis, Stelios Stavroulakis

In this work, we depart from those prior results by investigating infinite-horizon \emph{adversarial team Markov games}, a natural and well-motivated class of games in which a team of identically-interested players -- in the absence of any explicit coordination or communication -- is competing against an adversarial player.

no code implementations • 25 Apr 2022 • Yi Feng, Ioannis Panageas, Xiao Wang

We consider non-convex optimization problems whose constraint set is a product of simplices.
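
As an illustration of the constraint structure (this is not the paper's algorithm; the objective, step size, and function names below are my own), an exponentiated-gradient / mirror-descent step keeps each block of variables on its simplex by construction:

```python
import numpy as np

def exp_grad_step(x, grad, eta=0.1):
    """One exponentiated-gradient (entropic mirror descent) step.
    The multiplicative update followed by renormalization keeps the
    iterate on the probability simplex automatically."""
    w = x * np.exp(-eta * grad)
    return w / w.sum()

# Toy non-convex (bilinear) objective over a product of two simplices:
# f(p, q) = p @ A @ q, minimized jointly in p and q.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p = np.array([0.5, 0.5])
q = np.array([0.7, 0.3])
for _ in range(100):
    p = exp_grad_step(p, A @ q)      # gradient of f in p is A @ q
    q = exp_grad_step(q, A.T @ p)    # gradient of f in q is A.T @ p
```

Because the update is multiplicative, no explicit projection back onto the simplices is ever needed.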

no code implementations • 7 Nov 2021 • Fivos Kalogiannis, Ioannis Panageas, Emmanouil-Vasileios Vlatakis-Gkaragkounis

Motivated by recent advances in both theoretical and applied aspects of multiplayer games, spanning from e-sports to multi-agent generative adversarial networks, we focus on min-max optimization in team zero-sum games.

no code implementations • 20 Oct 2021 • Roy Fox, Stephen Mcaleer, Will Overman, Ioannis Panageas

Recent results have shown that independent policy gradient converges in Markov Potential Games (MPGs), but it was not known whether Independent Natural Policy Gradient converges in MPGs as well.

no code implementations • NeurIPS 2021 • Stefanos Leonardos, Will Overman, Ioannis Panageas, Georgios Piliouras

Counter-intuitively, insights from normal-form potential games do not carry over as MPGs can consist of settings where state-games can be zero-sum games.

no code implementations • NeurIPS 2020 • Xiao Wang, Qi Lei, Ioannis Panageas

Sampling is a fundamental task with numerous applications in Machine Learning.

no code implementations • 17 Jun 2020 • Arnab Bhattacharyya, Rathin Desai, Sai Ganesh Nagarajan, Ioannis Panageas

We show that ${\mu}$ and ${\Sigma}$ can be estimated with error $\epsilon$ in the Frobenius norm, using $\tilde{O}\left(\frac{\textrm{nz}({\Sigma}^{-1})}{\epsilon^2}\right)$ samples from a truncated $\mathcal{N}({\mu},{\Sigma})$ and having access to a membership oracle for $S$.
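
A minimal sketch of the setting (not the paper's estimator: the survival set, parameters, and sample count below are illustrative assumptions of mine): draw from the truncated Gaussian by rejection against a membership oracle for $S$, then form naive plug-in estimates — whose truncation bias is exactly what the paper's method corrects.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = np.array([1.0, -1.0])
Sigma_true = np.array([[1.0, 0.3], [0.3, 2.0]])

def in_S(x):
    """Membership oracle for the survival set S (here: a half-space)."""
    return x[0] + x[1] > 0.0

# Sample from the truncated Gaussian by rejection: draw from the full
# Gaussian and keep only the points the oracle accepts.
samples = []
while len(samples) < 5000:
    x = rng.multivariate_normal(mu_true, Sigma_true)
    if in_S(x):
        samples.append(x)
samples = np.array(samples)

# Naive plug-in estimates; both are biased by the truncation.
mu_hat = samples.mean(axis=0)
Sigma_hat = np.cov(samples.T)
```

On this half-space example the plug-in mean is visibly shifted into $S$ relative to the true mean.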

no code implementations • 18 Mar 2020 • Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas

In this work we study extensions of these to models with higher-order sufficient statistics, modeling behavior on a social network with peer-group effects.

no code implementations • ICML 2020 • Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas

The expressivity of neural networks as a function of their depth, width and type of activation units has been an important question in deep learning theory.

no code implementations • 26 Feb 2020 • Ioannis Panageas, Stratis Skoulakis, Antonios Varvitsiotis, Xiao Wang

Non-negative matrix factorization (NMF) is a fundamental non-convex optimization problem with numerous applications in Machine Learning (music analysis, document clustering, speech-source separation, etc.).
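
For concreteness, here is a sketch of the classic Lee–Seung multiplicative-update heuristic for NMF (a standard baseline, not the algorithm analyzed in the paper; function and variable names are mine):

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H with W, H >= 0.
    The ratio-form updates preserve non-negativity at every step."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-10  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((6, 5)))
W, H = nmf_multiplicative(V, r=3)
err = np.linalg.norm(V - W @ H)
```

Each update multiplies the current factor entrywise by a non-negative ratio, so no projection is needed to maintain the constraints.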

no code implementations • 17 Feb 2020 • Qi Lei, Sai Ganesh Nagarajan, Ioannis Panageas, Xiao Wang

In a recent series of papers it has been established that variants of Gradient Descent/Ascent and Mirror Descent exhibit last iterate convergence in convex-concave zero-sum games.

no code implementations • ICLR 2020 • Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas, Xiao Wang

Motivated by our observation that the triangle waves used in Telgarsky's work contain points of period 3 (a period that is special in that it implies chaotic behavior, by the celebrated result of Li and Yorke), we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth.
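
Both phenomena can be checked numerically (my own illustration, not code from the paper): the triangle (tent) map has a genuine period-3 point at $2/7$, and its $k$-fold composition has $2^k$ monotone pieces — the depth-versus-oscillations tradeoff that such lower bounds exploit.

```python
import numpy as np

def tent(x):
    """Triangle wave on [0, 1] with peak at 1/2."""
    return 1.0 - 2.0 * np.abs(x - 0.5)

def iterate(x, k):
    """k-fold composition of the tent map."""
    for _ in range(k):
        x = tent(x)
    return x

# 2/7 -> 4/7 -> 6/7 -> 2/7 is a period-3 orbit of the tent map.
orbit_error = abs(iterate(2.0 / 7.0, 3) - 2.0 / 7.0)

# Count monotone pieces of the k-fold composition on a fine grid:
# the count doubles with each extra level of depth.
xs = np.linspace(0.0, 1.0, 100001)
pieces = {}
for k in range(1, 6):
    d = np.diff(iterate(xs, k))
    flips = np.count_nonzero(np.sign(d[:-1]) != np.sign(d[1:]))
    pieces[k] = flips + 1
```

A shallow network thus needs width exponential in $k$ to match what a depth-$k$ composition produces for free.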

no code implementations • 8 May 2019 • Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas

The standard linear and logistic regression models assume that the response variables are independent, but share the same linear relationship to their corresponding vectors of covariates.

no code implementations • 19 Feb 2019 • Sai Ganesh Nagarajan, Ioannis Panageas

Moreover, for $d>1$ we show EM almost surely converges to the true mean for any measurable set $S$ when the map of EM has only three fixed points, namely $-\vec{\mu}, \vec{0}, \vec{\mu}$ (covariance matrix $\vec{\Sigma}$ is known), and prove local convergence if there are more than three fixed points.
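
To make the setting concrete, here is a one-dimensional sketch (the paper works in $d$ dimensions; the data, starting point, and sample size are my choices) of EM for the symmetric mixture $\tfrac12\mathcal{N}(\mu,1)+\tfrac12\mathcal{N}(-\mu,1)$, where the iteration collapses to a tanh-weighted mean and $\mu=0$ is the bad fixed point:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = 2.0
n = 20000
signs = rng.choice([-1.0, 1.0], size=n)
X = signs * mu_true + rng.standard_normal(n)  # 0.5 N(mu,1) + 0.5 N(-mu,1)

mu = 0.5  # any nonzero start; mu = 0 is the repelling bad fixed point
for _ in range(50):
    w = np.tanh(mu * X)   # E-step: expected component "sign" per sample
    mu = np.mean(w * X)   # M-step: responsibility-weighted mean
```

Starting from $\mu=0.5$ the iterates move toward one of the symmetric fixed points $\pm\mu$, consistent with the global-convergence statement above.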

no code implementations • 11 Jul 2018 • Constantinos Daskalakis, Ioannis Panageas

Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent work of Daskalakis et al. \cite{DISZ17} and follow-up work of Liang and Stokes \cite{LiangS18} have established that a variant of the widely used Gradient Descent/Ascent procedure, called "Optimistic Gradient Descent/Ascent (OGDA)", exhibits last-iterate convergence to saddle points in {\em unconstrained} convex-concave min-max optimization problems.
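
The contrast can be sketched on the toy bilinear problem $\min_x \max_y\, xy$ (an illustration consistent with the setting, not the paper's analysis; step size and iteration counts are arbitrary choices of mine): plain GDA spirals away from the saddle point $(0,0)$, while OGDA's "optimistic" correction pulls the last iterate in.

```python
import numpy as np

# Toy bilinear min-max problem f(x, y) = x * y; unique saddle point (0, 0).
# grad_x f = y, grad_y f = x.

def gda(x, y, eta=0.1, steps=500):
    """Plain Gradient Descent/Ascent: the last iterate spirals outward."""
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def ogda(x, y, eta=0.1, steps=500):
    """Optimistic GDA: steps along 2*(current grad) - (previous grad)."""
    px, py = y, x  # previous gradients, initialized to the current ones
    for _ in range(steps):
        gx, gy = y, x
        x, y = x - eta * (2 * gx - px), y + eta * (2 * gy - py)
        px, py = gx, gy
    return x, y

dist_gda = np.hypot(*gda(1.0, 1.0))    # distance to the saddle after GDA
dist_ogda = np.hypot(*ogda(1.0, 1.0))  # distance to the saddle after OGDA
```

Running both from the same start shows the last iterate of OGDA close to the saddle while GDA's has drifted far away.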

no code implementations • NeurIPS 2018 • Constantinos Daskalakis, Ioannis Panageas

Motivated by applications in Optimization, Game Theory, and the training of Generative Adversarial Networks, the convergence properties of first order methods in min-max problems have received extensive study.

no code implementations • NeurIPS 2017 • Gerasimos Palaiopanos, Ioannis Panageas, Georgios Piliouras

Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where, at each step, the probability assigned to action $\gamma$ is multiplied by $(1 -\epsilon)^{C(\gamma)}$: even in the simplest case of two-agent, two-strategy load balancing games, such dynamics can provably lead to limit cycles or even chaotic behavior.
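
The two update rules being contrasted can be sketched as follows (a toy two-link load-balancing simulation of my own; parameters are arbitrary). The linear rule multiplies the probability of action $\gamma$ by $(1-\epsilon\, C(\gamma))$ — the variant whose convergence the result above concerns — while the variant in question uses $(1-\epsilon)^{C(\gamma)}$:

```python
import numpy as np

def mwu_linear(p, costs, eps):
    """Linear MWU: p_gamma scaled by (1 - eps * C(gamma)), renormalized."""
    w = p * (1.0 - eps * costs)
    return w / w.sum()

def mwu_exp(p, costs, eps):
    """Exponential variant: p_gamma scaled by (1 - eps) ** C(gamma)."""
    w = p * (1.0 - eps) ** costs
    return w / w.sum()

# Two symmetric agents on two links; the cost of a link is 1 plus the
# expected load the other agent puts on it. Under the linear rule the
# symmetric dynamics settle at the mixed equilibrium (1/2, 1/2).
p = np.array([0.8, 0.2])
q = p.copy()
for _ in range(2000):
    p, q = mwu_linear(p, 1.0 + q, 0.1), mwu_linear(q, 1.0 + p, 0.1)

# The exponential rule applied to the same kind of cost vector.
r = mwu_exp(np.array([0.8, 0.2]), np.array([1.0, 2.0]), 0.1)
```

Both rules keep the iterate on the simplex; the difference is only in how the cost enters the multiplier, which is what separates convergence from cycling in the result above.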

no code implementations • 20 Oct 2017 • Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, Benjamin Recht

We establish that first-order methods avoid saddle points for almost all initializations.
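
A small experiment illustrating the statement (my own construction, not from the paper): on $f(x,y)=x^2/2+y^4/4-y^2/2$, the origin is a strict saddle and $(0,\pm 1)$ are the minima; gradient descent from random starts reaches a minimum essentially always, since the saddle's stable set (the line $y=0$) has measure zero.

```python
import numpy as np

def grad(v):
    """Gradient of f(x, y) = x^2/2 + y^4/4 - y^2/2.
    Critical points: strict saddle (0, 0); minima (0, 1) and (0, -1)."""
    x, y = v
    return np.array([x, y**3 - y])

rng = np.random.default_rng(0)
eta = 0.1
endpoints = []
for _ in range(20):
    v = rng.uniform(-2.0, 2.0, size=2)  # random initialization
    for _ in range(1000):
        v = v - eta * grad(v)
    endpoints.append(v)
endpoints = np.array(endpoints)
```

Every run ends at one of the two minima rather than the saddle: any nonzero $y$-component is expanded by the negative-curvature direction until the iterate falls into a basin of attraction.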

no code implementations • 2 May 2016 • Ioannis Panageas, Georgios Piliouras

Given a non-convex twice differentiable cost function f, we prove that the set of initial conditions so that gradient descent converges to saddle points where $\nabla^2 f$ has at least one strictly negative eigenvalue has (Lebesgue) measure zero, even for cost functions f with non-isolated critical points, answering an open question in [Lee, Simchowitz, Jordan, Recht, COLT2016].
