Search Results for author: Georgios Piliouras

Found 56 papers, 13 papers with code

Scalable AI Safety via Doubly-Efficient Debate

1 code implementation • 23 Nov 2023 • Jonah Brown-Cohen, Geoffrey Irving, Georgios Piliouras

The emergence of pre-trained AI systems with powerful capabilities across a diverse and ever-increasing set of complex domains has raised a critical challenge for AI safety, as tasks can become too complicated for humans to judge directly.

A Quadratic Speedup in Finding Nash Equilibria of Quantum Zero-Sum Games

no code implementations • 17 Nov 2023 • Francisca Vasconcelos, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Panayotis Mertikopoulos, Georgios Piliouras, Michael I. Jordan

In 2008, Jain and Watrous proposed the first classical algorithm for computing equilibria in quantum zero-sum games, using the Matrix Multiplicative Weight Updates (MMWU) method to reach $\epsilon$-Nash equilibria in the $4^d$-dimensional spectraplex within $\mathcal{O}(d/\epsilon^2)$ iterations.
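For readers unfamiliar with the method mentioned in the snippet, here is a minimal NumPy sketch of Matrix Multiplicative Weight Updates over the spectraplex (trace-one positive semidefinite matrices). The step size, loss matrices, and helper names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def herm_exp(A):
    # matrix exponential of a Hermitian matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.conj().T

def mmwu(loss_matrices, eta=0.1):
    """Matrix Multiplicative Weight Updates: the iterate is
    rho_t ∝ exp(-eta * sum of past loss matrices), normalized to trace one
    so it stays on the spectraplex."""
    d = loss_matrices[0].shape[0]
    cumulative = np.zeros((d, d))
    iterates = []
    for L in loss_matrices:
        rho = herm_exp(-eta * cumulative)
        rho /= np.trace(rho)          # trace-one normalization
        iterates.append(rho)
        cumulative = cumulative + L   # accumulate observed losses
    return iterates
```

The first iterate is the maximally mixed state $I/d$, and each update exponentially down-weights the eigendirections that have accumulated the most loss.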

Stability of Multi-Agent Learning: Convergence in Network Games with Many Players

no code implementations • 26 Jul 2023 • Aamal Hussain, Dan Leonte, Francesco Belardinelli, Georgios Piliouras

The behaviour of multi-agent learning in many player games has been shown to display complex dynamics outside of restrictive examples such as network zero-sum games.


Discovering How Agents Learn Using Few Data

no code implementations • 13 Jul 2023 • Iosif Sakos, Antonios Varvitsiotis, Georgios Piliouras

In this work, we propose a theoretical and algorithmic framework for real-time identification of the learning dynamics that govern agent behavior using a short burst of a single system trajectory.

Decision Making

Multiplicative Updates for Online Convex Optimization over Symmetric Cones

no code implementations • 6 Jul 2023 • Ilayda Canyakmaz, Wayne Lin, Georgios Piliouras, Antonios Varvitsiotis

We study online convex optimization where the possible actions are trace-one elements in a symmetric cone, generalizing the extensively studied experts setup and its quantum counterpart.

Chaos persists in large-scale multi-agent learning despite adaptive learning rates

no code implementations • 1 Jun 2023 • Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Georgios Piliouras

Although such techniques are known to allow for improved convergence guarantees in small games, it has been much harder to analyze them in more relevant settings with large populations of agents.

Asymptotic Convergence and Performance of Multi-Agent Q-Learning Dynamics

no code implementations • 23 Jan 2023 • Aamal Abbas Hussain, Francesco Belardinelli, Georgios Piliouras

Achieving convergence of multiple learning agents in general $N$-player games is imperative for the development of safe and reliable machine learning (ML) algorithms and their application to autonomous systems.


Min-Max Optimization Made Simple: Approximating the Proximal Point Method via Contraction Maps

no code implementations • 10 Jan 2023 • Volkan Cevher, Georgios Piliouras, Ryann Sim, Stratis Skoulakis

In this paper we present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems while requiring a simple and intuitive analysis.

Learning Correlated Equilibria in Mean-Field Games

no code implementations • 22 Aug 2022 • Paul Muller, Romuald Elie, Mark Rowland, Mathieu Lauriere, Julien Perolat, Sarah Perrin, Matthieu Geist, Georgios Piliouras, Olivier Pietquin, Karl Tuyls

The designs of many large-scale systems today, from traffic routing environments to smart grids, rely on game-theoretic equilibrium concepts.

Alternating Mirror Descent for Constrained Min-Max Games

no code implementations • 8 Jun 2022 • Andre Wibisono, Molei Tao, Georgios Piliouras

In this paper we study two-player bilinear zero-sum games with constrained strategy spaces.

Nash, Conley, and Computation: Impossibility and Incompleteness in Game Dynamics

no code implementations • 26 Mar 2022 • Jason Milionis, Christos Papadimitriou, Georgios Piliouras, Kelly Spendlove

We also prove a stronger result for $\epsilon$-approximate Nash equilibria: there are games such that no game dynamics can converge (in an appropriate sense) to $\epsilon$-Nash equilibria, and in fact the set of such games has positive measure.

Scalable Deep Reinforcement Learning Algorithms for Mean Field Games

no code implementations • 22 Mar 2022 • Mathieu Laurière, Sarah Perrin, Sertan Girgin, Paul Muller, Ayush Jain, Theophile Cabannes, Georgios Piliouras, Julien Pérolat, Romuald Élie, Olivier Pietquin, Matthieu Geist

One limiting factor to further scale up using RL is that existing algorithms to solve MFGs require the mixing of approximated quantities such as strategies or $q$-values.

Reinforcement Learning (RL)

No-Regret Learning in Games is Turing Complete

no code implementations • 24 Feb 2022 • Gabriel P. Andrade, Rafael Frongillo, Georgios Piliouras

Games are natural models for multi-agent machine learning settings, such as generative adversarial networks (GANs).

Multi-agent Performative Prediction: From Global Stability and Optimality to Chaos

no code implementations • 25 Jan 2022 • Georgios Piliouras, Fang-Yi Yu

The recent framework of performative prediction is aimed at capturing settings where predictions influence the target/outcome they want to predict.

Beyond Time-Average Convergence: Near-Optimal Uncoupled Online Learning via Clairvoyant Multiplicative Weights Update

no code implementations • 29 Nov 2021 • Georgios Piliouras, Ryann Sim, Stratis Skoulakis

This implies that the CMWU dynamics converge with rate $O(nV \log m \log T / T)$ to a Coarse Correlated Equilibrium.

Online Learning in Periodic Zero-Sum Games

no code implementations • NeurIPS 2021 • Tanner Fiez, Ryann Sim, Stratis Skoulakis, Georgios Piliouras, Lillian Ratliff

Classical learning results build on this theorem to show that online no-regret dynamics converge to an equilibrium in a time-average sense in zero-sum games.

Generalized Natural Gradient Flows in Hidden Convex-Concave Games and GANs

no code implementations • ICLR 2022 • Andjela Mladenovic, Iosif Sakos, Gauthier Gidel, Georgios Piliouras

In the case of Fisher information geometry, we provide a complete picture of the dynamics in an interesting special setting of team competition via invariant function analysis.

Constants of Motion: The Antidote to Chaos in Optimization and Game Dynamics

no code implementations • 8 Sep 2021 • Georgios Piliouras, Xiao Wang

Several recent works in online optimization and game dynamics have established strong negative complexity results including the formal emergence of instability and chaos even in small such settings, e.g., $2\times 2$ games.

Evolutionary Dynamics and $\Phi$-Regret Minimization in Games

no code implementations • 28 Jun 2021 • Georgios Piliouras, Mark Rowland, Shayegan Omidshafiei, Romuald Elie, Daniel Hennes, Jerome Connor, Karl Tuyls

Importantly, $\Phi$-regret enables learning agents to consider deviations from and to mixed strategies, generalizing several existing notions of regret such as external, internal, and swap regret, and thus broadening the insights gained from regret-based analysis of learning algorithms.
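As a concrete illustration of how $\Phi$-regret subsumes other regret notions, the sketch below computes the $\Phi$-regret of a play sequence for two choices of $\Phi$: constant maps (recovering external regret) and all action-to-action maps (recovering swap regret). The function names and toy data are assumptions for illustration, not the paper's code:

```python
from itertools import product

def phi_regret(actions, losses, phis):
    """Phi-regret of a play sequence: the best hindsight improvement from
    applying a fixed transformation phi (a map from actions to actions)."""
    incurred = sum(l[a] for a, l in zip(actions, losses))
    best_dev = min(sum(l[phi[a]] for a, l in zip(actions, losses)) for phi in phis)
    return incurred - best_dev

def external_phis(n):
    # constant transformations: always deviate to one fixed action
    return [[j] * n for j in range(n)]

def swap_phis(n):
    # all n^n maps from actions to actions (fine for tiny n)
    return [list(p) for p in product(range(n), repeat=n)]
```

Since constant maps are a subset of all maps, the swap regret of any sequence is at least its external regret, matching the "generalizing" claim in the abstract.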

Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality

no code implementations • NeurIPS 2021 • Stefanos Leonardos, Georgios Piliouras, Kelly Spendlove

The interplay between exploration and exploitation in competitive multi-agent learning is still far from being well understood.


Online Optimization in Games via Control Theory: Connecting Regret, Passivity and Poincaré Recurrence

no code implementations • 9 Jun 2021 • Yun Kuen Cheung, Georgios Piliouras

Passivity is a fundamental concept in control theory, which abstracts energy conservation and dissipation in physical systems.

Efficient Online Learning for Dynamic k-Clustering

no code implementations • 8 Jun 2021 • Dimitris Fotakis, Georgios Piliouras, Stratis Skoulakis

We study dynamic clustering problems from the perspective of online learning.


Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games

1 code implementation • NeurIPS 2021 • Stefanos Leonardos, Will Overman, Ioannis Panageas, Georgios Piliouras

Counter-intuitively, insights from normal-form potential games do not carry over, as MPGs can include settings in which the state-games are zero-sum.

Learning in Matrix Games can be Arbitrarily Complex

no code implementations • 5 Mar 2021 • Gabriel P. Andrade, Rafael Frongillo, Georgios Piliouras

In this paper we show that, in a strong sense, this dynamic complexity is inherent to games.

BIG-bench Machine Learning

Scaling up Mean Field Games with Online Mirror Descent

1 code implementation • 28 Feb 2021 • Julien Perolat, Sarah Perrin, Romuald Elie, Mathieu Laurière, Georgios Piliouras, Matthieu Geist, Karl Tuyls, Olivier Pietquin

We address scaling up equilibrium computation in Mean Field Games (MFGs) using Online Mirror Descent (OMD).

Follow-the-Regularized-Leader Routes to Chaos in Routing Games

no code implementations • 16 Feb 2021 • Jakub Bielawski, Thiparat Chotibut, Fryderyk Falniowski, Grzegorz Kosiorowski, Michał Misiurewicz, Georgios Piliouras

We establish that, even in simple linear non-atomic congestion games with two parallel links and any fixed learning rate, unless the game is fully symmetric, increasing the population size or the scale of costs causes learning dynamics to become unstable and eventually chaotic, in the sense of Li-Yorke and positive topological entropy.

Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent

no code implementations • NeurIPS 2021 • Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras

Inspired by this, we study standard gradient descent ascent (GDA) dynamics in a specific class of non-convex non-concave zero-sum games, that we call hidden zero-sum games.

Evolutionary Game Theory Squared: Evolving Agents in Endogenously Evolving Zero-Sum Games

1 code implementation • 15 Dec 2020 • Stratis Skoulakis, Tanner Fiez, Ryann Sim, Georgios Piliouras, Lillian Ratliff

The predominant paradigm in evolutionary game theory, and more generally in online learning in games, is based on a clear distinction between a population of dynamic agents and the fixed, static game in which they interact.

Exploration-Exploitation in Multi-Agent Learning: Catastrophe Theory Meets Game Theory

no code implementations • 5 Dec 2020 • Stefanos Leonardos, Georgios Piliouras

Exploration-exploitation is a powerful and practical tool in multi-agent learning (MAL), however, its effects are far from understood.

Q-Learning • Computer Science and Game Theory • Multiagent Systems • Dynamical Systems • MSC: 93A16, 91A26, 91A68, 58K35 • ACM: G.3; J.4; F.2.2

Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent

1 code implementation • NeurIPS 2020 • Dimitris Fotakis, Thanasis Lianeas, Georgios Piliouras, Stratis Skoulakis

We consider a natural model of online preference aggregation, where sets of preferred items $R_1, R_2, \ldots, R_t$ along with a demand for $k_t$ items in each $R_t$, appear online.

Dimensionality Reduction

No-regret learning and mixed Nash equilibria: They do not mix

no code implementations • NeurIPS 2020 • Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Thanasis Lianeas, Panayotis Mertikopoulos, Georgios Piliouras

Understanding the behavior of no-regret dynamics in general $N$-player games is a fundamental question in online learning and game theory.

Chaos, Extremism and Optimism: Volume Analysis of Learning in Games

no code implementations • NeurIPS 2020 • Yun Kuen Cheung, Georgios Piliouras

We present volume analyses of Multiplicative Weights Updates (MWU) and Optimistic Multiplicative Weights Updates (OMWU) in zero-sum as well as coordination games.

Efficiently avoiding saddle points with zero order methods: No gradients required

1 code implementation • NeurIPS 2019 • Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras

We consider the case of derivative-free algorithms for non-convex optimization, also known as zero order algorithms, that use only function evaluations rather than gradients.

A Peek into the Unobservable: Hidden States and Bayesian Inference for the Bitcoin and Ether Price Series

no code implementations • 24 Sep 2019 • Constandina Koki, Stefanos Leonardos, Georgios Piliouras

Conventional financial models fail to explain the economic and monetary properties of cryptocurrencies due to the latter's dual nature: their usage as financial assets on the one side and their tight connection to the underlying blockchain structure on the other.

Bayesian Inference

Multiagent Evaluation under Incomplete Information

1 code implementation • NeurIPS 2019 • Mark Rowland, Shayegan Omidshafiei, Karl Tuyls, Julien Perolat, Michal Valko, Georgios Piliouras, Remi Munos

This paper investigates the evaluation of learned multiagent strategies in the incomplete information setting, which plays a critical role in ranking and training of agents.

Finite Regret and Cycles with Fixed Step-Size via Alternating Gradient Descent-Ascent

no code implementations • 9 Jul 2019 • James P. Bailey, Gauthier Gidel, Georgios Piliouras

Gradient descent is arguably one of the most popular online optimization methods with a wide array of applications.

Computer Science and Game Theory • Dynamical Systems • Optimization and Control
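The simultaneous-versus-alternating distinction behind the paper's title can be illustrated on the toy bilinear game $f(x, y) = xy$: simultaneous gradient descent-ascent spirals outward, while the alternating variant stays on bounded orbits. This is a hedged sketch under that assumed example; the step size and function names are illustrative, not the paper's code:

```python
import numpy as np

def simultaneous_gda(z, eta, steps):
    """Simultaneous GDA on f(x, y) = x * y: both players update from the
    same iterate; the squared norm grows by (1 + eta^2) every step."""
    x, y = z
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x   # y uses the OLD x
    return np.array([x, y])

def alternating_gda(z, eta, steps):
    """Alternating GDA: x moves first, then y responds to the NEW x.
    The update map has determinant one, so orbits stay bounded (cycles)."""
    x, y = z
    for _ in range(steps):
        x = x - eta * y
        y = y + eta * x                   # uses the already-updated x
    return np.array([x, y])
```

Running both from the same start shows the qualitative gap: the simultaneous iterates diverge while the alternating iterates cycle near their initial energy level.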

Fast and Furious Learning in Zero-Sum Games: Vanishing Regret with Non-Vanishing Step Sizes

no code implementations • NeurIPS 2019 • James P. Bailey, Georgios Piliouras

We show for the first time, to our knowledge, that online learning in zero-sum games can reconcile two seemingly contradictory objectives: vanishing time-average regret and non-vanishing step sizes.

Optimistic mirror descent in saddle-point problems: Going the extra(-gradient) mile

no code implementations • ICLR 2019 • Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, Georgios Piliouras

Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond.

Multi-Agent Learning in Network Zero-Sum Games is a Hamiltonian System

no code implementations • 5 Mar 2019 • James P. Bailey, Georgios Piliouras

Specifically, we show that no matter the size, or network structure of such closed economies, even if agents use different online learning dynamics from the standard class of Follow-the-Regularized-Leader, they yield Hamiltonian dynamics.

α-Rank: Multi-Agent Evaluation by Evolution

1 code implementation • 4 Mar 2019 • Shayegan Omidshafiei, Christos Papadimitriou, Georgios Piliouras, Karl Tuyls, Mark Rowland, Jean-Baptiste Lespiau, Wojciech M. Czarnecki, Marc Lanctot, Julien Perolat, Remi Munos

We introduce α-Rank, a principled evolutionary-dynamics methodology for the evaluation and ranking of agents in large-scale multi-agent interactions, grounded in a novel dynamical game-theoretic solution concept called Markov-Conley chains (MCCs).

Mathematical Proofs

Short-distance commuters in the smart city

1 code implementation • 16 Feb 2019 • Francisco Benita, Garvit Bansal, Georgios Piliouras, Bige Tunçer

This study models and examines commuters' preferences for short-distance transportation modes, namely walking, taking a bus, or riding a metro.

Venn GAN: Discovering Commonalities and Particularities of Multiple Distributions

1 code implementation • 9 Feb 2019 • Yasin Yazici, Bruno Lecouat, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, Vijay Chandrasekhar

We propose a GAN design which models multiple distributions effectively and discovers their commonalities and particularities.

Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile

no code implementations • 7 Jul 2018 • Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, Georgios Piliouras

Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond.

Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos

no code implementations • NeurIPS 2017 • Gerasimos Palaiopanos, Ioannis Panageas, Georgios Piliouras

Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action $\gamma$ is multiplied by $(1 -\epsilon)^{C(\gamma)}$ even for the simplest case of two-agent, two-strategy load balancing games, where such dynamics can provably lead to limit cycles or even chaotic behavior.
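The two nearly homologous update rules contrasted in the abstract can be sketched as follows. This is an illustrative sketch only; the cost vectors, step size, and function names are assumptions, not the paper's code:

```python
import numpy as np

def mwu_linear(p, costs, eps):
    """Linear MWU: the weight of action gamma is multiplied by
    (1 - eps * C(gamma)), then renormalized. This is the variant whose
    convergence in congestion games the abstract refers to."""
    q = p * (1.0 - eps * costs)
    return q / q.sum()

def mwu_exponential(p, costs, eps):
    """The nearly homologous variant from the abstract: the weight of
    action gamma is multiplied by (1 - eps) ** C(gamma). Per the paper,
    this one can produce limit cycles or even chaos."""
    q = p * (1.0 - eps) ** costs
    return q / q.sum()
```

For small `eps` and costs in $[0, 1]$ the two multipliers nearly agree (since $(1-\epsilon)^c \approx 1 - \epsilon c$), which is what makes the divergence in their long-run behavior striking.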

First-order Methods Almost Always Avoid Saddle Points

no code implementations • 20 Oct 2017 • Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, Benjamin Recht

We establish that first-order methods avoid saddle points for almost all initializations.

Cycles in adversarial regularized learning

no code implementations • 8 Sep 2017 • Panayotis Mertikopoulos, Christos Papadimitriou, Georgios Piliouras

Regularized learning is a fundamental technique in online optimization, machine learning and many other fields of computer science.

Learning Agents in Black-Scholes Financial Markets: Consensus Dynamics and Volatility Smiles

no code implementations • 25 Apr 2017 • Tushar Vaidya, Carlos Murguia, Georgios Piliouras

Black-Scholes (BS) is the standard mathematical model for option pricing in financial markets.

Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions

no code implementations • 2 May 2016 • Ioannis Panageas, Georgios Piliouras

Given a non-convex twice differentiable cost function $f$, we prove that the set of initial conditions so that gradient descent converges to saddle points where $\nabla^2 f$ has at least one strictly negative eigenvalue has (Lebesgue) measure zero, even for cost functions $f$ with non-isolated critical points, answering an open question in [Lee, Simchowitz, Jordan, Recht, COLT 2016].

Open-Ended Question Answering
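A minimal sketch of the saddle-avoidance phenomenon described in the abstract, using the assumed toy function $f(x, y) = x^2 - y^2$ (whose origin is a strict saddle: the Hessian $\mathrm{diag}(2, -2)$ has a strictly negative eigenvalue). The step size and names are illustrative assumptions:

```python
import numpy as np

def gradient_descent(x0, grad, eta=0.1, steps=200):
    """Plain gradient descent: x_{k+1} = x_k - eta * grad(x_k)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * grad(x)
    return x

# Gradient of f(x, y) = x^2 - y^2; the origin is a strict saddle.
grad_f = lambda v: np.array([2.0 * v[0], -2.0 * v[1]])
```

Initializations with $y_0 = 0$ (a measure-zero set, the stable manifold of the saddle) converge to the origin, while any perturbation in $y$ is amplified by a factor $1 + 2\eta$ per step and escapes, mirroring the measure-zero statement above.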
