no code implementations • NeurIPS 2020 • Oscar Chang, Lampros Flokas, Hod Lipson, Michael Spranger
We propose an MNIST-based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general.
no code implementations • ICLR 2020 • Oscar Chang, Lampros Flokas, Hod Lipson
Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner.
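The core idea above — one network emitting the weights of another — can be sketched in a few lines. This is a hypothetical minimal illustration, not the paper's architecture: the hypernetwork here is a plain linear map from a task embedding to the weights of a one-neuron main network.

```python
import math

def hypernetwork(z, H):
    # Linear hypernetwork: main-net weights w = H @ z.
    # In practice H would be trained by backpropagating through the main net.
    return [sum(hij * zj for hij, zj in zip(row, z)) for row in H]

def main_net(x, w):
    # One-neuron main network: tanh(w . x), using the generated weights w.
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

z = [1.0, -0.5]                # assumed task embedding
H = [[0.3, 0.1], [-0.2, 0.4]]  # hypernetwork parameters (the trainable part)
w = hypernetwork(z, H)         # generated main-network weights: [0.25, -0.4]
y = main_net([2.0, 1.0], w)
```

Because `w` is a differentiable function of `H`, gradients of the main net's loss flow back into the hypernetwork end to end.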
no code implementations • 1 Jun 2023 • Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Georgios Piliouras
Although such techniques are known to allow for improved convergence guarantees in small games, it has been much harder to analyze them in more relevant settings with large populations of agents.
no code implementations • 26 Aug 2021 • Yejia Liu, Weiyuan Wu, Lampros Flokas, Jiannan Wang, Eugene Wu
The SQL-based training data debugging framework has proved effective at fixing this kind of issue in a non-federated learning setting.
no code implementations • NeurIPS 2021 • Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras
Inspired by this, we study standard gradient descent ascent (GDA) dynamics in a specific class of non-convex non-concave zero-sum games that we call hidden zero-sum games.
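To see why GDA dynamics are delicate even in simple zero-sum settings, consider a toy sketch (a bilinear game, not the paper's hidden games): simultaneous GDA on f(x, y) = x·y spirals away from the equilibrium at the origin instead of converging.

```python
def gda(x, y, lr=0.1, steps=100):
    # Simultaneous gradient descent ascent on f(x, y) = x * y:
    # x minimizes (df/dx = y), y maximizes (df/dy = x).
    for _ in range(steps):
        gx, gy = y, x
        x, y = x - lr * gx, y + lr * gy
    return x, y

x, y = gda(1.0, 1.0)
# Each step multiplies the squared distance to (0, 0) by (1 + lr**2) > 1,
# so the iterates spiral outward rather than converging to the equilibrium.
```

This divergence on bilinear games is the standard motivating example for studying when and where GDA does converge.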
no code implementations • NeurIPS 2020 • Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Thanasis Lianeas, Panayotis Mertikopoulos, Georgios Piliouras
Understanding the behavior of no-regret dynamics in general $N$-player games is a fundamental question in online learning and game theory.
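A canonical instance of the no-regret dynamics mentioned above is multiplicative weights. The sketch below is illustrative (a single player facing a fixed loss sequence, with an assumed step size), not the paper's setting of general N-player games.

```python
import math

def multiplicative_weights(losses, eta=0.1):
    # No-regret update: exponentially downweight actions by their losses,
    # then renormalize to keep a probability distribution over actions.
    n = len(losses[0])
    w = [1.0] * n
    for loss in losses:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w  # final mixed strategy

# Action 0 always incurs lower loss, so its probability mass should dominate.
probs = multiplicative_weights([[0.0, 1.0]] * 50)
```

In a game, every player runs such an update against the losses induced by the others' strategies, and the question is what the joint dynamics converge to (if anything).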
1 code implementation • 12 Apr 2020 • Weiyuan Wu, Lampros Flokas, Eugene Wu, Jiannan Wang
As the need for machine learning (ML) increases rapidly across all industry sectors, there is significant interest among commercial database providers in supporting "Query 2.0", which integrates model inference into SQL queries.
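The "inference inside SQL" idea can be sketched with a user-defined function. This is a hedged illustration only: the linear `predict` below is a hypothetical stand-in model, and sqlite3 UDF registration is used in place of whatever mechanism a commercial engine would provide.

```python
import sqlite3

def predict(x):
    # Hypothetical placeholder model; a real system would invoke a trained model.
    return 2.0 * x + 1.0

conn = sqlite3.connect(":memory:")
conn.create_function("predict", 1, predict)   # expose the model to SQL
conn.execute("CREATE TABLE readings (x REAL)")
conn.executemany("INSERT INTO readings VALUES (?)", [(1.0,), (2.0,), (3.0,)])

# Model inference runs per row, inside the query itself.
rows = conn.execute("SELECT x, predict(x) FROM readings ORDER BY x").fetchall()
```

The appeal is that predictions compose with ordinary relational operators (filters, joins, aggregates) without exporting data out of the database.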
1 code implementation • NeurIPS 2019 • Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras
We consider the case of derivative-free algorithms for non-convex optimization, also known as zero order algorithms, that use only function evaluations rather than gradients.
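The basic building block of such zeroth-order methods is estimating a gradient from function evaluations alone. A minimal sketch, with assumed step sizes and a toy quadratic objective (not the paper's algorithm):

```python
def zo_gradient(f, x, h=1e-5):
    # Central finite differences: approximate df/dx_i using only evaluations of f.
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def zo_descent(f, x, lr=0.1, steps=200):
    # Gradient descent driven entirely by the zeroth-order estimate.
    for _ in range(steps):
        g = zo_gradient(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Minimize f(x) = sum of squares using function evaluations only.
x_star = zo_descent(lambda x: sum(xi * xi for xi in x), [1.0, -2.0])
```

Each estimated coordinate costs two function evaluations, which is why the query complexity of zero-order methods scales with dimension.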
1 code implementation • NeurIPS 2019 • Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras
We study a wide class of non-convex non-concave min-max games that generalizes over standard bilinear zero-sum games.
no code implementations • WS 2018 • Abdul Khan, Subhadarshi Panda, Jia Xu, Lampros Flokas
Furthermore, we applied ensemble learning on training models of intermediate epochs and achieved an improvement of 4.02 BLEU points over the baseline.
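The ensembling idea above — combining checkpoints from intermediate epochs rather than independently trained models — can be sketched as simple prediction averaging. The three-distribution example is hypothetical; the paper's models are NMT systems, not toy vectors.

```python
def ensemble(prob_lists):
    # Average the output distributions of several checkpoints, position-wise.
    n = len(prob_lists)
    return [sum(ps) / n for ps in zip(*prob_lists)]

# Three "checkpoints" from different epochs, each emitting a distribution
# over the same (toy) vocabulary for one decoding step.
avg = ensemble([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
])
```

Averaging valid distributions yields a valid distribution, so the ensemble can be decoded exactly like a single model.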