no code implementations • 5 Feb 2024 • Yu-Guan Hsieh, James Thornton, Eugene Ndiaye, Michal Klein, Marco Cuturi, Pierre Ablin
Beyond minimizing a single training loss, many deep learning estimation pipelines rely on an auxiliary objective to quantify and encourage desirable properties of the model (e.g., performance on another dataset, robustness, agreement with a prior).
1 code implementation • 26 Sep 2023 • Shih-Ying Yeh, Yu-Guan Hsieh, Zhidong Gao, Bernard B W Yang, Giyeong Oh, Yanmin Gong
Text-to-image generative models have garnered immense attention for their ability to produce high-fidelity images from text prompts.
no code implementations • 12 Jan 2023 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton, Patrick Blöbaum
In this work, we initiate the idea of using denoising diffusion models to learn priors for online decision making problems.
no code implementations • 13 Jun 2022 • Yu-Guan Hsieh, Kimon Antonakopoulos, Volkan Cevher, Panayotis Mertikopoulos
We examine the problem of regret minimization when the learner is involved in a continuous game with other optimizing agents: in this case, if all players follow a no-regret algorithm, it is possible to achieve significantly lower regret relative to fully adversarial environments.
no code implementations • 8 Jun 2022 • Yu-Guan Hsieh, Yassine Laguel, Franck Iutzeler, Jérôme Malick
We consider decentralized optimization problems in which a number of agents collaborate to minimize the average of their local functions by exchanging over an underlying communication graph.
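As an illustration of this setting (not the paper's own algorithm), the following sketch runs plain decentralized gradient descent on toy local quadratics: each agent mixes its iterate with its neighbours' via a doubly stochastic matrix over a ring graph, then takes a local gradient step. The graph, step size, and local functions are all hypothetical choices for the demo.

```python
import numpy as np

# Each of n agents holds a local quadratic f_i(x) = 0.5 * (x - b_i)^2;
# the minimizer of the average (1/n) * sum_i f_i is mean(b).
rng = np.random.default_rng(0)
n = 5
b = rng.normal(size=n)

# Ring communication graph with a doubly stochastic mixing matrix W.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

x = np.zeros(n)   # one scalar iterate per agent
step = 0.01
for _ in range(500):
    # Gossip step (average with neighbours), then local gradient step.
    x = W @ x - step * (x - b)

# With a small constant step, all agents end up near consensus at mean(b).
```

With a constant step size the iterates only reach a neighbourhood of the optimum whose radius shrinks with the step; exact convergence requires diminishing steps or gradient-tracking corrections.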
no code implementations • 8 Jun 2022 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton
We introduce a multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of them.
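A minimal simulation of this reward structure (a toy sketch, not the paper's estimator): the reward is a sum of K base variables, and each arm shifts the means of only the variables it affects, leaving the rest at a shared baseline. All numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4                                    # number of base random variables
baseline_means = np.array([0.1, 0.2, 0.3, 0.4])
affected = {0: [0], 1: [1, 2]}           # which variables each arm alters
shifted_means = {0: {0: 0.9}, 1: {1: 0.5, 2: 0.5}}

def pull(arm):
    means = baseline_means.copy()
    for k in affected[arm]:
        means[k] = shifted_means[arm][k]
    # Reward is the sum of K Gaussian draws, one per base variable.
    return rng.normal(means, 0.1).sum()

# Expected reward of arm 0: 0.9 + 0.2 + 0.3 + 0.4 = 1.8
# Expected reward of arm 1: 0.1 + 0.5 + 0.5 + 0.4 = 1.5
est = {a: np.mean([pull(a) for _ in range(2000)]) for a in (0, 1)}
```

A learner that observes the per-variable draws (rather than only their sum) can share the baseline statistics across arms, which is the kind of structure such a model lets a bandit algorithm exploit.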
no code implementations • 27 May 2021 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
In networks of autonomous agents (e.g., fleets of vehicles, scattered sensors), the problem of minimizing the sum of the agents' local functions has received considerable interest.
no code implementations • 26 Apr 2021 • Yu-Guan Hsieh, Kimon Antonakopoulos, Panayotis Mertikopoulos
In game-theoretic learning, several agents are simultaneously following their individual interests, so the environment is non-stationary from each player's perspective.
no code implementations • 21 Dec 2020 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
In this paper, we provide a general framework for studying multi-agent online learning problems in the presence of delays and asynchronicities.
no code implementations • NeurIPS 2020 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning.
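The basic extragradient update can be sketched on the classic bilinear saddle-point problem min_x max_y x·y, where simultaneous gradient descent-ascent spirals outward but extragradient converges to the saddle point (0, 0). The step size and iteration count below are illustrative choices.

```python
# Extragradient on min_x max_y f(x, y) = x * y.
# grad_x f = y (descend), grad_y f = x (ascend).
x, y = 1.0, 1.0
step = 0.3
for _ in range(200):
    # Extrapolation (leading) step: probe the vector field.
    x_half = x - step * y
    y_half = y + step * x
    # Update step: move from the base point using the probed gradients.
    x, y = x - step * y_half, y + step * x_half

# (x, y) ends up very close to the saddle point (0, 0).
```

The key difference from plain descent-ascent is that the final update uses gradients evaluated at the extrapolated point, which damps the rotation of the bilinear vector field instead of amplifying it.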
no code implementations • NeurIPS 2019 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
Variational inequalities have recently attracted considerable interest in machine learning as a flexible paradigm for models that go beyond ordinary loss function minimization (such as generative adversarial networks and related deep learning systems).
1 code implementation • ICLR 2019 • Yu-Guan Hsieh, Gang Niu, Masashi Sugiyama
In binary classification, there are situations where the negative (N) data are too diverse to be fully labeled, and in such scenarios we often resort to positive-unlabeled (PU) learning.
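For context, PU learning estimates the negative-class risk from unlabeled and positive data alone. Below is a sketch of the standard non-negative PU risk estimator that this line of work builds on (not this paper's specific method); the sigmoid loss and the example scores are illustrative choices.

```python
import numpy as np

def sigmoid_loss(margin):
    # l(z) = sigmoid(-z): small when the margin is large and positive.
    return 1.0 / (1.0 + np.exp(margin))

def nnpu_risk(scores_p, scores_u, pi):
    """Non-negative PU risk with class prior pi.

    R(g) = pi * R_P^+(g) + max(0, R_U^-(g) - pi * R_P^-(g)),
    where the negative-class risk is estimated from unlabeled data
    minus a positive-data correction, clipped at zero.
    """
    risk_p_pos = pi * sigmoid_loss(scores_p).mean()
    risk_neg = sigmoid_loss(-scores_u).mean() - pi * sigmoid_loss(-scores_p).mean()
    return risk_p_pos + max(risk_neg, 0.0)

# Toy check: confident positive scores, mixed unlabeled scores.
r = nnpu_risk(np.array([3.0, 4.0, 5.0]), np.array([-3.0, -4.0, 3.0]), pi=1 / 3)
```

Without the clipping, the plain unbiased estimator can go negative under flexible models, which is what drives overfitting in deep PU learning.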