Search Results for author: Yu-Guan Hsieh

Found 12 papers, 2 papers with code

Careful with that Scalpel: Improving Gradient Surgery with an EMA

no code implementations • 5 Feb 2024 • Yu-Guan Hsieh, James Thornton, Eugene Ndiaye, Michal Klein, Marco Cuturi, Pierre Ablin

Beyond minimizing a single training loss, many deep learning estimation pipelines rely on an auxiliary objective to quantify and encourage desirable properties of the model (e.g., performance on another dataset, robustness, agreement with a prior).
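
As a rough illustration of the kind of update at play, here is a minimal sketch of a generic gradient-surgery step in which the auxiliary gradient is projected away from the main gradient's direction, with an exponential moving average (EMA) smoothing that direction. The variable names, projection rule, and combined update are assumptions for exposition, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch only: project the auxiliary gradient to remove its
# component along an EMA-smoothed estimate of the main training gradient,
# then take a combined step. Not the algorithm from the paper above.
def surgery_step(theta, grad_main, grad_aux, ema, beta=0.9, lam=0.1, lr=1e-2):
    ema = beta * ema + (1 - beta) * grad_main            # EMA of main gradient
    d = ema / (np.linalg.norm(ema) + 1e-12)              # smoothed direction
    grad_aux_proj = grad_aux - np.dot(grad_aux, d) * d   # strip conflicting part
    theta = theta - lr * (grad_main + lam * grad_aux_proj)
    return theta, ema

theta, ema = np.zeros(4), np.zeros(4)
theta, ema = surgery_step(theta, np.ones(4), np.array([1., -1., 0., 0.]), ema)
```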

Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation

1 code implementation • 26 Sep 2023 • Shih-Ying Yeh, Yu-Guan Hsieh, Zhidong Gao, Bernard B W Yang, Giyeong Oh, Yanmin Gong

Text-to-image generative models have garnered immense attention for their ability to produce high-fidelity images from text prompts.
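
Since the title refers to LyCORIS-style fine-tuning, a minimal sketch of the low-rank adaptation (LoRA) idea that LyCORIS generalizes may help; the shapes, initialization, and scaling below are illustrative assumptions, not the library's API.

```python
import numpy as np

# Sketch of low-rank adaptation: the frozen weight W is augmented with a
# trainable low-rank update B @ A, so only r * (d_in + d_out) parameters
# are learned during fine-tuning.
d_out, d_in, r = 64, 64, 4
W = np.random.randn(d_out, d_in)      # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection (zero init)
alpha = 1.0                           # scaling hyperparameter

def forward(x):
    return (W + alpha * B @ A) @ x    # only A and B are updated

y = forward(np.random.randn(d_in))
```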

Thompson Sampling with Diffusion Generative Prior

no code implementations • 12 Jan 2023 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton, Patrick Blöbaum

In this work, we initiate the idea of using denoising diffusion models to learn priors for online decision making problems.

Tasks: Decision Making, Denoising, +2 more
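
A minimal sketch of the Thompson-sampling loop with a pluggable prior sampler; the Gaussian stub below stands in for the diffusion-model sampling and is an assumption for illustration, not the paper's method.

```python
import numpy as np

# Thompson sampling: sample a plausible reward model, act greedily on it.
# `sample_from_prior` is a placeholder; in the paper this role is played by
# a denoising diffusion model trained on related tasks.
rng = np.random.default_rng(0)
K = 3

def sample_from_prior(history):
    means = np.array([np.mean(r) if r else 0.0 for r in history])
    return rng.normal(means, 1.0)      # stub posterior sample

history = [[] for _ in range(K)]
true_means = np.array([0.1, 0.5, 0.9])
for t in range(100):
    theta = sample_from_prior(history)     # sampled model of the arms
    arm = int(np.argmax(theta))            # greedy action on the sample
    history[arm].append(rng.normal(true_means[arm], 1.0))
```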

No-Regret Learning in Games with Noisy Feedback: Faster Rates and Adaptivity via Learning Rate Separation

no code implementations • 13 Jun 2022 • Yu-Guan Hsieh, Kimon Antonakopoulos, Volkan Cevher, Panayotis Mertikopoulos

We examine the problem of regret minimization when the learner is involved in a continuous game with other optimizing agents: in this case, if all players follow a no-regret algorithm, it is possible to achieve significantly lower regret relative to fully adversarial environments.
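
For reference, the standard quantity being controlled (the textbook definition, not anything specific to this paper): with losses $\ell_t$ and a fixed comparator $u$,

\[
\mathrm{Reg}_T(u) = \sum_{t=1}^{T} \bigl( \ell_t(x_t) - \ell_t(u) \bigr),
\]

and an algorithm is no-regret when $\sup_u \mathrm{Reg}_T(u) = o(T)$.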

Push–Pull with Device Sampling

no code implementations • 8 Jun 2022 • Yu-Guan Hsieh, Yassine Laguel, Franck Iutzeler, Jérôme Malick

We consider decentralized optimization problems in which a number of agents collaborate to minimize the average of their local functions by exchanging over an underlying communication graph.
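
In symbols, the standard problem template this describes (notation assumed for illustration) is

\[
\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\]

where agent $i$ can evaluate only its local $f_i$ and exchanges information solely with its neighbors in the communication graph.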

Uplifting Bandits

no code implementations • 8 Jun 2022 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton

We introduce a multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of them.

Tasks: Marketing, Recommendation Systems
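
A minimal sketch of the reward structure just described, assuming hypothetical component means and per-arm uplifts; this is an illustrative environment, not the paper's model specification.

```python
import numpy as np

# The reward is a sum of K random variables; pulling arm a shifts the means
# of only the components in affected[a], leaving the rest at their baseline.
rng = np.random.default_rng(0)
K = 5                                       # number of reward components
baseline = rng.normal(0.0, 1.0, K)          # baseline mean of each component
affected = {0: [0, 1], 1: [2], 2: [3, 4]}   # arm -> components it uplifts
uplift = {0: 0.5, 1: 1.0, 2: 0.3}           # hypothetical per-arm uplift

def pull(arm):
    means = baseline.copy()
    means[affected[arm]] += uplift[arm]
    components = rng.normal(means, 1.0)     # one noisy draw per component
    return components.sum()                 # observed reward is the sum

print(pull(1))
```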

Optimization in Open Networks via Dual Averaging

no code implementations • 27 May 2021 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

In networks of autonomous agents (e.g., fleets of vehicles, scattered sensors), the problem of minimizing the sum of the agents' local functions has received a lot of interest.

Tasks: Distributed Optimization
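
For background, the classical (centralized) dual-averaging template the title refers to, with step size $\eta_t$ and regularizer $\psi$ (standard notation, not the paper's open-network variant):

\[
x_{t+1} = \operatorname*{arg\,min}_{x \in \mathcal{X}} \Bigl\{ \Bigl\langle \sum_{s=1}^{t} g_s,\, x \Bigr\rangle + \frac{1}{\eta_t}\, \psi(x) \Bigr\}.
\]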

Adaptive Learning in Continuous Games: Optimal Regret Bounds and Convergence to Nash Equilibrium

no code implementations • 26 Apr 2021 • Yu-Guan Hsieh, Kimon Antonakopoulos, Panayotis Mertikopoulos

In game-theoretic learning, several agents are simultaneously following their individual interests, so the environment is non-stationary from each player's perspective.
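
One standard adaptive step-size rule of the kind such results build on (an AdaGrad-style choice stated as background, not the paper's exact policy):

\[
\eta_t = \frac{\eta}{\sqrt{1 + \sum_{s=1}^{t-1} \lVert g_s \rVert^2}},
\]

which requires no prior knowledge of the noise level or the horizon.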

Multi-Agent Online Optimization with Delays: Asynchronicity, Adaptivity, and Optimism

no code implementations • 21 Dec 2020 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

In this paper, we provide a general framework for studying multi-agent online learning problems in the presence of delays and asynchronicities.
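
As a point of reference, the textbook delayed-feedback update (not this paper's algorithm): if the gradient $g_s$ computed at round $s$ arrives only at round $s + d_s$, the learner updates with whatever has arrived,

\[
x_{t+1} = \Pi_{\mathcal{X}}\Bigl( x_t - \eta \sum_{s \,:\, s + d_s = t} g_s \Bigr).
\]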

Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling

no code implementations • NeurIPS 2020 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning.
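
The extragradient template in question (standard form, writing $V$ for the stochastic gradient operator of the saddle-point problem):

\[
x_{t+1/2} = x_t - \gamma_t V(x_t), \qquad x_{t+1} = x_t - \gamma_t V(x_{t+1/2}),
\]

i.e., an exploratory half-step followed by an update from the base point, which is exactly the "explore aggressively, update conservatively" pattern in the title.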

On the convergence of single-call stochastic extra-gradient methods

no code implementations • NeurIPS 2019 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

Variational inequalities have recently attracted considerable interest in machine learning as a flexible paradigm for models that go beyond ordinary loss function minimization (such as generative adversarial networks and related deep learning systems).
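
For context, the single-call (past extra-gradient, or optimistic) variant replaces the extra oracle query with the previous one, so each iteration needs only one operator evaluation (standard form, stated here as background):

\[
x_{t+1/2} = x_t - \gamma_t V(x_{t-1/2}), \qquad x_{t+1} = x_t - \gamma_t V(x_{t+1/2}).
\]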

Classification from Positive, Unlabeled and Biased Negative Data

1 code implementation • ICLR 2019 • Yu-Guan Hsieh, Gang Niu, Masashi Sugiyama

In binary classification, negative (N) data are sometimes too diverse to be fully labeled, and one often resorts to positive-unlabeled (PU) learning in such scenarios.

Tasks: Binary Classification, Classification, +1 more
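
For background on the risk estimators this line of work builds on, here is a sketch of the standard non-negative PU risk estimator (Kiryo et al., 2017); the paper above extends the setting with biased negative data, so the code below is that prior baseline, not the paper's estimator.

```python
import numpy as np

# Non-negative PU risk: positives contribute as class +1, the negative-class
# risk is estimated from unlabeled data with the positive part subtracted,
# and the result is clipped at zero to prevent it from going negative.
# pi_p is the class prior of the positive class; loss is a surrogate loss.
def nn_pu_risk(scores_p, scores_u, pi_p, loss=lambda z: np.log1p(np.exp(-z))):
    risk_p_pos = loss(scores_p).mean()                   # positives as +1
    risk_p_neg = loss(-scores_p).mean()                  # positives as -1
    risk_u_neg = loss(-scores_u).mean()                  # unlabeled as -1
    neg_part = max(0.0, risk_u_neg - pi_p * risk_p_neg)  # clip at zero
    return pi_p * risk_p_pos + neg_part

rng = np.random.default_rng(0)
r = nn_pu_risk(rng.normal(1, 1, 100), rng.normal(-0.5, 1, 500), pi_p=0.3)
```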
