no code implementations • 28 Nov 2023 • Mohammad Reza Karimi, Ya-Ping Hsieh, Andreas Krause
Many problems in machine learning can be formulated as solving entropy-regularized optimal transport on the space of probability measures.
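In the discrete setting, entropy-regularized OT is typically solved with Sinkhorn's alternating-scaling iterations. The sketch below is only an illustration of that finite-dimensional case (the paper itself works on the space of probability measures, and this is not its algorithm); `eps` is the entropic regularization strength:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=200):
    """Entropy-regularized OT between discrete measures a, b with
    cost matrix C, solved by Sinkhorn's alternating scalings."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # match column marginals to b
        u = a / (K @ v)                  # match row marginals to a
    return u[:, None] * K * v[None, :]   # transport plan

# two uniform measures on three points, squared-distance cost
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(np.full(3, 1 / 3), np.full(3, 1 / 3), C)
```

The returned plan matches the prescribed marginals up to the tolerance of the fixed-point iteration; smaller `eps` approaches unregularized OT at the price of slower, less numerically stable scalings.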
1 code implementation • 15 Jun 2023 • Matteo Pariset, Ya-Ping Hsieh, Charlotte Bunne, Andreas Krause, Valentin De Bortoli
Schrödinger bridges (SBs) provide an elegant framework for modeling the temporal evolution of populations in physical, chemical, or biological systems.

2 code implementations • 22 Feb 2023 • Vignesh Ram Somnath, Matteo Pariset, Ya-Ping Hsieh, Maria Rodriguez Martinez, Andreas Krause, Charlotte Bunne
Diffusion Schrödinger bridges (DSB) have recently emerged as a powerful framework for recovering stochastic dynamics via their marginal observations at different time points.
no code implementations • 14 Jul 2022 • Tatjana Chavdarova, Ya-Ping Hsieh, Michael I. Jordan
Algorithms that solve zero-sum games, multi-objective optimization problems, or, more generally, variational inequality (VI) problems are notoriously unstable on general instances.
no code implementations • 14 Jun 2022 • Mohammad Reza Karimi, Ya-Ping Hsieh, Panayotis Mertikopoulos, Andreas Krause
We examine a wide class of stochastic approximation algorithms for solving (stochastic) nonlinear problems on Riemannian manifolds.
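As a toy instance of optimization on a Riemannian manifold, gradient descent on the unit sphere projects the Euclidean gradient onto the tangent space and retracts back to the manifold. This is a deterministic sketch for illustration, not the stochastic approximation schemes the paper analyzes:

```python
import numpy as np

def riemannian_gd(grad, x0, eta=0.1, steps=100):
    """Gradient descent on the unit sphere: project the Euclidean
    gradient onto the tangent space, step, then retract by normalizing."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = grad(x)
        g_tan = g - (g @ x) * x       # tangent-space projection at x
        x = x - eta * g_tan
        x = x / np.linalg.norm(x)     # retraction back to the sphere
    return x

# minimizing x^T A x over the sphere recovers the eigenvector
# associated with the smallest eigenvalue of A
A = np.diag([3.0, 2.0, 1.0])
x = riemannian_gd(lambda v: 2 * A @ v, np.array([1.0, 1.0, 1.0]))
```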
no code implementations • 8 Jun 2022 • Panayotis Mertikopoulos, Ya-Ping Hsieh, Volkan Cevher
We develop a flexible stochastic approximation framework for analyzing the long-run behavior of learning in games (both continuous and finite).
no code implementations • 11 Feb 2022 • Charlotte Bunne, Ya-Ping Hsieh, Marco Cuturi, Andreas Krause
The static optimal transport (OT) problem between Gaussians seeks to recover an optimal map, or more generally a coupling, to morph a Gaussian into another.
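For reference, the Gaussian-to-Gaussian case admits a well-known closed-form Monge map, T(x) = m2 + A(x - m1) with A = S1^{-1/2} (S1^{1/2} S2 S1^{1/2})^{1/2} S1^{-1/2}. A minimal sketch of that standard formula (not the paper's contribution, which concerns the dynamic/entropic setting):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

def gaussian_ot_map(m1, S1, m2, S2):
    """Closed-form OT map between N(m1, S1) and N(m2, S2):
    T(x) = m2 + A (x - m1)."""
    S1h = psd_sqrt(S1)
    S1h_inv = np.linalg.inv(S1h)
    A = S1h_inv @ psd_sqrt(S1h @ S2 @ S1h) @ S1h_inv
    return lambda x: m2 + A @ (x - m1)

# map the standard Gaussian to N([1, 1], diag(4, 9))
T = gaussian_ot_map(np.zeros(2), np.eye(2),
                    np.array([1.0, 1.0]), np.diag([4.0, 9.0]))
```

Here A reduces to diag(2, 3), so the map simply rescales each coordinate and shifts the mean.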
no code implementations • ICML 2020 • Maria-Luiza Vladarean, Ahmet Alacaoglu, Ya-Ping Hsieh, Volkan Cevher
We propose two novel conditional gradient-based methods for solving structured stochastic convex optimization problems with a large number of linear constraints.
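A plain, deterministic conditional gradient (Frank-Wolfe) iteration illustrates the mechanism behind such methods: each step calls a linear minimization oracle over the feasible set instead of a projection. The paper's methods additionally handle stochastic gradients and many linear constraints; the `lmo` below is a hypothetical oracle for the probability simplex:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=100):
    """Conditional gradient: move toward the LMO answer with
    the standard 2/(t+2) step size; iterates stay feasible."""
    x = x0
    for t in range(steps):
        s = lmo(grad(x))                 # argmin_{s in C} <grad(x), s>
        gamma = 2.0 / (t + 2.0)          # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination
    return x

# minimize ||x - b||^2 over the probability simplex (optimum is b)
b = np.array([0.1, 0.2, 0.7])
lmo = lambda g: np.eye(3)[np.argmin(g)]  # simplex LMO returns a vertex
x = frank_wolfe(lambda x: 2 * (x - b), lmo, np.full(3, 1 / 3))
```

The O(1/t) rate means the iterate is only moderately close to b after 100 steps, but feasibility (nonnegativity, sum to one) holds exactly at every iteration.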
no code implementations • 16 Jun 2020 • Ya-Ping Hsieh, Panayotis Mertikopoulos, Volkan Cevher
Compared to ordinary function minimization problems, min-max optimization algorithms encounter far greater challenges because of the existence of periodic cycles and similar phenomena.
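The classic toy example of such cycling is f(x, y) = xy: simultaneous gradient descent-ascent spirals outward from the equilibrium at the origin, while the extragradient method, one standard remedy, converges. A minimal sketch:

```python
def gda(x, y, eta=0.1, steps=2000):
    """Simultaneous gradient descent-ascent on f(x, y) = x*y;
    each step multiplies the distance to the origin by sqrt(1 + eta^2)."""
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def extragradient(x, y, eta=0.1, steps=2000):
    """Extragradient: a look-ahead step tames the rotation."""
    for _ in range(steps):
        xh, yh = x - eta * y, y + eta * x   # extrapolation (look-ahead)
        x, y = x - eta * yh, y + eta * xh   # update with look-ahead gradients
    return x, y

xg, yg = gda(1.0, 1.0)             # diverges
xe, ye = extragradient(1.0, 1.0)   # converges to the saddle (0, 0)
```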
1 code implementation • 14 Feb 2020 • Parameswaran Kamalaruban, Yu-Ting Huang, Ya-Ping Hsieh, Paul Rolland, Cheng Shi, Volkan Cevher
We introduce a sampling perspective to tackle the challenging task of training robust Reinforcement Learning (RL) agents.
no code implementations • ICLR 2019 • Ya-Ping Hsieh, Chen Liu, Volkan Cevher
We reconsider the training objective of Generative Adversarial Networks (GANs) from the mixed Nash Equilibria (NE) perspective.
no code implementations • ICML 2018 • Ehsan Asadi Kangarshahi, Ya-Ping Hsieh, Mehmet Fatih Sahin, Volkan Cevher
We propose a simple algorithmic framework that simultaneously achieves the best rates for both honest and adversarial regret, and resolves the open problem of removing the logarithmic terms in convergence to the value of the game.
no code implementations • NeurIPS 2018 • Ya-Ping Hsieh, Ali Kavis, Paul Rolland, Volkan Cevher
We consider the problem of sampling from constrained distributions, which has posed significant challenges to both non-asymptotic analysis and algorithmic design.
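A simple baseline in this setting (not the paper's algorithm) is projected Langevin dynamics: take an unadjusted Langevin step, then project back onto the constraint set. A sketch for a standard Gaussian restricted to the interval [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def projected_langevin(grad_logp, project, x0, step=0.01, n=5000):
    """Unadjusted Langevin step followed by Euclidean projection
    onto the constraint set."""
    x = x0
    samples = []
    for _ in range(n):
        x = x + step * grad_logp(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        x = project(x)                    # enforce the constraint
        samples.append(x)
    return np.array(samples)

samples = projected_langevin(
    grad_logp=lambda x: -x,                  # grad log-density of N(0, 1)
    project=lambda x: np.clip(x, 0.0, 1.0),  # projection onto [0, 1]
    x0=np.array([0.5]),
)
```

Every iterate satisfies the constraint by construction; the projection, however, introduces bias at the boundary, which is one of the analytic difficulties the abstract alludes to.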
no code implementations • 26 Feb 2018 • Ya-Ping Hsieh, Volkan Cevher
Information concentration of probability measures has important implications in learning theory.
no code implementations • NeurIPS 2015 • David E. Carlson, Edo Collins, Ya-Ping Hsieh, Lawrence Carin, Volkan Cevher
These challenges include, but are not limited to, the non-convexity of learning objectives and the estimation of quantities needed by optimization algorithms, such as gradients.