Search Results for author: Tianyi Lin

Found 34 papers, 4 papers with code

Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback

no code implementations · 21 Oct 2023 · Michael I. Jordan, Tianyi Lin, Zhengyuan Zhou

Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or monotonicity assumptions: (1) in the single-agent setting, it achieves an optimal regret of $\Theta(\log T)$ for strongly convex cost functions; and (2) in the multi-agent setting of strongly monotone games, with each agent employing OGD, we obtain last-iterate convergence of the joint action to the unique Nash equilibrium at an optimal rate of $\Theta(\frac{1}{T})$.
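As a minimal, illustrative sketch of the single-agent case (assuming a user-supplied gradient oracle; this is not code from the paper), the step-size schedule $\eta_t = 1/(\mu t)$ is the classical choice behind the $O(\log T)$ regret bound:

```python
import numpy as np

def ogd_strongly_convex(grad, x0, mu, T, proj=lambda x: x):
    """Online gradient descent with the classical step size
    eta_t = 1/(mu * t), which gives O(log T) regret when each
    cost function is mu-strongly convex."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, T + 1):
        x = proj(x - grad(x, t) / (mu * t))  # eta_t = 1/(mu * t)
    return x
```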

A Specialized Semismooth Newton Method for Kernel-Based Optimal Transport

no code implementations · 21 Oct 2023 · Tianyi Lin, Marco Cuturi, Michael I. Jordan

Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.

Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds

no code implementations · 29 Jun 2023 · Yang Cai, Michael I. Jordan, Tianyi Lin, Argyris Oikonomou, Emmanouil-Vasileios Vlatakis-Gkaragkounis

Numerous applications in machine learning and data analytics can be formulated as equilibrium computation over Riemannian manifolds.

Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee

no code implementations · 23 Oct 2022 · Tianyi Lin, Panayotis Mertikopoulos, Michael I. Jordan

We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.

Second-order methods
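For reference, a global saddle point of a convex-concave $f$ is a point $(x^*, y^*)$ with $f(x^*, y) \le f(x^*, y^*) \le f(x, y^*)$ for all $(x, y)$; equivalently, it is a zero of the monotone operator $F(x, y) = (\nabla_x f(x, y), -\nabla_y f(x, y))$, the object on which Newton-type updates act.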

Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization

no code implementations · 12 Sep 2022 · Tianyi Lin, Zeyu Zheng, Michael I. Jordan

Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, yet two core challenges impede the development of efficient solution methods with finite-time convergence guarantees: the lack of a computationally tractable optimality criterion and the lack of computationally powerful oracles.

Decision Making
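(For context: a tractable surrogate criterion commonly adopted in this nonsmooth nonconvex setting is $(\delta, \epsilon)$-Goldstein stationarity, i.e., $\min\{\|g\| : g \in \partial_\delta f(x)\} \le \epsilon$, where $\partial_\delta f(x) = \mathrm{conv}\big(\bigcup_{y \in \mathbb{B}_\delta(x)} \partial f(y)\big)$ is the Goldstein $\delta$-subdifferential.)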

Understanding Performance of Long-Document Ranking Models through Comprehensive Evaluation and Leaderboarding

3 code implementations · 4 Jul 2022 · Leonid Boytsov, David Akinpelu, Tianyi Lin, Fangwei Gao, Yutian Zhao, Jeffrey Huang, Eric Nyberg

Most other models had poor zero-shot performance (sometimes at the level of a random baseline) but outstripped MaxP by as much as 13-28% after fine-tuning.

Benchmarking · Document Ranking

First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces

no code implementations · 4 Jun 2022 · Michael I. Jordan, Tianyi Lin, Emmanouil-Vasileios Vlatakis-Gkaragkounis

From optimal transport to robust dimensionality reduction, a plethora of machine learning applications can be cast as min-max optimization problems over Riemannian manifolds.

Dimensionality Reduction

Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback

no code implementations · 15 May 2022 · Tianyi Lin, Aldo Pacchiano, Yaodong Yu, Michael I. Jordan

Motivated by applications to online learning in sparse estimation and Bayesian optimization, we consider the problem of online unconstrained nonsubmodular minimization with delayed costs in both full information and bandit feedback settings.

Bayesian Optimization

Perseus: A Simple and Optimal High-Order Method for Variational Inequalities

no code implementations · 6 May 2022 · Tianyi Lin, Michael I. Jordan

Our method with restarting attains a linear rate for smooth and uniformly monotone VIs and a local superlinear rate for smooth and strongly monotone VIs.

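For reference, the (Stampacchia) variational inequality associated with an operator $F$ and a set $\mathcal{X}$ asks for $x^* \in \mathcal{X}$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \in \mathcal{X}$; a high-order method is one that exploits higher-order smoothness of $F$.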

First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems

no code implementations · 7 Apr 2022 · Michael I. Jordan, Tianyi Lin, Manolis Zampetakis

We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs) in which the strategy sets for each player are defined by equality and inequality constraints that may depend on the choices of rival players.

Doubly Optimal No-Regret Online Learning in Strongly Monotone Games with Bandit Feedback

1 code implementation · 6 Dec 2021 · Wenjia Ba, Tianyi Lin, Jiawei Zhang, Zhengyuan Zhou

Leveraging self-concordant barrier functions, we first construct a new bandit learning algorithm and show that it achieves the single-agent optimal regret of $\tilde{\Theta}(n\sqrt{T})$ under smooth and strongly concave reward functions ($n \geq 1$ is the problem dimension).

Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization

no code implementations · 27 Apr 2021 · Yaodong Yu, Tianyi Lin, Eric Mazumdar, Michael I. Jordan

Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications -- reflecting the need for classifiers and predictive models that are robust to the distribution shifts that arise from phenomena such as selection bias or nonstationarity.

Selection bias

A Variational Inequality Approach to Bayesian Regression Games

no code implementations · 24 Mar 2021 · Wenshuo Guo, Michael I. Jordan, Tianyi Lin

Bayesian regression games are a special class of two-player general-sum Bayesian games in which the learner is partially informed about the adversary's objective through a Bayesian prior.

regression · Stochastic Optimization

On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification

no code implementations · 22 Jun 2020 · Tianyi Lin, Zeyu Zheng, Elynn Y. Chen, Marco Cuturi, Michael I. Jordan

Yet, the behavior of minimum Wasserstein estimators is poorly understood, notably in high-dimensional regimes or under model misspecification.

Projection Robust Wasserstein Distance and Riemannian Optimization

no code implementations · NeurIPS 2020 · Tianyi Lin, Chenyou Fan, Nhat Ho, Marco Cuturi, Michael I. Jordan

Projection robust Wasserstein (PRW) distance, or Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance.

Riemannian optimization
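For reference, the $k$-dimensional PRW distance is defined by maximizing over projections, $\mathcal{P}_k(\mu, \nu) = \max_{E \in \mathcal{G}_{k, d}} W_2(\pi^E_{\#}\mu, \pi^E_{\#}\nu)$, where $\mathcal{G}_{k, d}$ is the set of $k$-dimensional subspaces of $\mathbb{R}^d$ and $\pi^E$ is the orthogonal projection onto $E$; optimizing over subspaces is what makes Riemannian optimization the natural computational tool.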

Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games

no code implementations · ICML 2020 · Tianyi Lin, Zhengyuan Zhou, Panayotis Mertikopoulos, Michael I. Jordan

In this paper, we consider multi-agent learning via online gradient descent in a class of games called $\lambda$-cocoercive games, a fairly broad class of games that admits many Nash equilibria and that properly includes unconstrained strongly monotone games.

Fixed-Support Wasserstein Barycenters: Computational Hardness and Fast Algorithm

no code implementations · NeurIPS 2020 · Tianyi Lin, Nhat Ho, Xi Chen, Marco Cuturi, Michael I. Jordan

We study the fixed-support Wasserstein barycenter problem (FS-WBP), which consists in computing the Wasserstein barycenter of $m$ discrete probability measures supported on a finite metric space of size $n$.

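For reference, with barycentric weights $\omega_i \ge 0$ summing to one, FS-WBP solves $\min_{\mu} \sum_{i=1}^m \omega_i W_2^2(\mu, \mu_i)$ over measures $\mu$ supported on a prescribed set of $n$ atoms (the squared 2-Wasserstein distance is a representative choice here); once the support is fixed, the problem becomes a large linear program.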

Near-Optimal Algorithms for Minimax Optimization

no code implementations · 5 Feb 2020 · Tianyi Lin, Chi Jin, Michael I. Jordan

This paper presents the first algorithm with $\tilde{O}(\sqrt{\kappa_{\mathbf x}\kappa_{\mathbf y}})$ gradient complexity, matching the lower bound up to logarithmic factors.


On the Complexity of Approximating Multimarginal Optimal Transport

no code implementations · 30 Sep 2019 · Tianyi Lin, Nhat Ho, Marco Cuturi, Michael I. Jordan

This provides the first near-linear-time complexity guarantee for approximating the MOT problem and matches the best known complexity bound for the Sinkhorn algorithm in the classical OT setting when $m = 2$.

On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems

no code implementations · ICML 2020 · Tianyi Lin, Chi Jin, Michael I. Jordan

We consider nonconvex-concave minimax problems, $\min_{\mathbf{x}} \max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x}, \mathbf{y})$, where $f$ is nonconvex in $\mathbf{x}$ but concave in $\mathbf{y}$ and $\mathcal{Y}$ is a convex and bounded set.
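As a minimal illustration of the scheme under study (a generic simultaneous two-timescale GDA loop with assumed gradient oracles, not the paper's code):

```python
import numpy as np

def two_timescale_gda(grad_x, grad_y, x0, y0, eta_x, eta_y, T,
                      proj_y=lambda y: y):
    """Gradient descent on x and projected gradient ascent on y,
    run simultaneously with separate step sizes (typically eta_x
    much smaller than eta_y in the nonconvex-concave regime)."""
    x = np.asarray(x0, dtype=float)
    y = np.asarray(y0, dtype=float)
    for _ in range(T):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - eta_x * gx           # descent step on x
        y = proj_y(y + eta_y * gy)   # ascent step on y, kept in Y
    return x, y
```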

On the Efficiency of Entropic Regularized Algorithms for Optimal Transport

no code implementations · 1 Jun 2019 · Tianyi Lin, Nhat Ho, Michael I. Jordan

We prove that APDAMD achieves the complexity bound of $\widetilde{O}(n^2\sqrt{\delta}\varepsilon^{-1})$ in which $\delta>0$ stands for the regularity of $\phi$.

On Structured Filtering-Clustering: Global Error Bound and Optimal First-Order Algorithms

no code implementations · 16 Apr 2019 · Nhat Ho, Tianyi Lin, Michael I. Jordan

We also conduct experiments on real datasets and the numerical results demonstrate the effectiveness of our algorithms.

Clustering

On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms

no code implementations · 19 Jan 2019 · Tianyi Lin, Nhat Ho, Michael I. Jordan

We show that a greedy variant of the classical Sinkhorn algorithm, known as the Greenkhorn algorithm, admits an improved complexity bound of $\widetilde{\mathcal{O}}(n^2\varepsilon^{-2})$, which improves on the best previously known bound of $\widetilde{\mathcal{O}}(n^2\varepsilon^{-3})$.

Data Structures and Algorithms
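A minimal sketch of the Greenkhorn idea (marginal violations are measured here by absolute error for simplicity; the paper's analysis uses a particular distance function for the greedy choice):

```python
import numpy as np

def greenkhorn(C, r, c, reg, max_iter=10000, tol=1e-8):
    """Greedy Sinkhorn (Greenkhorn): each iteration rescales only the
    one row or column of the transport plan whose marginal deviates
    most from its target, rather than all rows/columns at once."""
    K = np.exp(-C / reg)                  # Gibbs kernel
    u, v = np.ones(len(r)), np.ones(len(c))
    for _ in range(max_iter):
        P = u[:, None] * K * v[None, :]   # current transport plan
        row_err = P.sum(axis=1) - r
        col_err = P.sum(axis=0) - c
        i, j = np.argmax(np.abs(row_err)), np.argmax(np.abs(col_err))
        if max(abs(row_err[i]), abs(col_err[j])) < tol:
            break
        if abs(row_err[i]) >= abs(col_err[j]):
            u[i] = r[i] / (K[i, :] @ v)   # exactly fix row i's marginal
        else:
            v[j] = c[j] / (u @ K[:, j])   # exactly fix column j's marginal
    return u[:, None] * K * v[None, :]
```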

Sparsemax and Relaxed Wasserstein for Topic Sparsity

no code implementations · 22 Oct 2018 · Tianyi Lin, Zhiyue Hu, Xin Guo

As topic sparsity of individual documents in online social media increases, so does the difficulty of analyzing the online text sources using traditional methods.

Improved Sample Complexity for Stochastic Compositional Variance Reduced Gradient

1 code implementation · 1 Jun 2018 · Tianyi Lin, Chenyou Fan, Mengdi Wang, Michael I. Jordan

Convex composition optimization is an emerging topic that covers a wide range of applications arising from stochastic optimal control, reinforcement learning and multi-stage stochastic programming.

Reinforcement Learning (RL)

An ADMM-Based Interior-Point Method for Large-Scale Linear Programming

1 code implementation · 31 May 2018 · Tianyi Lin, Shiqian Ma, Yinyu Ye, Shuzhong Zhang

Due to its connection to Newton's method, IPM is often classified as a second-order method -- a class of methods associated with stability and accuracy at the expense of scalability.

Optimization and Control

Improved Oracle Complexity of Variance Reduced Methods for Nonsmooth Convex Stochastic Composition Optimization

no code implementations · 7 Feb 2018 · Tianyi Lin, Chenyou Fan, Mengdi Wang

We consider the nonsmooth convex composition optimization problem where the objective is a composition of two finite-sum functions and analyze stochastic compositional variance reduced gradient (SCVRG) methods for them.

On the Iteration Complexity Analysis of Stochastic Primal-Dual Hybrid Gradient Approach with High Probability

no code implementations · 22 Jan 2018 · Linbo Qiao, Tianyi Lin, Qi Qin, Xicheng Lu

In this paper, we propose a stochastic Primal-Dual Hybrid Gradient (PDHG) approach for solving a wide spectrum of regularized stochastic minimization problems, where the regularization term is composite with a linear function.
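For reference, the deterministic PDHG template for $\min_x g(x) + h(Ax)$ alternates $y^{k+1} = \mathrm{prox}_{\sigma h^*}(y^k + \sigma A \bar{x}^k)$, $x^{k+1} = \mathrm{prox}_{\tau g}(x^k - \tau A^{\top} y^{k+1})$, and the extrapolation $\bar{x}^{k+1} = x^{k+1} + \theta (x^{k+1} - x^k)$; stochastic variants along these lines replace exact gradients or prox inputs with sampled estimates.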

Stochastic Primal-Dual Proximal ExtraGradient Descent for Compositely Regularized Optimization

no code implementations · 20 Aug 2017 · Tianyi Lin, Linbo Qiao, Teng Zhang, Jiashi Feng, Bofeng Zhang

This optimization model abstracts a number of important applications in artificial intelligence and machine learning, such as fused Lasso, fused logistic regression, and a class of graph-guided regularized minimization.

regression

Relaxed Wasserstein with Applications to GANs

no code implementations · 19 May 2017 · Xin Guo, Johnny Hong, Tianyi Lin, Nan Yang

Wasserstein Generative Adversarial Networks (WGANs) provide a versatile class of models, which have attracted great attention in various applications.

Image Generation

Structured Nonconvex and Nonsmooth Optimization: Algorithms and Iteration Complexity Analysis

no code implementations · 9 May 2016 · Bo Jiang, Tianyi Lin, Shiqian Ma, Shuzhong Zhang

In particular, we consider in this paper some constrained nonconvex optimization models in block decision variables, with or without coupled affine constraints.

Global Convergence of Unmodified 3-Block ADMM for a Class of Convex Minimization Problems

no code implementations · 16 May 2015 · Tianyi Lin, Shiqian Ma, Shuzhong Zhang

The alternating direction method of multipliers (ADMM) has been successfully applied to solve structured convex optimization problems due to its superior practical performance.
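For reference, for $\min f_1(x_1) + f_2(x_2) + f_3(x_3)$ subject to $A_1 x_1 + A_2 x_2 + A_3 x_3 = b$, the unmodified 3-block ADMM performs a Gauss-Seidel sweep $x_i^{k+1} = \operatorname{argmin}_{x_i} \mathcal{L}_\gamma(x_1^{k+1}, \dots, x_i, \dots, x_3^k; \lambda^k)$ for $i = 1, 2, 3$, followed by the dual update $\lambda^{k+1} = \lambda^k - \gamma (A_1 x_1^{k+1} + A_2 x_2^{k+1} + A_3 x_3^{k+1} - b)$, where $\mathcal{L}_\gamma$ denotes the augmented Lagrangian.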

An Extragradient-Based Alternating Direction Method for Convex Minimization

no code implementations · 27 Jan 2013 · Tianyi Lin, Shiqian Ma, Shuzhong Zhang

The classical alternating direction type methods usually assume that the two convex functions have relatively easy proximal mappings.
