Search Results for author: Qihang Lin

Found 41 papers, 4 papers with code

Transparency Promotion with Model-Agnostic Linear Competitors

no code implementations ICML 2020 Hassan Rafique, Tong Wang, Qihang Lin, Arshia Singhani

We propose a novel type of hybrid model for multi-class classification, which utilizes competing linear models to collaborate with an existing black-box model, promoting transparency in the decision-making process.

Decision Making Multi-class Classification
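
As a rough illustration of the hybrid idea described above (and not the paper's actual algorithm), the Python sketch below routes each instance either to a transparent linear model or to a black-box model depending on the linear model's confidence; the confidence-threshold gating rule, the stand-in models, and the threshold value are all illustrative assumptions.

```python
# Hedged sketch of a hybrid predictor: a transparent linear model answers the
# instances it is confident about, and the remaining instances fall back to the
# black-box model. The confidence-threshold gating rule is an illustrative
# assumption, not the routing scheme proposed in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier   # stands in for the black box
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
linear = LogisticRegression(max_iter=1000).fit(X, y)

def hybrid_predict(X, threshold=0.8):
    proba = linear.predict_proba(X)
    use_linear = proba.max(axis=1) >= threshold       # confident -> transparent path
    preds = black_box.predict(X)                       # default: black-box prediction
    preds[use_linear] = proba[use_linear].argmax(axis=1)
    return preds, use_linear.mean()                    # predictions + "transparency" share

preds, transparency = hybrid_predict(X)
print(f"fraction of decisions made by the linear model: {transparency:.2f}")
```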

First-order Methods for Affinely Constrained Composite Non-convex Non-smooth Problems: Lower Complexity Bound and Near-optimal Methods

no code implementations 14 Jul 2023 Wei Liu, Qihang Lin, Yangyang Xu

In this paper, we make the first attempt to establish lower complexity bounds of FOMs for solving a class of composite non-convex non-smooth optimization with linear constraints.

Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints

no code implementations 23 Dec 2022 Yao Yao, Qihang Lin, Tianbao Yang

In this work, we formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.

Fairness

ProtoX: Explaining a Reinforcement Learning Agent via Prototyping

2 code implementations 6 Nov 2022 Ronilo J. Ragodos, Tong Wang, Qihang Lin, Xun Zhou

To teach ProtoX about visual similarity, we pre-train an encoder with self-supervised contrastive learning to recognize states as similar if they occur close together in time and receive the same action from the black-box agent.

Contrastive Learning Imitation Learning +3
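
As a hedged sketch of the positive-pair construction described above (the data layout and window size are assumptions, not details from the paper), two states form a positive pair for contrastive pre-training when they are close in time and the black-box agent takes the same action in both:

```python
# Hedged sketch: build positive pairs for contrastive pre-training from a
# trajectory, treating two states as similar when they are within a small time
# window of each other and received the same action from the black-box agent.
def contrastive_pairs(states, actions, window=3):
    """states: list of observations; actions: the black-box agent's action per state."""
    pairs = []
    for i in range(len(states)):
        for j in range(i + 1, min(i + 1 + window, len(states))):
            if actions[i] == actions[j]:                  # same action from the agent
                pairs.append((states[i], states[j]))      # temporally close positives
    return pairs

# Example: a short trajectory with integer-coded actions.
traj_states = ["s0", "s1", "s2", "s3", "s4"]
traj_actions = [0, 0, 1, 1, 1]
print(contrastive_pairs(traj_states, traj_actions, window=2))
```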

Federated Learning on Adaptively Weighted Nodes by Bilevel Optimization

no code implementations 21 Jul 2022 Yankun Huang, Qihang Lin, Nick Street, Stephen Baek

We propose a federated learning method with weighted nodes in which the weights can be modified to optimize the model's performance on a separate validation set.

Bilevel Optimization Federated Learning
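
The sketch below illustrates the general idea of aggregating client models with adaptive weights that are tuned against a separate validation loss; the finite-difference weight update is a simple stand-in for the paper's bilevel procedure, and all names and constants are illustrative.

```python
# Hedged sketch: weighted federated averaging where the node weights live on the
# probability simplex and are nudged to reduce a separate validation loss.
import numpy as np

def project_simplex(p):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(p)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(p)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(p + theta, 0)

def aggregate(client_models, p):
    return sum(w * m for w, m in zip(p, client_models))    # weighted model average

def adapt_weights(client_models, p, val_loss, step=0.1, eps=1e-3):
    base = val_loss(aggregate(client_models, p))
    grad = np.zeros_like(p)
    for k in range(len(p)):                                 # finite-difference gradient
        q = p.copy(); q[k] += eps
        grad[k] = (val_loss(aggregate(client_models, q)) - base) / eps
    return project_simplex(p - step * grad)                 # projected step on the weights

# Toy example: "models" are parameter vectors; the validation loss prefers client 0.
clients = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
val_loss = lambda w: np.sum((w - np.array([1.0, 0.0])) ** 2)
p = np.ones(3) / 3
for _ in range(30):
    p = adapt_weights(clients, p, val_loss)
print(np.round(p, 2))   # weight mass shifts toward client 0
```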

Large-scale Optimization of Partial AUC in a Range of False Positive Rates

no code implementations 3 Mar 2022 Yao Yao, Qihang Lin, Tianbao Yang

The partial AUC, as a generalization of the AUC, summarizes only the true positive rates (TPRs) over a specific range of false positive rates (FPRs) and is thus a more suitable performance measure in many real-world situations.
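
For concreteness, the snippet below computes the partial AUC over an FPR range [alpha, beta] as an evaluation measure; it illustrates the quantity being optimized, not the paper's large-scale optimization method, and the range and data are made up.

```python
# Hedged sketch: the partial AUC over an FPR range [alpha, beta], i.e. the area
# under the ROC curve restricted to that range, rescaled so a perfect classifier
# scores 1. This illustrates the evaluation measure, not the paper's optimizer.
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, y_score, alpha=0.05, beta=0.30):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    grid = np.linspace(alpha, beta, 1000)           # dense FPR grid inside [alpha, beta]
    tpr_interp = np.interp(grid, fpr, tpr)          # TPR along the ROC curve
    area = np.sum(0.5 * (tpr_interp[1:] + tpr_interp[:-1]) * np.diff(grid))
    return area / (beta - alpha)                    # rescale to [0, 1]

# Example with synthetic labels and informative but noisy scores.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
s = y + rng.normal(scale=1.0, size=500)
print(partial_auc(y, s, alpha=0.05, beta=0.30))
```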

Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization

no code implementations NeurIPS 2020 Yan Yan, Yi Xu, Qihang Lin, Wei Liu, Tianbao Yang

In this paper, we bridge this gap by providing a sharp analysis of epoch-wise stochastic gradient descent ascent method (referred to as Epoch-GDA) for solving strongly convex strongly concave (SCSC) min-max problems, without imposing any additional assumption about smoothness or the function's structure.

LEMMA
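
The following is a minimal sketch of an epoch-wise stochastic gradient descent ascent loop in the spirit of Epoch-GDA: plain SGDA with a fixed step size within each epoch, followed by a restart from the epoch average with a halved step size. The schedule, epoch lengths, and toy problem are assumptions rather than the paper's exact parameters.

```python
# Hedged sketch of an epoch-wise stochastic gradient descent-ascent loop.
import numpy as np

def epoch_gda(grad_x, grad_y, x0, y0, eta0=0.1, epochs=8, epoch_len=200, seed=0):
    rng = np.random.default_rng(seed)
    x, y, eta = x0.copy(), y0.copy(), eta0
    for _ in range(epochs):
        xs, ys = [], []
        for _ in range(epoch_len):
            xi = rng.standard_normal(x.shape)            # stochastic noise in the gradients
            x = x - eta * grad_x(x, y, xi)               # descent step on the min variable
            y = y + eta * grad_y(x, y, xi)               # ascent step on the max variable
            xs.append(x); ys.append(y)
        x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)  # restart next epoch from the average
        eta *= 0.5                                       # shrink the step size each epoch
    return x, y

# Toy SCSC saddle problem: min_x max_y 0.5*||x||^2 + x.y - 0.5*||y||^2 (plus small noise).
gx = lambda x, y, xi: x + y + 0.01 * xi                  # gradient with respect to x
gy = lambda x, y, xi: x - y + 0.01 * xi                  # gradient with respect to y
print(epoch_gda(gx, gy, np.ones(3), np.ones(3)))         # both blocks approach the saddle at 0
```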

Self-guided Approximate Linear Programs

1 code implementation 9 Jan 2020 Parshan Pakiman, Selvaprabu Nadarajah, Negar Soheili, Qihang Lin

Approximate linear programs (ALPs) are well-known models based on value function approximations (VFAs) to obtain policies and lower bounds on the optimal policy cost of discounted-cost Markov decision processes (MDPs).

Model-Agnostic Linear Competitors -- When Interpretable Models Compete and Collaborate with Black-Box Models

no code implementations 23 Sep 2019 Hassan Rafique, Tong Wang, Qihang Lin

Driven by an increasing need for model interpretability, interpretable models have become strong competitors for black-box models in many real applications.

A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints

no code implementations 7 Aug 2019 Qihang Lin, Selvaprabu Nadarajah, Negar Soheili, Tianbao Yang

We design a stochastic feasible level set method (SFLS) for SOECs that has low data complexity and emphasizes feasibility before convergence.

Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model

no code implementations 10 May 2019 Tong Wang, Qihang Lin

The interpretable model substitutes the black-box model on a subset of data where the black-box is overkill or nearly overkill, gaining transparency at little or no cost to predictive accuracy.

Interpretable Machine Learning

Stochastic Primal-Dual Algorithms with Faster Convergence than $O(1/\sqrt{T})$ for Problems without Bilinear Structure

no code implementations 23 Apr 2019 Yan Yan, Yi Xu, Qihang Lin, Lijun Zhang, Tianbao Yang

The main contribution of this paper is the design and analysis of new stochastic primal-dual algorithms that use a mixture of stochastic gradient updates and a logarithmic number of deterministic dual updates for solving a family of convex-concave problems with no bilinear structure assumed.

Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence

no code implementations 28 Nov 2018 Yi Xu, Qi Qi, Qihang Lin, Rong Jin, Tianbao Yang

In this paper, we propose new stochastic optimization algorithms and study their first-order convergence theories for solving a broad family of DC functions.

Stochastic Optimization
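
For intuition, one well-known member of this DC (difference-of-convex) family, assuming a convex loss $\ell$, is the $\ell_1 - \ell_2$ sparsity-regularized problem below; this is an illustrative instance, not necessarily the formulation analyzed in the paper.

$$
\min_{\mathbf w}\; F(\mathbf w) \;=\; \underbrace{\mathbb{E}_{\xi}\big[\ell(\mathbf w;\xi)\big] + \lambda\|\mathbf w\|_1}_{g(\mathbf w)\ \text{convex}} \;-\; \underbrace{\lambda\|\mathbf w\|_2}_{h(\mathbf w)\ \text{convex}},
$$

so that $F = g - h$ is a difference of two convex functions.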

First-order Convergence Theory for Weakly-Convex-Weakly-Concave Min-max Problems

no code implementations 24 Oct 2018 Mingrui Liu, Hassan Rafique, Qihang Lin, Tianbao Yang

In this paper, we consider first-order convergence theory and algorithms for solving a class of non-convex non-concave min-max saddle-point problems, whose objective function is weakly convex in the variables of minimization and weakly concave in the variables of maximization.

Weakly-Convex Concave Min-Max Optimization: Provable Algorithms and Applications in Machine Learning

no code implementations 4 Oct 2018 Hassan Rafique, Mingrui Liu, Qihang Lin, Tianbao Yang

Min-max problems have broad applications in machine learning, including learning with non-decomposable loss and learning with robustness to data distribution.

BIG-bench Machine Learning

A Unified Analysis of Stochastic Momentum Methods for Deep Learning

no code implementations 30 Aug 2018 Yan Yan, Tianbao Yang, Zhe Li, Qihang Lin, Yi Yang

However, the theoretical analysis of their convergence on the training objective and of their generalization error for prediction is still under-explored.

Level-Set Methods for Finite-Sum Constrained Convex Optimization

no code implementations ICML 2018 Qihang Lin, Runchao Ma, Tianbao Yang

To update the level parameter towards optimality, both methods require an oracle that generates upper and lower bounds as well as an affine minorant of the level function.
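
For orientation, a common form of the level function used by such level-set methods for a convex problem $\min\{f(\mathbf x) : g_i(\mathbf x) \le 0,\ i=1,\dots,m\}$ is shown below; the methods search for the root $\mathcal{L}(r) = 0$ (a paraphrased sketch of the construction, not a verbatim statement from the paper).

$$
\mathcal{L}(r) \;=\; \min_{\mathbf x}\; \max\big\{\, f(\mathbf x) - r,\; g_1(\mathbf x),\, \dots,\, g_m(\mathbf x) \,\big\},
$$

which is non-increasing in the level parameter $r$ and equals zero at the optimal value $r = f^*$.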

Prophit: Causal inverse classification for multiple continuously valued treatment policies

no code implementations 14 Feb 2018 Michael T. Lash, Qihang Lin, W. Nick Street

Inverse classification uses an induced classifier as a queryable oracle to guide test instances towards a preferred posterior class label.

Classification Gaussian Processes +1
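
As a hedged illustration of inverse classification with a queryable oracle, the sketch below treats a trained classifier's predicted probability as a black-box objective and moves a test instance's features toward the preferred class within a simple box budget; the model, budget, and optimizer are illustrative choices, not the paper's causal or Gaussian-process formulation.

```python
# Hedged sketch: query a trained classifier as an oracle and nudge an instance's
# features toward the preferred class, staying inside a per-feature budget.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)          # the induced classifier (oracle)

x0 = X[0].copy()                                            # instance to improve
budget = 0.5                                                # max per-feature change

def objective(x):
    # Minimize the predicted probability of the undesired class (label 0).
    return clf.predict_proba(x.reshape(1, -1))[0, 0]

bounds = [(v - budget, v + budget) for v in x0]             # stay within the budget
res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)

print("before:", clf.predict_proba(x0.reshape(1, -1))[0])
print("after: ", clf.predict_proba(res.x.reshape(1, -1))[0])
```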

Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter

no code implementations NeurIPS 2017 Yi Xu, Qihang Lin, Tianbao Yang

The most studied error bound is the quadratic error bound, which generalizes strong convexity and is satisfied by a large family of machine learning problems.

BIG-bench Machine Learning Stochastic Optimization
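
For reference, the quadratic error bound mentioned above is commonly stated, up to constants, in the form below, where $\mathcal{W}_*$ is the solution set and $F_*$ the optimal value; strong convexity with modulus $\mu$ implies it with $c = \mu/2$ (a standard form, paraphrased rather than quoted from the paper).

$$
c \cdot \operatorname{dist}\big(\mathbf w, \mathcal{W}_*\big)^2 \;\le\; F(\mathbf w) - F_* \qquad \text{for all } \mathbf w \text{ in a fixed sublevel set}.
$$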

ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization

no code implementations NeurIPS 2017 Yi Xu, Mingrui Liu, Qihang Lin, Tianbao Yang

The novelty of the proposed scheme lies in its adaptivity to a local sharpness property of the objective function, which marks the key difference from previous adaptive schemes that adjust the penalty parameter per iteration based on certain conditions on the iterates.

Stochastic Optimization

Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence

no code implementations ICML 2017 Yi Xu, Qihang Lin, Tianbao Yang

In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions.

Stochastic Optimization

Block-Normalized Gradient Method: An Empirical Study for Training Deep Neural Network

2 code implementations ICLR 2018 Adams Wei Yu, Lei Huang, Qihang Lin, Ruslan Salakhutdinov, Jaime Carbonell

In this paper, we propose a generic and simple strategy for utilizing stochastic gradient information in optimization.
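
A minimal sketch of the block-normalized idea, assuming parameters are grouped into named blocks (for example, one layer per block): each block's gradient is divided by its own norm before the update, so all blocks move a comparable distance. The block layout, step size, and toy objective are illustrative assumptions, not the exact recipe from the paper.

```python
# Hedged sketch of a block-normalized gradient step over named parameter blocks.
import numpy as np

def block_normalized_sgd_step(params, grads, lr=0.1, eps=1e-8):
    """params, grads: dicts mapping block name -> array of the same shape."""
    new_params = {}
    for name, w in params.items():
        g = grads[name]
        new_params[name] = w - lr * g / (np.linalg.norm(g) + eps)  # normalize per block
    return new_params

# Toy two-block quadratic with badly mismatched gradient scales.
params = {"layer1": np.array([5.0, -3.0]), "layer2": np.array([0.02, 0.01])}
for _ in range(60):
    grads = {name: 2 * w * (100.0 if name == "layer1" else 0.01)
             for name, w in params.items()}                # wildly different scales
    params = block_normalized_sgd_step(params, grads, lr=0.1)
print({k: np.round(v, 3) for k, v in params.items()})      # both blocks end up near 0
```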

Bayesian Decision Process for Cost-Efficient Dynamic Ranking via Crowdsourcing

no code implementations 21 Dec 2016 Xi Chen, Kevin Jiao, Qihang Lin

Rank aggregation based on pairwise comparisons over a set of items has a wide range of applications.

Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than $O(1/\epsilon)$

no code implementations NeurIPS 2016 Yi Xu, Yan Yan, Qihang Lin, Tianbao Yang

To the best of our knowledge, this is the lowest iteration complexity achieved so far for the considered non-smooth optimization problems without strong convexity assumption.

Generalized Inverse Classification

no code implementations 5 Oct 2016 Michael T. Lash, Qihang Lin, W. Nick Street, Jennifer G. Robinson, Jeffrey Ohlmann

To solve such a problem, we propose three real-valued heuristic-based methods and two sensitivity analysis-based comparison methods, each of which is evaluated on two freely available real-world datasets.

Classification General Classification

A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates

no code implementations ICML 2017 Tianbao Yang, Qihang Lin, Lijun Zhang

In this paper, we develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition of the constraint function.

Metric Learning

Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than $O(1/\epsilon)$

no code implementations NeurIPS 2016 Yi Xu, Yan Yan, Qihang Lin, Tianbao Yang

In this work, we show that the proposed HOPS achieves a lower iteration complexity of $\widetilde O(1/\epsilon^{1-\theta})$, where $\widetilde O(\cdot)$ suppresses a logarithmic factor.

Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition

no code implementations 4 Jul 2016 Yi Xu, Qihang Lin, Tianbao Yang

In particular, if the objective function $F(\mathbf w)$ in the $\epsilon$-sublevel set grows as fast as $\|\mathbf w - \mathbf w_*\|_2^{1/\theta}$, where $\mathbf w_*$ represents the closest optimal solution to $\mathbf w$ and $\theta\in(0, 1]$ quantifies the local growth rate, the iteration complexity of first-order stochastic optimization for achieving an $\epsilon$-optimal solution can be $\widetilde O(1/\epsilon^{2(1-\theta)})$, which is optimal at most up to a logarithmic factor.

Stochastic Optimization

Unified Convergence Analysis of Stochastic Momentum Methods for Convex and Non-convex Optimization

no code implementations 12 Apr 2016 Tianbao Yang, Qihang Lin, Zhe Li

This paper fills the gap between practice and theory by developing a basic convergence analysis of two stochastic momentum methods, namely stochastic heavy-ball method and the stochastic variant of Nesterov's accelerated gradient method.
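
For reference, the two updates analyzed there are commonly written as below, with stochastic gradient $g(\cdot)$, step size $\eta$, and momentum parameter $\beta$ (standard textbook forms, not notation copied from the paper).

$$
\text{heavy-ball:}\quad \mathbf w_{t+1} = \mathbf w_t - \eta\, g(\mathbf w_t) + \beta\,(\mathbf w_t - \mathbf w_{t-1}),
$$
$$
\text{Nesterov:}\quad \mathbf w_{t+1} = \mathbf w_t - \eta\, g\big(\mathbf w_t + \beta(\mathbf w_t - \mathbf w_{t-1})\big) + \beta\,(\mathbf w_t - \mathbf w_{t-1}).
$$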

RSG: Beating Subgradient Method without Smoothness and Strong Convexity

no code implementations 9 Dec 2015 Tianbao Yang, Qihang Lin

We show that, when applied to a broad class of convex optimization problems, the RSG method can find an $\epsilon$-optimal solution with a lower complexity than the SG method.

Stochastic subGradient Methods with Linear Convergence for Polyhedral Convex Optimization

no code implementations 6 Oct 2015 Tianbao Yang, Qihang Lin

In this paper, we show that simple Stochastic subGradient Descent methods with multiple Restarting, named RSGD, can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems where the epigraph of the objective function is a polyhedron, to which we refer as polyhedral convex optimization.

BIG-bench Machine Learning

Doubly Stochastic Primal-Dual Coordinate Method for Bilinear Saddle-Point Problem

no code implementations 14 Aug 2015 Adams Wei Yu, Qihang Lin, Tianbao Yang

We propose a doubly stochastic primal-dual coordinate optimization algorithm for empirical risk minimization, which can be formulated as a bilinear saddle-point problem.
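
For context, empirical risk minimization with convex losses $\phi_i$ (convex conjugates $\phi_i^*$), a data matrix $A$ with rows $\mathbf a_i^\top$, and a regularizer $g$ can be rewritten as the bilinear saddle-point problem below, which is the structure such primal-dual coordinate methods exploit (a standard reformulation, stated for orientation rather than quoted from the paper).

$$
\min_{\mathbf x}\; \frac{1}{n}\sum_{i=1}^n \phi_i(\mathbf a_i^\top \mathbf x) + g(\mathbf x)
\;=\;
\min_{\mathbf x}\;\max_{\mathbf y}\; \frac{1}{n}\,\mathbf y^\top A\mathbf x \;-\; \frac{1}{n}\sum_{i=1}^n \phi_i^*(y_i) \;+\; g(\mathbf x).
$$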

Distributed Stochastic Variance Reduced Gradient Methods and A Lower Bound for Communication Complexity

no code implementations 27 Jul 2015 Jason D. Lee, Qihang Lin, Tengyu Ma, Tianbao Yang

We also prove a lower bound for the number of rounds of communication for a broad class of distributed first-order methods including the proposed algorithms in this paper.

Distributed Optimization

Fast Sparse Least-Squares Regression with Non-Asymptotic Guarantees

no code implementations 18 Jul 2015 Tianbao Yang, Lijun Zhang, Qihang Lin, Rong Jin

In this paper, we study a fast approximation method for the large-scale, high-dimensional sparse least-squares regression problem by exploiting Johnson-Lindenstrauss (JL) transforms, which embed a set of high-dimensional vectors into a low-dimensional space.

regression
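
The sketch below shows the general recipe in miniature: embed a tall least-squares problem into a lower dimension with a random (JL-style) Gaussian projection and fit a sparse model on the sketched data. The sketch size, regularization strength, and choice of projection are illustrative assumptions, not the specific transform or guarantees from the paper.

```python
# Hedged sketch: sketch-and-solve for sparse least squares with a Gaussian JL-type projection.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, m = 5000, 200, 500                        # samples, features, sketch size
A = rng.standard_normal((n, d))
x_true = np.zeros(d); x_true[:10] = 1.0         # sparse ground truth
b = A @ x_true + 0.1 * rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)    # JL-type Gaussian sketching matrix
SA, Sb = S @ A, S @ b                           # sketched design and response

model = Lasso(alpha=0.01).fit(SA, Sb)           # solve the much smaller sparse problem
print("support recovered:", np.flatnonzero(np.abs(model.coef_) > 0.1))
```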

An Accelerated Proximal Coordinate Gradient Method

no code implementations NeurIPS 2014 Qihang Lin, Zhaosong Lu, Lin Xiao

We develop an accelerated randomized proximal coordinate gradient (APCG) method for solving a broad class of composite convex optimization problems.

On Data Preconditioning for Regularized Loss Minimization

no code implementations 13 Aug 2014 Tianbao Yang, Rong Jin, Shenghuo Zhu, Qihang Lin

In this work, we study data preconditioning, a well-known and long-existing technique, for boosting the convergence of first-order methods for regularized loss minimization.

Statistical Decision Making for Optimal Budget Allocation in Crowd Labeling

no code implementations 12 Mar 2014 Xi Chen, Qihang Lin, Dengyong Zhou

In crowd labeling, a large number of unlabeled data instances are outsourced to a crowd of workers.

Decision Making

Optimal Stochastic Strongly Convex Optimization with a Logarithmic Number of Projections

no code implementations 19 Apr 2013 Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang

We consider stochastic strongly convex optimization with a complex inequality constraint.

Optimal Regularized Dual Averaging Methods for Stochastic Optimization

no code implementations NeurIPS 2012 Xi Chen, Qihang Lin, Javier Pena

We develop a novel algorithm based on the regularized dual averaging (RDA) method, that can simultaneously achieve the optimal convergence rates for both convex and strongly convex loss.

Stochastic Optimization
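
For orientation, the basic regularized dual averaging update that this builds on has the form below, where $\bar g_t$ is the running average of the stochastic (sub)gradients, $\Psi$ the regularizer, $h$ a strongly convex prox-function, and $\{\beta_t\}$ a nonnegative nondecreasing sequence (the standard RDA template, not the paper's specific parameter choices).

$$
\mathbf w_{t+1} \;=\; \arg\min_{\mathbf w}\Big\{ \langle \bar g_t, \mathbf w\rangle + \Psi(\mathbf w) + \tfrac{\beta_t}{t}\, h(\mathbf w) \Big\},
\qquad
\bar g_t = \frac{1}{t}\sum_{\tau=1}^{t} g_\tau .
$$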
