Search Results for author: Lijun Zhang

Found 77 papers, 7 papers with code

Projection-free Distributed Online Convex Optimization with $O(\sqrt{T})$ Communication Complexity

no code implementations ICML 2020 Yuanyu Wan, Wei-Wei Tu, Lijun Zhang

To deal with complicated constraints via locally light computation in distributed online learning, a recent study presented a projection-free algorithm called distributed online conditional gradient (D-OCG), which achieves an $O(T^{3/4})$ regret bound, where $T$ is the number of prediction rounds.
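
As a hedged illustration of the projection-free idea behind conditional-gradient methods (not the D-OCG algorithm itself), the sketch below replaces the projection step of online gradient descent with a linear optimization oracle over an assumed $\ell_2$-ball constraint:

    import numpy as np

    def linear_minimizer(grad, radius=1.0):
        """Linear optimization oracle over an l2 ball: argmin_{||v|| <= radius} <grad, v>."""
        norm = np.linalg.norm(grad)
        return np.zeros_like(grad) if norm == 0 else -radius * grad / norm

    def conditional_gradient_step(x, grad, step):
        """One Frank-Wolfe style update: move toward the oracle's output
        instead of projecting back onto the feasible set."""
        v = linear_minimizer(grad)
        return (1 - step) * x + step * v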

HRegNet: A Hierarchical Network for Large-scale Outdoor LiDAR Point Cloud Registration

no code implementations26 Jul 2021 Fan Lu, Guang Chen, Yinlong Liu, Lijun Zhang, Sanqing Qu, Shu Liu, Rongqi Gu

Extensive experiments are conducted on two large-scale outdoor LiDAR point cloud datasets to demonstrate the high accuracy and efficiency of the proposed HRegNet.

Point Cloud Registration

Rethinking Hard-Parameter Sharing in Multi-Task Learning

no code implementations23 Jul 2021 Lijun Zhang, Qizheng Yang, Xiao Liu, Hui Guan

(2) A multi-task model with a small proportion of task-specific parameters from bottom layers can achieve performance competitive with independent models trained separately on each task and outperform a state-of-the-art MTL framework.

Fine-Grained Image Classification, Multi-Task Learning

Probabilistic Verification of Neural Networks Against Group Fairness

no code implementations18 Jul 2021 Bing Sun, Jun Sun, Ting Dai, Lijun Zhang

Our approach has been evaluated with multiple models trained on benchmark datasets, and the experimental results show that our approach is effective and efficient.

Fairness

Momentum Accelerates the Convergence of Stochastic AUPRC Maximization

no code implementations2 Jul 2021 Guanghui Wang, Ming Yang, Lijun Zhang, Tianbao Yang

In this paper, we study stochastic optimization of the area under the precision-recall curve (AUPRC), which is widely used for imbalanced classification tasks.

Imbalanced Classification, Stochastic Optimization

Ensemble Defense with Data Diversity: Weak Correlation Implies Strong Robustness

no code implementations5 Jun 2021 Renjue Li, Hanwei Zhang, Pengfei Yang, Cheng-Chao Huang, Aimin Zhou, Bai Xue, Lijun Zhang

In this paper, we propose a framework of filter-based ensembles of deep neural networks (DNNs) to defend against adversarial attacks.

A Simple yet Universal Strategy for Online Convex Optimization

no code implementations8 May 2021 Lijun Zhang, Guanghui Wang, JinFeng Yi, Tianbao Yang

In this paper, we propose a simple strategy for universal online convex optimization, which avoids these limitations.

Randomized Stochastic Variance-Reduced Methods for Multi-Task Stochastic Bilevel Optimization

no code implementations5 May 2021 Zhishuai Guo, Quanqi Hu, Lijun Zhang, Tianbao Yang

Although numerous studies have proposed stochastic algorithms for solving these problems, they are limited in two respects: (i) their sample complexities are high and do not match the state-of-the-art result for non-convex stochastic optimization; (ii) their algorithms are tailored to problems with only one lower-level problem.

Bilevel Optimization, Stochastic Optimization

Invariant Subspace Approach to Boolean (Control) Networks

no code implementations18 Apr 2021 Daizhan Cheng, Lijun Zhang, Dongyao Bi

Then the invariant subspace of a Boolean control network (BCN) is also proposed.

Online Strongly Convex Optimization with Unknown Delays

no code implementations21 Mar 2021 Yuanyu Wan, Wei-Wei Tu, Lijun Zhang

Specifically, we first extend the delayed variant of OGD for strongly convex functions, and establish a better regret bound of $O(d\log T)$, where $d$ is the maximum delay.
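
As a rough sketch of the delayed-feedback setting (a toy model, not the paper's algorithm or step-size schedule), the snippet below applies each gradient of a strongly convex loss only when it becomes available:

    import numpy as np

    def delayed_ogd(dim, T, arrivals, modulus=1.0):
        """Online gradient descent where arrivals[t] lists the (possibly delayed)
        gradients that become available at round t."""
        x = np.zeros(dim)
        applied = 0
        for t in range(1, T + 1):
            for g in arrivals.get(t, []):
                applied += 1
                x = x - g / (modulus * applied)  # step size 1/(modulus * k) for the k-th applied gradient
        return x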

Online Convex Optimization with Continuous Switching Constraint

no code implementations21 Mar 2021 Guanghui Wang, Yuanyu Wan, Tianbao Yang, Lijun Zhang

To control the switching cost, we introduce the problem of online convex optimization with continuous switching constraint, where the goal is to achieve a small regret given a budget on the overall switching cost.

Decision Making

Projection-free Distributed Online Learning with Strongly Convex Losses

no code implementations20 Mar 2021 Yuanyu Wan, Guanghui Wang, Lijun Zhang

Specifically, we first propose a distributed projection-free algorithm for strongly convex loss functions, which enjoys a better regret bound of $O(T^{2/3}\log T)$ with smaller communication complexity of $O(T^{1/3})$.

Non-stationary Linear Bandits Revisited

no code implementations9 Mar 2021 Peng Zhao, Lijun Zhang

Existing studies develop various algorithms and show that they enjoy an $\widetilde{O}(T^{2/3}(1+P_T)^{1/3})$ dynamic regret, where $T$ is the time horizon and $P_T$ is the path-length that measures the fluctuation of the evolving unknown parameter.

Revisiting Smoothed Online Learning

no code implementations13 Feb 2021 Lijun Zhang, Wei Jiang, Shiyin Lu, Tianbao Yang

Moreover, when the hitting cost is both convex and $\lambda$-quadratic growth, we reduce the competitive ratio to $1 + \frac{2}{\sqrt{\lambda}}$ by minimizing the weighted sum of the hitting cost and the switching cost.
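
A minimal sketch of the greedy rule this result refers to, assuming a generic hitting cost and a squared switching cost (the weight and solver here are illustrative, not the paper's):

    import numpy as np
    from scipy.optimize import minimize

    def smoothed_step(x_prev, hitting_cost, weight):
        """Pick x_t by minimizing a weighted sum of the hitting cost and the
        squared-distance switching cost to the previous decision."""
        objective = lambda x: hitting_cost(x) + weight * np.sum((x - x_prev) ** 2)
        return minimize(objective, x_prev).x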

NAST: Non-Autoregressive Spatial-Temporal Transformer for Time Series Forecasting

1 code implementation10 Feb 2021 Kai Chen, Guang Chen, Dan Xu, Lijun Zhang, Yuyao Huang, Alois Knoll

Although the Transformer has achieved breakthrough success in widespread domains, especially in Natural Language Processing (NLP), applying it to time series forecasting remains a great challenge.

Time Series, Time Series Forecasting

Probabilistic Robustness Analysis for DNNs based on PAC Learning

no code implementations25 Jan 2021 Renjue Li, Pengfei Yang, Cheng-Chao Huang, Bai Xue, Lijun Zhang

We use a linear template over the input pixels and learn the corresponding coefficients of the score difference function, based on a reduction to a linear programming (LP) problem.
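
To make the LP reduction concrete, here is a toy formulation (the names and objective are assumptions, not the paper's exact construction) that fits a linear template lying above sampled score differences:

    import numpy as np
    from scipy.optimize import linprog

    def fit_linear_template(X, diffs):
        """Fit coefficients c and offset b so that c @ x_i + b >= diffs[i] for
        every sampled input, while minimizing the template's average value."""
        X, diffs = np.asarray(X, dtype=float), np.asarray(diffs, dtype=float)
        n, d = X.shape
        # Variables are [c (d entries), b]; constraints: -(c @ x_i + b) <= -diffs[i].
        A_ub = -np.hstack([X, np.ones((n, 1))])
        b_ub = -diffs
        cost = np.concatenate([X.mean(axis=0), [1.0]])
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + 1))
        return res.x[:d], res.x[-1]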

SID-NISM: A Self-supervised Low-light Image Enhancement Framework

no code implementations16 Dec 2020 Lijun Zhang, Xiao Liu, Erik Learned-Miller, Hui Guan

When capturing images in low-light conditions, the images often suffer from low visibility, which not only degrades the visual aesthetics of images, but also significantly degenerates the performance of many computer vision algorithms.

Low-Light Image Enhancement

How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks?

no code implementations NeurIPS 2020 Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang

This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.

Projection-free Online Learning over Strongly Convex Sets

no code implementations16 Oct 2020 Yuanyu Wan, Lijun Zhang

In this paper, we study the special case of online learning over strongly convex sets, for which we first prove that OFW enjoys a better regret bound of $O(T^{2/3})$ for general convex losses.

Improving Neural Network Verification through Spurious Region Guided Refinement

1 code implementation15 Oct 2020 Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang, Jun Sun, Bai Xue, Lijun Zhang

The core idea is to make use of the obtained constraints of the abstraction to infer new bounds for the neurons.
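
For intuition only, the sketch below propagates an input box through one affine-plus-ReLU layer to get per-neuron bounds; the paper's contribution is tightening such bounds with constraints from the abstraction, which this toy code does not do:

    import numpy as np

    def interval_bounds(W, b, lb, ub):
        """Propagate the box [lb, ub] through an affine layer followed by ReLU,
        yielding (loose) neuron-wise lower and upper bounds."""
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        lower = W_pos @ lb + W_neg @ ub + b
        upper = W_pos @ ub + W_neg @ lb + b
        return np.maximum(lower, 0), np.maximum(upper, 0)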

How does Weight Correlation Affect the Generalisation Ability of Deep Neural Networks

1 code implementation12 Oct 2020 Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang

This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.

Approximate Multiplication of Sparse Matrices with Limited Space

no code implementations8 Sep 2020 Yuanyu Wan, Lijun Zhang

In this paper, we propose to reduce the time complexity by exploiting the sparsity of the input matrices.
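
As a generic limited-space baseline (not the algorithm proposed in the paper), one can approximate a matrix product by sampling a few column-row outer products:

    import numpy as np

    def sampled_product(A, B, k, rng=np.random.default_rng(0)):
        """Approximate A @ B with k column-row outer products sampled with
        probability proportional to their norms."""
        weights = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
        p = weights / weights.sum()
        idx = rng.choice(A.shape[1], size=k, p=p)
        return sum(np.outer(A[:, i], B[i, :]) / (k * p[i]) for i in idx)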

Dynamic Regret of Convex and Smooth Functions

no code implementations NeurIPS 2020 Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou

We investigate online convex optimization in non-stationary environments and choose the dynamic regret as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence.

Proving Non-Inclusion of Büchi Automata based on Monte Carlo Sampling

no code implementations5 Jul 2020 Yong Li, Andrea Turrini, Xuechao Sun, Lijun Zhang

While this is well-understood in the termination analysis of programs, this is not the case for the language inclusion analysis of Büchi automata, where research mainly focused on improving algorithms for proving language inclusion, with the search for counterexamples left to the expensive complementation operation.

Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions

no code implementations10 Jun 2020 Peng Zhao, Lijun Zhang

In this paper, we present an improved analysis for dynamic regret of strongly convex and smooth functions.

On the Power of Unambiguity in Büchi Complementation

no code implementations18 May 2020 Yong Li, Moshe Y. Vardi, Lijun Zhang

In this work, we exploit the power of unambiguity for the complementation problem of Büchi automata by utilizing reduced run directed acyclic graphs (DAGs) over infinite words, in which each vertex has at most one predecessor.

Nearly Optimal Regret for Stochastic Linear Bandits with Heavy-Tailed Payoffs

no code implementations28 Apr 2020 Bo Xue, Guanghui Wang, Yimu Wang, Lijun Zhang

In this paper, we study the problem of stochastic linear bandits with finite action sets.

Minimizing Dynamic Regret and Adaptive Regret Simultaneously

no code implementations6 Feb 2020 Lijun Zhang, Shiyin Lu, Tianbao Yang

To address this limitation, new performance measures, including dynamic regret and adaptive regret, have been proposed to guide the design of online algorithms.

Adaptive and Efficient Algorithms for Tracking the Best Expert

no code implementations5 Sep 2019 Shiyin Lu, Lijun Zhang

The first algorithm achieves a second-order tracking regret bound, which improves existing first-order bounds.

Stochastic Optimization for Non-convex Inf-Projection Problems

no code implementations ICML 2020 Yan Yan, Yi Xu, Lijun Zhang, Xiaoyu Wang, Tianbao Yang

In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is equal to the minimization of a joint function over another variable.

Stochastic Optimization

Bandit Convex Optimization in Non-stationary Environments

no code implementations29 Jul 2019 Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou

In this paper, we investigate BCO in non-stationary environments and choose the dynamic regret as the performance measure, which is defined as the difference between the cumulative loss incurred by the algorithm and that of any feasible comparator sequence.

Decision Making

Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions

no code implementations26 Jun 2019 Lijun Zhang, Guanghui Wang, Wei-Wei Tu, Zhi-Hua Zhou

Along this line of research, this paper presents the first universal algorithm for minimizing the adaptive regret of convex functions.

Multi-Objective Generalized Linear Bandits

no code implementations30 May 2019 Shiyin Lu, Guanghui Wang, Yao Hu, Lijun Zhang

In this paper, we study the multi-objective bandits (MOB) problem, where a learner repeatedly selects one arm to play and then receives a reward vector consisting of multiple objectives.

Multi-Armed Bandits

Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization

no code implementations15 May 2019 Guanghui Wang, Shiyin Lu, Lijun Zhang

In this paper, we study adaptive online convex optimization, and aim to design a universal algorithm that achieves optimal regret bounds for multiple common types of loss functions.

SAdam: A Variant of Adam for Strongly Convex Functions

1 code implementation ICLR 2020 Guanghui Wang, Shiyin Lu, Wei-Wei Tu, Lijun Zhang

In this paper, we give an affirmative answer by developing a variant of Adam (referred to as SAdam) which achieves a data-dependent $O(\log T)$ regret bound for strongly convex functions.
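
For reference, a generic Adam-style update is sketched below; SAdam modifies the second-moment term and step size to obtain the $O(\log T)$ bound, which this sketch does not reproduce:

    import numpy as np

    def adam_style_step(x, g, state, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
        """Generic Adam-style update with bias-corrected moment estimates."""
        m, v, t = state
        t += 1
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
        return x, (m, v, t)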

Prediction with Unpredictable Feature Evolution

no code implementations27 Apr 2019 Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou

Learning with feature evolution studies the scenario where the features of the data streams can evolve, i.e., old features vanish and new features emerge.

Matrix Completion

Adaptive Regret of Convex and Smooth Functions

no code implementations26 Apr 2019 Lijun Zhang, Tie-Yan Liu, Zhi-Hua Zhou

We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure.

Stochastic Primal-Dual Algorithms with Faster Convergence than $O(1/\sqrt{T})$ for Problems without Bilinear Structure

no code implementations23 Apr 2019 Yan Yan, Yi Xu, Qihang Lin, Lijun Zhang, Tianbao Yang

The main contribution of this paper is the design and analysis of new stochastic primal-dual algorithms that use a mixture of stochastic gradient updates and a logarithmic number of deterministic dual updates for solving a family of convex-concave problems with no bilinear structure assumed.

Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification

no code implementations26 Feb 2019 Jianlin Li, Pengfei Yang, Jiangchao Liu, Liqian Chen, Xiaowei Huang, Lijun Zhang

Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.

Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate

no code implementations27 Jan 2019 Lijun Zhang, Zhi-Hua Zhou

Finally, we emphasize that our proof is constructive and each risk bound is equipped with an efficient stochastic algorithm attaining that bound.

A Wasserstein GAN model with the total variational regularization

no code implementations3 Dec 2018 Lijun Zhang, Yu-Jin Zhang, Yongbin Gao

It is well known that the generative adversarial nets (GANs) are remarkably difficult to train.

Adaptive Online Learning in Dynamic Environments

no code implementations NeurIPS 2018 Lijun Zhang, Shiyin Lu, Zhi-Hua Zhou

In this paper, we study online convex optimization in dynamic environments, and aim to bound the dynamic regret with respect to any sequence of comparators.

Query-Efficient Black-Box Attack by Active Learning

no code implementations13 Sep 2018 Pengcheng Li, Jin-Feng Yi, Lijun Zhang

To conduct a black-box attack, a popular approach is to train a substitute model based on the information queried from the target DNN.

Active Learning, Adversarial Attack

Matrix Completion from Non-Uniformly Sampled Entries

no code implementations27 Jun 2018 Yuanyu Wan, Jin-Feng Yi, Lijun Zhang

Then, for each partially observed column, we recover it by finding a vector which lies in the recovered column space and consists of the observed entries.
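
The per-column recovery step described here amounts to a small least-squares problem; a minimal sketch, assuming the column-space basis U has already been estimated:

    import numpy as np

    def complete_column(U, observed_values, observed_rows):
        """Recover a column from its observed entries: find coefficients so that
        the column lies in the span of U and matches the observations in a
        least-squares sense."""
        coef, *_ = np.linalg.lstsq(U[observed_rows], observed_values, rcond=None)
        return U @ coef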

Matrix Completion

Fast Rates of ERM and Stochastic Approximation: Adaptive to Error Bound Conditions

no code implementations NeurIPS 2018 Mingrui Liu, Xiaoxuan Zhang, Lijun Zhang, Rong Jin, Tianbao Yang

Error bound conditions (EBC) are properties that characterize the growth of an objective function when a point is moved away from the optimal set.

An Image dehazing approach based on the airlight field estimation

no code implementations6 May 2018 Lijun Zhang, Yongbin Gao, Yu-Jin Zhang

This paper proposes a scheme for single image haze removal based on the airlight field (ALF) estimation.

Image Dehazing, Single Image Dehazing +1

$\ell_1$-regression with Heavy-tailed Distributions

no code implementations NeurIPS 2018 Lijun Zhang, Zhi-Hua Zhou

In this paper, we consider the problem of linear regression with heavy-tailed distributions.

VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning

1 code implementation26 Feb 2018 Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao

In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).
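
For context, one epoch of the standard SVRG scheme looks roughly as follows; VR-SGD is a variant of this template, and its specific modifications are not reflected in the sketch:

    import numpy as np

    def svrg_epoch(w, grads, lr, inner_steps, rng=np.random.default_rng(0)):
        """One SVRG-style epoch: compute a full gradient at a snapshot, then run
        variance-reduced stochastic updates; grads is a list of per-example
        gradient functions."""
        n = len(grads)
        w_snap = w.copy()
        full_grad = sum(g(w_snap) for g in grads) / n
        for _ in range(inner_steps):
            i = rng.integers(n)
            w = w - lr * (grads[i](w) - grads[i](w_snap) + full_grad)
        return w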

A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary Convex Regularizer

no code implementations9 Sep 2017 Tianbao Yang, Zhe Li, Lijun Zhang

In this paper, we present a simple analysis of fast rates with high probability of empirical minimization for stochastic composite optimization over a finite-dimensional bounded convex set with exponentially concave loss functions and an arbitrary convex regularizer.

Learning with Feature Evolvable Streams

no code implementations NeurIPS 2017 Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou

To benefit from the recovered features, we develop two ensemble methods.

Scalable Demand-Aware Recommendation

no code implementations NeurIPS 2017 Jinfeng Yi, Cho-Jui Hsieh, Kush Varshney, Lijun Zhang, Yao Li

In particular, for durable goods, time utility is a function of the inter-purchase duration within a product category, because consumers are unlikely to purchase two items in the same category in close temporal succession.

Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds

no code implementations7 Feb 2017 Lijun Zhang, Tianbao Yang, Rong Jin

First, we establish an $\widetilde{O}(d/n + \sqrt{F_*/n})$ risk bound when the random function is nonnegative, convex and smooth, and the expected function is Lipschitz continuous, where $d$ is the dimensionality of the problem, $n$ is the number of samples, and $F_*$ is the minimal risk.

Dynamic Regret of Strongly Adaptive Methods

no code implementations ICML 2018 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently.

Efficient Non-oblivious Randomized Reduction for Risk Minimization with Improved Excess Risk Guarantee

no code implementations6 Dec 2016 Yi Xu, Haiqin Yang, Lijun Zhang, Tianbao Yang

Previously, oblivious random-projection-based approaches that project high-dimensional features onto a random subspace have been used in practice to tackle the high-dimensionality challenge in machine learning.

Improved Dynamic Regret for Non-degenerate Functions

no code implementations NeurIPS 2017 Lijun Zhang, Tianbao Yang, Jin-Feng Yi, Rong Jin, Zhi-Hua Zhou

When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length.

A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates

no code implementations ICML 2017 Tianbao Yang, Qihang Lin, Lijun Zhang

In this paper, we develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition of the constraint function.

Metric Learning

Tracking Slowly Moving Clairvoyant: Optimal Dynamic Regret of Online Learning with True and Noisy Gradient

no code implementations16 May 2016 Tianbao Yang, Lijun Zhang, Rong Jin, Jin-Feng Yi

Secondly, we present a lower bound with noisy gradient feedback, and then show that optimal dynamic regret can be achieved under stochastic gradient feedback and two-point bandit feedback.

Sparse Learning for Large-scale and High-dimensional Data: A Randomized Convex-concave Optimization Approach

no code implementations12 Nov 2015 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

In this paper, we develop a randomized algorithm and theory for learning a sparse model from large-scale and high-dimensional data, which is usually formulated as an empirical risk minimization problem with a sparsity-inducing regularizer.

Sparse Learning

Stochastic Proximal Gradient Descent for Nuclear Norm Regularization

no code implementations5 Nov 2015 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

In this paper, we utilize stochastic optimization to reduce the space complexity of convex composite optimization with a nuclear norm regularizer, where the variable is a matrix of size $m \times n$.

Stochastic Optimization

Online Stochastic Linear Optimization under One-bit Feedback

no code implementations25 Sep 2015 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

In this paper, we study a special bandit setting of online stochastic linear optimization, where only one bit of information is revealed to the learner at each round.

Towards Making High Dimensional Distance Metric Learning Practical

no code implementations15 Sep 2015 Qi Qian, Rong Jin, Lijun Zhang, Shenghuo Zhu

In this work, we present a dual random projection framework for DML with high-dimensional data that explicitly addresses the limitation of dimensionality reduction for DML.

Dimensionality Reduction, Metric Learning

Fast Sparse Least-Squares Regression with Non-Asymptotic Guarantees

no code implementations18 Jul 2015 Tianbao Yang, Lijun Zhang, Qihang Lin, Rong Jin

In this paper, we study a fast approximation method for the large-scale, high-dimensional sparse least-squares regression problem by exploiting Johnson-Lindenstrauss (JL) transforms, which embed a set of high-dimensional vectors into a low-dimensional space.
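
A minimal illustration of the JL idea, assuming a dense Gaussian sketch rather than the transforms analyzed in the paper:

    import numpy as np

    def sketched_least_squares(X, y, sketch_dim, rng=np.random.default_rng(0)):
        """Solve a least-squares problem on a random sketch of the data, trading
        a small approximation error for a much smaller problem size."""
        S = rng.standard_normal((sketch_dim, X.shape[0])) / np.sqrt(sketch_dim)
        w, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
        return w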

Analysis of Nuclear Norm Regularization for Full-rank Matrix Completion

no code implementations26 Apr 2015 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

To the best of our knowledge, this is the first time such a relative bound has been proved for the regularized formulation of matrix completion.

Low-Rank Matrix Completion

Theory of Dual-sparse Regularized Randomized Reduction

no code implementations15 Apr 2015 Tianbao Yang, Lijun Zhang, Rong Jin, Shenghuo Zhu

In this paper, we study randomized reduction methods, which reduce high-dimensional features into a low-dimensional space by randomized methods (e.g., random projection, random hashing), for large-scale high-dimensional classification.

General Classification

Binary Excess Risk for Smooth Convex Surrogates

no code implementations7 Feb 2014 Mehrdad Mahdavi, Lijun Zhang, Rong Jin

In statistical learning theory, convex surrogates of the 0-1 loss are highly preferred because of the computational and theoretical virtues that convexity brings in.

Learning Theory

Mixed Optimization for Smooth Functions

no code implementations NeurIPS 2013 Mehrdad Mahdavi, Lijun Zhang, Rong Jin

It is well known that the optimal convergence rate for stochastic optimization of smooth functions is $O(1/\sqrt{T})$, which is the same as that for stochastic optimization of Lipschitz continuous convex functions.

Stochastic Optimization

Linear Convergence with Condition Number Independent Access of Full Gradients

no code implementations NeurIPS 2013 Lijun Zhang, Mehrdad Mahdavi, Rong Jin

For smooth and strongly convex optimization, the optimal iteration complexity of gradient-based algorithms is $O(\sqrt{\kappa}\log 1/\epsilon)$, where $\kappa$ is the condition number.

Beating the Minimax Rate of Active Learning with Prior Knowledge

no code implementations19 Nov 2013 Lijun Zhang, Mehrdad Mahdavi, Rong Jin

Under the assumption that the norm of the optimal classifier that minimizes the convex risk is available, our analysis shows that the introduction of the convex surrogate loss yields an exponential reduction in the label complexity even when the parameter $\kappa$ of the Tsybakov noise is larger than $1$.

Active Learning

Optimal Stochastic Strongly Convex Optimization with a Logarithmic Number of Projections

no code implementations19 Apr 2013 Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang

We consider stochastic strongly convex optimization with a complex inequality constraint.

Efficient Distance Metric Learning by Adaptive Sampling and Mini-Batch Stochastic Gradient Descent (SGD)

no code implementations3 Apr 2013 Qi Qian, Rong Jin, Jin-Feng Yi, Lijun Zhang, Shenghuo Zhu

Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive because in order to ensure that the solution is a PSD matrix, it has to, at every iteration, project the updated distance metric onto the PSD cone, an expensive operation.
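
The expensive step referred to here is the projection onto the PSD cone, which requires a full eigendecomposition; a minimal sketch of that projection:

    import numpy as np

    def project_psd(M):
        """Project a symmetric matrix onto the PSD cone by clipping negative
        eigenvalues; this per-iteration eigendecomposition is the cost that
        adaptive sampling and mini-batch SGD aim to reduce."""
        M = (M + M.T) / 2
        vals, vecs = np.linalg.eigh(M)
        return (vecs * np.maximum(vals, 0)) @ vecs.T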

Metric Learning

$O(\log T)$ Projections for Stochastic Optimization of Smooth and Strongly Convex Functions

no code implementations2 Apr 2013 Lijun Zhang, Tianbao Yang, Rong Jin, Xiaofei He

Traditional algorithms for stochastic optimization require projecting the solution at each iteration into a given domain to ensure its feasibility.

Stochastic Optimization
