no code implementations • ICML 2020 • Yuanyu Wan, Wei-Wei Tu, Lijun Zhang
To deal with complicated constraints via locally light computation in distributed online learning, a recent study presented a projection-free algorithm called distributed online conditional gradient (D-OCG) and achieved an $O(T^{3/4})$ regret bound, where $T$ is the number of prediction rounds.
no code implementations • 26 Jan 2023 • Dongyao Bi, Lijun Zhang, Kuize Zhang
With the help of equitable partitions of an STG, we study the structural properties of the smallest invariant dual subspaces containing a number of Boolean functions.
no code implementations • 23 Nov 2022 • Renjue Li, Tianhang Qin, Pengfei Yang, Cheng-Chao Huang, Youcheng Sun, Lijun Zhang
The safety properties proved in the resulting surrogate model apply to the original ADS with a probabilistic guarantee.
no code implementations • 8 Oct 2022 • Xiao Liu, Lijun Zhang, Hui Guan
Homophily and heterophily are intrinsic properties of graphs that describe whether two linked nodes share similar properties.
Ranked #3 on Node Classification on arXiv-year
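As a point of reference, edge homophily is commonly measured as the fraction of edges connecting nodes that share a label. The sketch below implements this standard measure; the helper name and toy graph are hypothetical, and the paper may use a different homophily metric.

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label (a standard
    homophily measure; hypothetical helper, not the paper's exact metric)."""
    edges = np.asarray(edges)          # shape (num_edges, 2)
    labels = np.asarray(labels)
    same = labels[edges[:, 0]] == labels[edges[:, 1]]
    return same.mean()

# Toy graph: nodes 0-3 with labels [0, 0, 1, 1]
print(edge_homophily([(0, 1), (1, 2), (2, 3)], [0, 0, 1, 1]))  # 2/3
```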
no code implementations • 21 Sep 2022 • Shuting Kang, Heng Guo, Lijun Zhang, Guangzhen Liu, Yunzhi Xue, Yanjun Wu
The bottleneck of the problem is how to model action sequences so that the effects of different action parameters in the scenario can be taken into account.
no code implementations • 18 Jul 2022 • Wei Jiang, Gang Li, Yibo Wang, Lijun Zhang, Tianbao Yang
The key issue is to track and estimate a sequence of $\mathbf g(\mathbf{w})=(g_1(\mathbf{w}), \ldots, g_m(\mathbf{w}))$ across iterations, where $\mathbf g(\mathbf{w})$ has $m$ blocks and it is only allowed to probe $\mathcal{O}(1)$ blocks to attain their stochastic values and Jacobians.
no code implementations • 2 May 2022 • Lijun Zhang, Wei Jiang, JinFeng Yi, Tianbao Yang
In this paper, we investigate an online prediction strategy named Discounted-Normal-Predictor (Kapralov and Panigrahy, 2010) for smoothed online convex optimization (SOCO), in which the learner needs to minimize not only the hitting cost but also the switching cost.
no code implementations • 11 Apr 2022 • Yuanyu Wan, Wei-Wei Tu, Lijun Zhang
The online Frank-Wolfe (OFW) method has gained much popularity for online convex optimization due to its projection-free property.
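For intuition, the projection-free property comes from replacing the projection step with a linear optimization step over the feasible set. Below is a minimal sketch of one such conditional-gradient update over the probability simplex; the step-size schedule and the stand-in gradient are illustrative assumptions, not the exact OFW recipe analyzed in the paper.

```python
import numpy as np

def conditional_gradient_step(x, grad, t):
    """One conditional-gradient (Frank-Wolfe) step over the probability simplex.
    The linear subproblem over the simplex is solved by picking the vertex with
    the smallest gradient coordinate; the step size eta_t is illustrative."""
    v = np.zeros_like(x)
    v[np.argmin(grad)] = 1.0          # argmin_{v in simplex} <grad, v>
    eta = 2.0 / (t + 2)               # illustrative step-size schedule
    return (1 - eta) * x + eta * v

x = np.ones(5) / 5
for t in range(10):
    grad = np.random.randn(5)         # stand-in for the gradient of f_t at x
    x = conditional_gradient_step(x, grad, t)
print(x, x.sum())                     # stays on the simplex, no projection needed
```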
1 code implementation • 10 Mar 2022 • Lijun Zhang, Xiao Liu, Hui Guan
Tree-structured multi-task architectures have been employed to jointly tackle multiple vision tasks in the context of multi-task learning (MTL).
1 code implementation • 24 Feb 2022 • Zi-Hao Qiu, Quanqi Hu, Yongjian Zhong, Lijun Zhang, Tianbao Yang
To the best of our knowledge, this is the first time that stochastic algorithms have been proposed to optimize NDCG with a provable convergence guarantee.
1 code implementation • 24 Feb 2022 • Zhuoning Yuan, Yuexin Wu, Zi-Hao Qiu, Xianzhi Du, Lijun Zhang, Denny Zhou, Tianbao Yang
In this paper, we study contrastive learning from an optimization perspective, aiming to analyze and address a fundamental issue of existing contrastive learning methods that either rely on a large batch size or a large dictionary of feature vectors.
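As background, the large-batch dependence stems from the standard InfoNCE-style objective, which contrasts each anchor against in-batch negatives. A minimal sketch of that standard loss (not the paper's proposed objective; helper name and temperature are assumptions) follows.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss. Each anchor's positive is the
    row with the same index; all other rows act as in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # cross-entropy with diagonal targets

rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(8, 16)), rng.normal(size=(8, 16))))
```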
no code implementations • 15 Feb 2022 • Wei Jiang, Bokun Wang, Yibo Wang, Lijun Zhang, Tianbao Yang
To address these limitations, we propose a Stochastic Multi-level Variance Reduction method (SMVR), which achieves the optimal sample complexity of $\mathcal{O}\left(1 / \epsilon^{3}\right)$ to find an $\epsilon$-stationary point for non-convex objectives.
no code implementations • 23 Jan 2022 • Gaojie Jin, Xinping Yi, Pengfei Yang, Lijun Zhang, Sven Schewe, Xiaowei Huang
While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking.
no code implementations • 29 Dec 2021 • Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou
We investigate online convex optimization in non-stationary environments and choose the \emph{dynamic regret} as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence.
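In formula form, the dynamic regret described here is

$$\mathrm{D\text{-}Regret}_T(\mathbf{u}_1,\ldots,\mathbf{u}_T) \;=\; \sum_{t=1}^{T} f_t(\mathbf{x}_t) \;-\; \sum_{t=1}^{T} f_t(\mathbf{u}_t),$$

where $\mathbf{x}_t$ is the decision of the online algorithm at round $t$, $f_t$ is the loss of round $t$, and $(\mathbf{u}_1, \ldots, \mathbf{u}_T)$ is any feasible comparator sequence.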
no code implementations • 19 Nov 2021 • Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei Zaharia
This paper investigates deep neural network (DNN) compression from the perspective of compactly representing and storing trained parameters.
1 code implementation • 25 Oct 2021 • Lijun Zhang, Xiao Liu, Hui Guan
The first challenge is to determine what parameters to share across tasks to optimize for both memory efficiency and task accuracy.
1 code implementation • ICCV 2021 • Fan Lu, Guang Chen, Yinlong Liu, Lijun Zhang, Sanqing Qu, Shu Liu, Rongqi Gu
Extensive experiments are conducted on two large-scale outdoor LiDAR point cloud datasets to demonstrate the high accuracy and efficiency of the proposed HRegNet.
no code implementations • 23 Jul 2021 • Lijun Zhang, Qizheng Yang, Xiao Liu, Hui Guan
One common sharing practice is to share the bottom layers of a deep neural network among domains while using separate top layers for each domain.
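A minimal PyTorch sketch of this common practice is shown below, with hypothetical layer sizes and domain count; it illustrates the shared-bottom pattern itself, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class SharedBottomModel(nn.Module):
    """Bottom encoder shared across domains plus one top head per domain
    (layer sizes and domain count are illustrative)."""
    def __init__(self, in_dim=128, hidden=64, num_domains=3, num_classes=10):
        super().__init__()
        self.shared_bottom = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_classes) for _ in range(num_domains)]
        )

    def forward(self, x, domain_id):
        return self.heads[domain_id](self.shared_bottom(x))

model = SharedBottomModel()
logits = model(torch.randn(4, 128), domain_id=1)   # a batch of 4 inputs from domain 1
print(logits.shape)                                # torch.Size([4, 10])
```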
no code implementations • 18 Jul 2021 • Bing Sun, Jun Sun, Ting Dai, Lijun Zhang
Our approach has been evaluated with multiple models trained on benchmark datasets and the experiment results show that our approach is effective and efficient.
no code implementations • 2 Jul 2021 • Guanghui Wang, Ming Yang, Lijun Zhang, Tianbao Yang
In this paper, we further improve the stochastic optimization of AUPRC by (i) developing novel stochastic momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution; and (ii) designing a novel family of stochastic adaptive methods with the same iteration complexity, which enjoy faster convergence in practice.
no code implementations • 5 Jun 2021 • Renjue Li, Hanwei Zhang, Pengfei Yang, Cheng-Chao Huang, Aimin Zhou, Bai Xue, Lijun Zhang
In this paper, we propose a framework of filter-based ensemble of deep neural networks (DNNs) to defend against adversarial attacks.
no code implementations • 8 May 2021 • Lijun Zhang, Guanghui Wang, JinFeng Yi, Tianbao Yang
In this paper, we propose a simple strategy for universal online convex optimization, which avoids these limitations.
no code implementations • 5 May 2021 • Zhishuai Guo, Quanqi Hu, Lijun Zhang, Tianbao Yang
Although numerous studies have proposed stochastic algorithms for solving these problems, they are limited in two perspectives: (i) their sample complexities are high, which do not match the state-of-the-art result for non-convex stochastic optimization; (ii) their algorithms are tailored to problems with only one lower-level problem.
no code implementations • 18 Apr 2021 • Daizhan Cheng, Lijun Zhang, Dongyao Bi
Then the invariant subspace of a Boolean control network (BCN) is also proposed.
2 code implementations • 7 Apr 2021 • Sanqing Qu, Guang Chen, Zhijun Li, Lijun Zhang, Fan Lu, Alois Knoll
Traditional methods mainly focus on separating foreground and background frames with only a single attention branch and class activation sequence.
Weakly Supervised Action Localization • Weakly-supervised Temporal Action Localization • +1
no code implementations • NeurIPS 2021 • Guanghui Wang, Yuanyu Wan, Tianbao Yang, Lijun Zhang
To control the switching cost, we introduce the problem of online convex optimization with continuous switching constraint, where the goal is to achieve a small regret given a budget on the \emph{overall} switching cost.
no code implementations • 21 Mar 2021 • Yuanyu Wan, Wei-Wei Tu, Lijun Zhang
Specifically, we first extend the delayed variant of OGD for strongly convex functions, and establish a better regret bound of $O(d\log T)$, where $d$ is the maximum delay.
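As an illustration of the delayed setting, the sketch below runs online gradient descent where each gradient is only applied a fixed number of rounds after it is generated; the $1/(\mu t)$ step size is the standard choice for $\mu$-strongly convex losses, and the exact delayed variant analyzed in the paper may differ.

```python
import numpy as np
from collections import deque

def delayed_ogd(grads_stream, dim, delay, mu=1.0):
    """Online gradient descent with delayed feedback: the gradient generated at
    round t is applied only once it arrives, `delay` rounds later."""
    x = np.zeros(dim)
    pending = deque()                       # gradients waiting to be applied
    for t, g in enumerate(grads_stream, start=1):
        pending.append(g)
        if len(pending) > delay:            # a delayed gradient arrives this round
            x -= (1.0 / (mu * t)) * pending.popleft()
        yield x.copy()

stream = (np.random.randn(3) for _ in range(20))
decisions = list(delayed_ogd(stream, dim=3, delay=2))
print(decisions[-1])
```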
no code implementations • 20 Mar 2021 • Yuanyu Wan, Guanghui Wang, Wei-Wei Tu, Lijun Zhang
In this paper, we propose an improved variant of D-OCG, namely D-BOCG, which can attain the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with even fewer communication rounds, $O(T^{1/3}(\log T)^{2/3})$, for strongly convex losses.
no code implementations • 9 Mar 2021 • Peng Zhao, Lijun Zhang
Existing studies develop various algorithms and show that they enjoy an $\widetilde{O}(T^{2/3}(1+P_T)^{1/3})$ dynamic regret, where $T$ is the time horizon and $P_T$ is the path-length that measures the fluctuation of the evolving unknown parameter.
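The path-length mentioned above is commonly defined (up to the choice of norm) as

$$P_T \;=\; \sum_{t=2}^{T} \big\|\theta_t - \theta_{t-1}\big\|_2,$$

where $\theta_t$ denotes the evolving unknown parameter at round $t$.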
no code implementations • NeurIPS 2021 • Lijun Zhang, Wei Jiang, Shiyin Lu, Tianbao Yang
Moreover, when the hitting cost is convex and satisfies $\lambda$-quadratic growth, we reduce the competitive ratio to $1 + \frac{2}{\sqrt{\lambda}}$ by minimizing the weighted sum of the hitting cost and the switching cost.
1 code implementation • 10 Feb 2021 • Kai Chen, Guang Chen, Dan Xu, Lijun Zhang, Yuyao Huang, Alois Knoll
Although the Transformer has achieved breakthrough success in a wide range of domains, especially Natural Language Processing (NLP), applying it to time series forecasting remains a great challenge.
1 code implementation • 25 Jan 2021 • Renjue Li, Pengfei Yang, Cheng-Chao Huang, Youcheng Sun, Bai Xue, Lijun Zhang
It is shown that DeepPAC outperforms the state-of-the-art statistical method PROVERO, and it achieves more practical robustness analysis than the formal verification tool ERAN.
no code implementations • 16 Dec 2020 • Lijun Zhang, Xiao Liu, Erik Learned-Miller, Hui Guan
Images captured in low-light conditions often suffer from low visibility, which not only degrades their visual aesthetics but also significantly impairs the performance of many computer vision algorithms.
no code implementations • 7 Dec 2020 • Zhaoqiang Chen, Qun Chen, Youcef Nafa, Tianyi Duan, Wei Pan, Lijun Zhang, Zhanhuai Li
Built on recent advances in risk analysis for ER, the proposed approach first trains a deep model on labeled training data, and then fine-tunes it by minimizing its estimated misprediction risk on unlabeled target data.
no code implementations • NeurIPS 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
no code implementations • 16 Oct 2020 • Yuanyu Wan, Lijun Zhang
In this paper, we study the special case of online learning over strongly convex sets, for which we first prove that OFW enjoys a better regret bound of $O(T^{2/3})$ for general convex losses.
1 code implementation • 15 Oct 2020 • Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang, Jun Sun, Bai Xue, Lijun Zhang
The core idea is to make use of the obtained constraints of the abstraction to infer new bounds for the neurons.
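For context, "bounds for the neurons" refers to interval enclosures of each neuron's value over the input region. The sketch below is the standard interval bound computation for a single affine + ReLU layer; it only illustrates what such bounds are, and does not capture the paper's refinement, which uses the constraints obtained from the abstraction.

```python
import numpy as np

def interval_bounds(W, b, lb, ub):
    """Standard interval bound propagation through one affine + ReLU layer:
    given elementwise input bounds [lb, ub], return output bounds."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    pre_lb = W_pos @ lb + W_neg @ ub + b    # lower bound of the pre-activation
    pre_ub = W_pos @ ub + W_neg @ lb + b    # upper bound of the pre-activation
    return np.maximum(pre_lb, 0), np.maximum(pre_ub, 0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.2])
lo, hi = interval_bounds(W, b, lb=np.array([-1.0, -1.0]), ub=np.array([1.0, 1.0]))
print(lo, hi)
```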
1 code implementation • 12 Oct 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
no code implementations • 8 Sep 2020 • Yuanyu Wan, Lijun Zhang
In this paper, we propose to reduce the time complexity by exploiting the sparsity of the input matrices.
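To see why sparsity helps, a generic example (not the paper's specific algorithm) is that a sparse matrix-vector product costs time proportional to the number of non-zero entries rather than to the full matrix size:

```python
import numpy as np
from scipy import sparse

# Multiplying a sparse matrix by a vector touches only the non-zero entries,
# so the cost scales with nnz(A) rather than with the full m x n size.
rng = np.random.default_rng(0)
A = sparse.random(10_000, 10_000, density=1e-4, format="csr", random_state=0)
x = rng.normal(size=10_000)

y = A @ x                      # roughly O(nnz(A)) work instead of O(m * n)
print(A.nnz, y.shape)
```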
no code implementations • NeurIPS 2020 • Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou
We investigate online convex optimization in non-stationary environments and choose the dynamic regret as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence.
no code implementations • 5 Jul 2020 • Yong Li, Andrea Turrini, Xuechao Sun, Lijun Zhang
While this is well-understood in the termination analysis of programs, this is not the case for the language inclusion analysis of Büchi automata, where research mainly focused on improving algorithms for proving language inclusion, with the search for counterexamples left to the expensive complementation operation.
no code implementations • 10 Jun 2020 • Peng Zhao, Lijun Zhang
In this paper, we present an improved analysis for dynamic regret of strongly convex and smooth functions.
no code implementations • 18 May 2020 • Yong Li, Moshe Y. Vardi, Lijun Zhang
In this work, we exploit the power of \emph{unambiguity} for the complementation problem of Büchi automata by utilizing reduced run directed acyclic graphs (DAGs) over infinite words, in which each vertex has at most one predecessor.
no code implementations • 28 Apr 2020 • Bo Xue, Guanghui Wang, Yimu Wang, Lijun Zhang
In this paper, we study the problem of stochastic linear bandits with finite action sets.
no code implementations • 6 Feb 2020 • Lijun Zhang, Shiyin Lu, Tianbao Yang
To address this limitation, new performance measures, including dynamic regret and adaptive regret, have been proposed to guide the design of online algorithms.
1 code implementation • 15 Nov 2019 • Lijun Zhang, Srinath Nizampatnam, Ahana Gangopadhyay, Marcos V. Conde
The model performance is further improved by constructing multiple sets of attention networks.
no code implementations • 5 Sep 2019 • Shiyin Lu, Lijun Zhang
The first algorithm achieves a second-order tracking regret bound, which improves existing first-order bounds.
no code implementations • ICML 2020 • Yan Yan, Yi Xu, Lijun Zhang, Xiaoyu Wang, Tianbao Yang
In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is the minimization of a joint function over another variable.
no code implementations • 29 Jul 2019 • Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou
In this paper, we investigate BCO in non-stationary environments and choose the \emph{dynamic regret} as the performance measure, which is defined as the difference between the cumulative loss incurred by the algorithm and that of any feasible comparator sequence.
no code implementations • NeurIPS 2021 • Lijun Zhang, Guanghui Wang, Wei-Wei Tu, Zhi-Hua Zhou
Along this line of research, this paper presents the first universal algorithm for minimizing the adaptive regret of convex functions.
no code implementations • 30 May 2019 • Shiyin Lu, Guanghui Wang, Yao Hu, Lijun Zhang
In this paper, we study the multi-objective bandits (MOB) problem, where a learner repeatedly selects one arm to play and then receives a reward vector consisting of multiple objectives.
no code implementations • 28 May 2019 • Pengcheng Li, Jin-Feng Yi, Bo-Wen Zhou, Lijun Zhang
In this paper, we improve the robustness of DNNs by utilizing techniques of Distance Metric Learning.
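For context, a standard building block of distance metric learning is the triplet loss, sketched below; this is a generic formulation and not necessarily the exact training objective used in the paper.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward a same-class example and
    push it away from a different-class one by at least `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

emb = torch.randn(8, 32), torch.randn(8, 32), torch.randn(8, 32)
print(triplet_loss(*emb))
```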
no code implementations • 15 May 2019 • Guanghui Wang, Shiyin Lu, Lijun Zhang
In this paper, we study adaptive online convex optimization, and aim to design a universal algorithm that achieves optimal regret bounds for multiple common types of loss functions.
1 code implementation • ICLR 2020 • Guanghui Wang, Shiyin Lu, Wei-Wei Tu, Lijun Zhang
In this paper, we give an affirmative answer by developing a variant of Adam (referred to as SAdam) which achieves a data-dependent $O(\log T)$ regret bound for strongly convex functions.
no code implementations • 27 Apr 2019 • Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou
Learning with feature evolution studies the scenario where the features of the data streams can evolve, i.e., old features vanish and new features emerge.
no code implementations • 26 Apr 2019 • Lijun Zhang, Tie-Yan Liu, Zhi-Hua Zhou
We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure.
no code implementations • 23 Apr 2019 • Yan Yan, Yi Xu, Qihang Lin, Lijun Zhang, Tianbao Yang
The main contribution of this paper is the design and analysis of new stochastic primal-dual algorithms that use a mixture of stochastic gradient updates and a logarithmic number of deterministic dual updates for solving a family of convex-concave problems with no bilinear structure assumed.
no code implementations • 26 Feb 2019 • Jianlin Li, Pengfei Yang, Jiangchao Liu, Liqian Chen, Xiaowei Huang, Lijun Zhang
Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.
no code implementations • 27 Jan 2019 • Lijun Zhang, Zhi-Hua Zhou
Finally, we emphasize that our proof is constructive and each risk bound is equipped with an efficient stochastic algorithm attaining that bound.
no code implementations • 3 Dec 2018 • Lijun Zhang, Yu-Jin Zhang, Yongbin Gao
It is well known that generative adversarial nets (GANs) are remarkably difficult to train.
no code implementations • NeurIPS 2018 • Lijun Zhang, Zhi-Hua Zhou
In this paper, we consider the problem of linear regression with heavy-tailed distributions.
no code implementations • NeurIPS 2018 • Lijun Zhang, Shiyin Lu, Zhi-Hua Zhou
In this paper, we study online convex optimization in dynamic environments, and aim to bound the dynamic regret with respect to any sequence of comparators.
no code implementations • 13 Sep 2018 • Pengcheng Li, Jin-Feng Yi, Lijun Zhang
To conduct black-box attack, a popular approach aims to train a substitute model based on the information queried from the target DNN.
no code implementations • 27 Jun 2018 • Yuanyu Wan, Jin-Feng Yi, Lijun Zhang
Then, for each partially observed column, we recover it by finding a vector which lies in the recovered column space and consists of the observed entries.
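A minimal sketch of this recovery step, assuming the recovered column space is given by a basis matrix $U$ (names and shapes are illustrative), is the following least-squares fit on the observed entries:

```python
import numpy as np

def recover_column(U, observed_idx, observed_vals):
    """Find the vector in span(U) whose observed entries match the measurements
    by least squares on the observed rows, then read off the full column."""
    coeffs, *_ = np.linalg.lstsq(U[observed_idx], observed_vals, rcond=None)
    return U @ coeffs

rng = np.random.default_rng(0)
U = rng.normal(size=(50, 3))                # recovered column space (rank 3)
true_col = U @ np.array([1.0, -2.0, 0.5])   # a column lying in span(U)
idx = rng.choice(50, size=10, replace=False)
est = recover_column(U, idx, true_col[idx])
print(np.allclose(est, true_col))           # True: the column is recovered exactly
```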
no code implementations • NeurIPS 2018 • Mingrui Liu, Xiaoxuan Zhang, Lijun Zhang, Rong Jin, Tianbao Yang
Error bound conditions (EBC) are properties that characterize the growth of an objective function when a point is moved away from the optimal set.
no code implementations • 6 May 2018 • Lijun Zhang, Yongbin Gao, Yu-Jin Zhang
This paper proposes a scheme for single image haze removal based on the airlight field (ALF) estimation.
no code implementations • NeurIPS 2018 • Lijun Zhang, Zhi-Hua Zhou
In this paper, we consider the problem of linear regression with heavy-tailed distributions.
1 code implementation • 26 Feb 2018 • Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao
In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).
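For background, the variance-reduction idea that VR-SGD builds on can be sketched as the classic SVRG loop below (least-squares example with illustrative parameters; VR-SGD's specific modifications, such as its choice of snapshot point, are omitted):

```python
import numpy as np

def svrg(A, b, x0, n_epochs=10, m=None, lr=0.02):
    """Minimal SVRG-style loop for the least-squares objective 0.5*||Ax - b||^2 / n."""
    n = A.shape[0]
    m = m or 2 * n
    x = x0.copy()
    for _ in range(n_epochs):
        snapshot = x.copy()
        full_grad = A.T @ (A @ snapshot - b) / n       # full gradient at the snapshot
        for _ in range(m):
            i = np.random.randint(n)
            gi = A[i] * (A[i] @ x - b[i])              # stochastic gradient at x
            gi_snap = A[i] * (A[i] @ snapshot - b[i])  # same sample at the snapshot
            x -= lr * (gi - gi_snap + full_grad)       # variance-reduced step
    return x

rng = np.random.default_rng(0)
A, x_true = rng.normal(size=(100, 5)), rng.normal(size=5)
b = A @ x_true
print(np.linalg.norm(svrg(A, b, np.zeros(5)) - x_true))   # close to 0
```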
no code implementations • 9 Sep 2017 • Tianbao Yang, Zhe Li, Lijun Zhang
In this paper, we present a simple analysis of fast rates with high probability of empirical minimization for stochastic composite optimization over a finite-dimensional bounded convex set with exponentially concave loss functions and an arbitrary convex regularization.
no code implementations • NeurIPS 2017 • Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou
To benefit from the recovered features, we develop two ensemble methods.
no code implementations • NeurIPS 2017 • Jinfeng Yi, Cho-Jui Hsieh, Kush Varshney, Lijun Zhang, Yao Li
In particular, for durable goods, time utility is a function of the inter-purchase duration within a product category, because consumers are unlikely to purchase two items in the same category in close temporal succession.
no code implementations • 7 Feb 2017 • Lijun Zhang, Tianbao Yang, Rong Jin
First, we establish an $\widetilde{O}(d/n + \sqrt{F_*/n})$ risk bound when the random function is nonnegative, convex and smooth, and the expected function is Lipschitz continuous, where $d$ is the dimensionality of the problem, $n$ is the number of samples, and $F_*$ is the minimal risk.
no code implementations • ICML 2018 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently.
no code implementations • 6 Dec 2016 • Yi Xu, Haiqin Yang, Lijun Zhang, Tianbao Yang
Previously, oblivious random projection based approaches that project high dimensional features onto a random subspace have been used in practice for tackling high-dimensionality challenge in machine learning.
no code implementations • NeurIPS 2017 • Lijun Zhang, Tianbao Yang, Jin-Feng Yi, Rong Jin, Zhi-Hua Zhou
When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length.
no code implementations • ICML 2017 • Tianbao Yang, Qihang Lin, Lijun Zhang
In this paper, we develop projection reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition of the constraint function.
no code implementations • 16 May 2016 • Tianbao Yang, Lijun Zhang, Rong Jin, Jin-Feng Yi
Second, we present a lower bound with noisy gradient feedback and then show that optimal dynamic regret can be achieved under stochastic gradient feedback and two-point bandit feedback.
no code implementations • 12 Nov 2015 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
In this paper, we develop a randomized algorithm and theory for learning a sparse model from large-scale and high-dimensional data, which is usually formulated as an empirical risk minimization problem with a sparsity-inducing regularizer.
no code implementations • 5 Nov 2015 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
In this paper, we utilize stochastic optimization to reduce the space complexity of convex composite optimization with a nuclear norm regularizer, where the variable is a matrix of size $m \times n$.
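For context, composite objectives with a nuclear norm regularizer involve the proximal step below (singular-value soft-thresholding), whose full-matrix SVD is exactly the kind of memory-heavy operation the paper aims to avoid; this is standard background, not the paper's algorithm.

```python
import numpy as np

def prox_nuclear_norm(X, tau):
    """Proximal operator of tau * ||X||_*, i.e. singular-value soft-thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X = np.random.default_rng(0).normal(size=(6, 4))
print(np.linalg.matrix_rank(prox_nuclear_norm(X, tau=1.5)))   # thresholding typically lowers the rank
```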
no code implementations • 25 Sep 2015 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
In this paper, we study a special bandit setting of online stochastic linear optimization, where only one-bit of information is revealed to the learner at each round.
no code implementations • 15 Sep 2015 • Qi Qian, Rong Jin, Lijun Zhang, Shenghuo Zhu
In this work, we present a dual random projection framework for DML with high-dimensional data that explicitly addresses the limitation of dimensionality reduction for DML.
no code implementations • 18 Jul 2015 • Tianbao Yang, Lijun Zhang, Qihang Lin, Rong Jin
In this paper, we study a fast approximation method for {\it large-scale high-dimensional} sparse least-squares regression problem by exploiting the Johnson-Lindenstrauss (JL) transforms, which embed a set of high-dimensional vectors into a low-dimensional space.
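A minimal sketch-and-solve illustration is below; it uses a dense Gaussian projection for simplicity, whereas the paper studies sparse problems and specific fast JL transforms, so treat it only as a generic example of the approach.

```python
import numpy as np

def sketched_least_squares(A, b, sketch_size):
    """Sketch-and-solve least squares: embed the rows with a random Gaussian
    JL-style projection S and solve the smaller problem min ||S(Ax - b)||^2."""
    n = A.shape[0]
    S = np.random.default_rng(0).normal(size=(sketch_size, n)) / np.sqrt(sketch_size)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

rng = np.random.default_rng(1)
A, x_true = rng.normal(size=(5000, 20)), rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=5000)
x_hat = sketched_least_squares(A, b, sketch_size=200)
print(np.linalg.norm(x_hat - x_true))   # small approximation error
```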
no code implementations • 4 May 2015 • Tianbao Yang, Lijun Zhang, Rong Jin, Shenghuo Zhu
In this paper, we consider the problem of column subset selection.
no code implementations • 26 Apr 2015 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
To the best of our knowledge, this is the first time such a relative bound has been proved for the regularized formulation of matrix completion.
no code implementations • 15 Apr 2015 • Tianbao Yang, Lijun Zhang, Rong Jin, Shenghuo Zhu
In this paper, we study randomized reduction methods, which reduce high-dimensional features into low-dimensional space by randomized methods (e.g., random projection, random hashing), for large-scale high-dimensional classification.
no code implementations • 7 Feb 2014 • Mehrdad Mahdavi, Lijun Zhang, Rong Jin
In statistical learning theory, convex surrogates of the 0-1 loss are highly preferred because of the computational and theoretical virtues that convexity brings in.
no code implementations • NeurIPS 2013 • Mehrdad Mahdavi, Lijun Zhang, Rong Jin
It is well known that the optimal convergence rate for stochastic optimization of smooth functions is $O(1/\sqrt{T})$, which is the same as that for stochastic optimization of Lipschitz continuous convex functions.
no code implementations • NeurIPS 2013 • Lijun Zhang, Mehrdad Mahdavi, Rong Jin
For smooth and strongly convex optimization, the optimal iteration complexity of the gradient-based algorithm is $O(\sqrt{\kappa}\log 1/\epsilon)$, where $\kappa$ is the condition number.
no code implementations • 19 Nov 2013 • Lijun Zhang, Mehrdad Mahdavi, Rong Jin
Under the assumption that the norm of the optimal classifier that minimizes the convex risk is available, our analysis shows that the introduction of the convex surrogate loss yields an exponential reduction in the label complexity even when the parameter $\kappa$ of the Tsybakov noise is larger than $1$.
no code implementations • 19 Apr 2013 • Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang
We consider stochastic strongly convex optimization with a complex inequality constraint.
no code implementations • 3 Apr 2013 • Qi Qian, Rong Jin, Jin-Feng Yi, Lijun Zhang, Shenghuo Zhu
Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive because, to ensure that the solution is a PSD matrix, it has to project the updated distance metric onto the PSD cone at every iteration, an expensive operation.
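For reference, the PSD projection in question is the eigenvalue-clipping operation sketched below, whose full eigendecomposition (cubic in the dimension) is the per-iteration cost being avoided.

```python
import numpy as np

def project_psd(M):
    """Project a matrix onto the PSD cone: symmetrize, eigendecompose,
    and clip negative eigenvalues to zero."""
    M = (M + M.T) / 2
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

M = np.random.default_rng(0).normal(size=(4, 4))
P = project_psd(M)
print(np.all(np.linalg.eigvalsh(P) >= -1e-10))   # True: the projected metric is PSD
```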
no code implementations • 2 Apr 2013 • Lijun Zhang, Tianbao Yang, Rong Jin, Xiaofei He
Traditional algorithms for stochastic optimization require projecting the solution at each iteration into a given domain to ensure its feasibility.
no code implementations • 13 Nov 2012 • Lijun Zhang, Mehrdad Mahdavi, Rong Jin, Tianbao Yang, Shenghuo Zhu
Random projection has been widely used in data classification.