no code implementations • 28 Aug 2023 • Myung Cho, Lifeng Lai, Weiyu Xu
In this paper, we investigate the impact of imbalanced data on the convergence of distributed dual coordinate ascent in a tree network for solving an empirical loss minimization problem in distributed machine learning.
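For context, the dual coordinate ascent family the paper builds on can be illustrated by its single-machine form, stochastic dual coordinate ascent (SDCA) for ridge regression. This is a generic sketch of that building block, not the tree-network algorithm from the paper; all names are illustrative.

```python
import numpy as np

def sdca_ridge(X, y, lam, epochs=200, seed=0):
    """Stochastic dual coordinate ascent for the empirical loss minimization
    min_w 1/(2n) * sum_i (x_i @ w - y_i)^2 + lam/2 * ||w||^2."""
    n, d = X.shape
    alpha = np.zeros(n)          # dual variables, one per sample
    w = np.zeros(d)              # primal iterate, kept equal to X.T @ alpha / (lam * n)
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # closed-form coordinate maximization of the dual for squared loss
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)
    return w
```

At the dual optimum, `w` coincides with the primal ridge solution; distributed variants split the dual coordinates across machines and aggregate the resulting updates.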
no code implementations • 3 Aug 2023 • Puning Zhao, Lifeng Lai
A modification of the original $Q$-learning method, which estimates $Q$ values with nearest neighbors, was proposed in (Shah and Xie, 2018).
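A minimal sketch of the nearest-neighbor value-estimation idea (illustrative only, not the exact estimator of Shah and Xie, 2018): $Q(s, a)$ is approximated by averaging the values recorded at the $k$ closest previously-visited states for the same action.

```python
import numpy as np

class NearestNeighborQ:
    """Approximate Q(s, a) by averaging the values stored at the
    k nearest previously-visited states for the same action."""
    def __init__(self, k=3):
        self.k = k
        self.memory = {}                      # action -> (list of states, list of values)

    def update(self, state, action, value):
        states, values = self.memory.setdefault(action, ([], []))
        states.append(np.asarray(state, dtype=float))
        values.append(float(value))

    def estimate(self, state, action):
        states, values = self.memory.get(action, ([], []))
        if not states:
            return 0.0                        # no data for this action yet
        dists = [np.linalg.norm(np.asarray(state, dtype=float) - s) for s in states]
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean([values[i] for i in nearest]))
```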
no code implementations • 15 Jul 2023 • Guanlin Liu, Zhihan Zhou, Han Liu, Lifeng Lai
Robust reinforcement learning (RL) aims to find a policy that optimizes the worst-case performance in the face of uncertainties.
no code implementations • 4 Nov 2022 • Yulu Jin, Lifeng Lai
In this paper, we take a first step towards answering the question of how to design fair machine learning algorithms that are robust to adversarial attacks.
no code implementations • 8 Feb 2022 • Minhui Huang, Xuxing Chen, Kaiyi Ji, Shiqian Ma, Lifeng Lai
Moreover, we propose an inexact NEgative-curvature-Originated-from-Noise Algorithm (iNEON), a pure first-order algorithm that can escape saddle points and find a local minimum of stochastic bilevel optimization problems.
no code implementations • 10 Dec 2021 • Guanlin Liu, Lifeng Lai
We show that, in both white-box and black-box settings, the proposed attack schemes can force the LinUCB agent to pull a target arm very frequently while incurring only logarithmic cost.
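For reference, a compact sketch of disjoint LinUCB, the contextual-bandit algorithm these attacks target (this is the generic learner, not the attack scheme; parameter names are illustrative).

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: per-arm ridge regression plus an upper-confidence bonus."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # regularized Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # feature-weighted reward sums

    def choose(self, contexts):
        """contexts[a] is the feature vector of arm a; returns the arm index."""
        scores = []
        for a, x in enumerate(contexts):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]                    # ridge estimate for arm a
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

A reward-poisoning adversary perturbs the `reward` argument before `update` is called, which is what lets it steer the arm chosen by `choose`.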
no code implementations • NeurIPS 2021 • Guanlin Liu, Lifeng Lai
In this paper, we introduce a new class of attacks named action poisoning attacks, where an adversary can change the action signal selected by the agent.
no code implementations • 29 Sep 2021 • Minhui Huang, Shiqian Ma, Lifeng Lai
This paper studies the equitable and optimal transport (EOT) problem, which has many applications, such as fair division and optimal transport with multiple agents.
no code implementations • 30 Mar 2021 • Puning Zhao, Lifeng Lai
In this paper, we analyze the continuous armed bandit problems for nonconvex cost functions under certain smoothness and sublevel set assumptions.
no code implementations • 5 Feb 2021 • Minhui Huang, Shiqian Ma, Lifeng Lai
One of the popular solution methods for this task is to compute the barycenter of the probability measures under the Wasserstein metric.
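In one dimension with quadratic cost, the Wasserstein barycenter of empirical measures with equally many atoms has a closed form: average the quantile functions, i.e., the sorted sample vectors. A sketch of that special case (the general multi-dimensional problem the paper addresses is much harder):

```python
import numpy as np

def wasserstein_barycenter_1d(samples, weights=None):
    """W2 barycenter of 1-D empirical measures with equal sample sizes:
    the weighted average of the sorted sample vectors (quantile averaging)."""
    samples = [np.sort(np.asarray(s, dtype=float)) for s in samples]
    n = len(samples[0])
    assert all(len(s) == n for s in samples), "equal sample sizes required"
    if weights is None:
        weights = np.full(len(samples), 1.0 / len(samples))
    return sum(w * s for w, s in zip(weights, samples))
```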
no code implementations • 9 Dec 2020 • Minhui Huang, Shiqian Ma, Lifeng Lai
We show that the complexity of arithmetic operations for RBCD to obtain an $\epsilon$-stationary point is $O(\epsilon^{-3})$.
no code implementations • 20 Oct 2020 • Fuwei Li, Lifeng Lai, Shuguang Cui
We formulate the modification strategy of the adversary as a bi-level optimization problem.
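A toy illustration of the bi-level structure (illustrative, not the paper's algorithm): the inner problem is ordinary least squares, which has a closed form, so the adversary can run gradient ascent on a single poisoned response `y_p` to push a chosen regression coefficient upward.

```python
import numpy as np

def poison_response(X, y, x_p, target=0, lr=0.5, steps=50):
    """Outer level: gradient ascent on the poisoned response y_p to increase
    w[target]. Inner level: closed-form OLS on the poisoned dataset."""
    y_p = 0.0
    Xt = np.vstack([X, x_p])
    M = np.linalg.inv(Xt.T @ Xt)        # fixed, since the feature x_p is held constant
    grad = (M @ x_p)[target]            # d w[target] / d y_p (w is linear in y_p)
    for _ in range(steps):
        y_p += lr * grad                # attacker's ascent step on the outer objective
    yt = np.append(y, y_p)
    w = M @ Xt.T @ yt                   # inner OLS solution on the poisoned data
    return w, y_p
```

Because the inner solution is linear in `y_p`, the gradient is constant here; with constraints on the poisoning (as in the paper's formulation) the outer problem becomes genuinely nontrivial.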
no code implementations • 30 Sep 2020 • Puning Zhao, Lifeng Lai
We show that kNN density estimation is minimax optimal under both $\ell_1$ and $\ell_\infty$ criteria, if the support set is known.
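The kNN density estimator analyzed here has a simple form: the estimate at $x$ is $k$ divided by $n$ times the volume of the ball reaching the $k$-th nearest sample. A 1-D sketch (the paper's analysis covers general dimension):

```python
import numpy as np

def knn_density_1d(x, samples, k):
    """1-D k-NN density estimate: f(x) = k / (n * 2 * r_k),
    where r_k is the distance from x to its k-th nearest sample."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    r_k = np.sort(np.abs(samples - x))[k - 1]
    return k / (n * 2.0 * r_k)
```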
no code implementations • 18 Aug 2020 • Minhui Huang, Shiqian Ma, Lifeng Lai
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
no code implementations • 29 Feb 2020 • Fuwei Li, Lifeng Lai, Shuguang Cui
In this paper, we investigate how to manipulate the coefficients obtained via linear regression by adding carefully designed poisoning data points to the dataset or by modifying the original data points.
no code implementations • 26 Feb 2020 • Puning Zhao, Lifeng Lai
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
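One standard sample-based approach, sketched here for 1-D data, is the nearest-neighbor divergence estimator, which compares each point's $k$-th nearest-neighbor distance within its own sample to that within the other sample (a generic sketch; the paper's estimator and analysis may differ).

```python
import numpy as np

def knn_kl_divergence(x, y, k=5):
    """k-NN estimator of KL(P || Q) from 1-D samples x ~ P and y ~ Q:
    (1/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1))."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    total = 0.0
    for xi in x:
        rho = np.sort(np.abs(x - xi))[k]        # k-th NN within x (index 0 is xi itself)
        nu = np.sort(np.abs(y - xi))[k - 1]     # k-th NN within y
        total += np.log(nu / rho)
    return total / n + np.log(m / (n - 1))
```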
no code implementations • 19 Feb 2020 • Guanlin Liu, Lifeng Lai
To defend against this class of attacks, we introduce a novel algorithm that is robust to action-manipulation attacks when an upper bound for the total attack cost is given.
no code implementations • 22 Oct 2019 • Puning Zhao, Lifeng Lai
For both classification and regression problems, existing works have shown that, if the distribution of the feature vector has bounded support and the probability density function is bounded away from zero in its support, the convergence rate of the standard kNN method, in which k is the same for all test samples, is minimax optimal.
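The "standard kNN method" referred to here, with one global $k$ for all test samples, can be sketched as follows for regression (the paper's contribution is precisely about relaxing this fixed-$k$ choice).

```python
import numpy as np

def knn_regress(x_query, X, y, k):
    """Standard k-NN regression with a single global k: average the labels
    of the k training points nearest to the query."""
    dists = np.abs(np.asarray(X, dtype=float) - x_query)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(np.asarray(y, dtype=float)[nearest]))
```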
no code implementations • 17 Aug 2019 • Fuwei Li, Lifeng Lai, Shuguang Cui
We first characterize the optimal rank-one attack strategy that maximizes the subspace distance between the subspace learned from the original data matrix and that learned from the modified data matrix.
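One common way to quantify the subspace distance used in such analyses is through principal angles between the two learned subspaces; the sketch below uses the sine of the largest principal angle (the paper's exact metric may differ).

```python
import numpy as np

def subspace_distance(U, V):
    """Distance between the column spans of orthonormal U and V,
    measured as the sine of the largest principal angle."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)   # cosines of principal angles
    s = np.clip(s, -1.0, 1.0)
    return float(np.sqrt(1.0 - np.min(s) ** 2))
```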
no code implementations • 27 Mar 2019 • Erhan Bayraktar, Lifeng Lai
In this paper, we investigate the adversarial robustness of multivariate $M$-Estimators.
no code implementations • 31 Dec 2018 • Weiyu Xu, Lifeng Lai, Amin Khajehnejad
In this paper, we study the inaccuracies of opinion polls in the 2016 election through the lens of information theory.
no code implementations • 2 Dec 2018 • Jun Geng, Lifeng Lai
In the considered best action identification problem, instead of minimizing the cumulative regret as done in existing works, the learner aims to obtain an accurate estimate of the underlying parameter based on his action and reward sequences.
no code implementations • 27 Oct 2018 • Puning Zhao, Lifeng Lai
Existing work has analyzed the convergence rate of this estimator for random variables whose densities are bounded away from zero on their support.
no code implementations • 14 Mar 2017 • Myung Cho, Lifeng Lai, Weiyu Xu
Additionally, we show that adapting the number of local and global iterations to network communication delays in the distributed dual coordinate ascent algorithm can improve its convergence speed.
no code implementations • 18 Sep 2016 • Mostafa El Gamal, Lifeng Lai
In this paper, we study the randomized distributed coordinate descent algorithm with quantized updates.
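The effect of quantized updates can be seen in a toy single-machine version (illustrative, not the paper's algorithm): each exact coordinate update is rounded to a fixed grid before being applied, mimicking a rate-limited update message, and the iterate settles in a quantization-sized neighborhood of the optimum.

```python
import numpy as np

def quantized_coordinate_descent(b, step=0.01, passes=3):
    """Coordinate descent on f(x) = 0.5 * ||x - b||^2 where each exact
    coordinate update is rounded to a multiple of `step` before it is
    applied, mimicking quantized update messages."""
    x = np.zeros_like(b)
    for _ in range(passes):
        for i in range(len(b)):
            delta = b[i] - x[i]                      # exact coordinate minimizer shift
            x[i] += np.round(delta / step) * step    # quantized update
    return x
```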
no code implementations • 15 Sep 2015 • Bingwen Zhang, Weiyu Xu, Jian-Feng Cai, Lifeng Lai
Characterizing the phase transitions of convex optimizations in recovering structured signals or data is of central importance in compressed sensing, machine learning and statistics.
no code implementations • 11 Aug 2015 • Mostafa El Gamal, Lifeng Lai
We consider a distributed parameter estimation problem in which multiple terminals send messages related to their local observations, using limited rates, to a fusion center, which then estimates a parameter related to the observations of all terminals.
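A minimal sketch of the quantize-and-fuse setting (illustrative; the paper studies the rate-constrained problem in generality): each terminal sends its local sample mean quantized to a fixed number of bits, and the fusion center averages the quantized messages.

```python
import numpy as np

def fusion_estimate(local_samples, bits=8, lo=-1.0, hi=1.0):
    """Each terminal quantizes its local sample mean to `bits` bits on
    [lo, hi]; the fusion center averages the quantized means."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    quantized = []
    for samples in local_samples:
        mean = float(np.mean(samples))
        q = lo + np.round((np.clip(mean, lo, hi) - lo) / step) * step
        quantized.append(q)
    return float(np.mean(quantized))
```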