no code implementations • 17 Jan 2024 • Zhou Lu, Qiuyi Zhang, Xinyi Chen, Fred Zhang, David Woodruff, Elad Hazan
In this paper, we give query and regret optimal bandit algorithms under the strict notion of strongly adaptive regret, which measures the maximum regret over any contiguous interval $I$.
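Concretely, under standard online-learning notation (assumed here, since the excerpt does not fix it), strongly adaptive regret takes the form

$$\mathrm{SA\text{-}Regret}(T) \;=\; \max_{I=[s,t]\subseteq[T]}\left(\sum_{\tau=s}^{t} f_\tau(x_\tau) \;-\; \min_{x\in\mathcal{K}}\sum_{\tau=s}^{t} f_\tau(x)\right),$$

where $f_\tau$ are the adversary's losses, $x_\tau$ the learner's plays, and $\mathcal{K}$ the decision set.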
no code implementations • 30 Nov 2023 • Zhou Lu
This setting assumes a predetermined ground-truth hypothesis and considers non-uniform, hypothesis-wise error bounds.
no code implementations • 25 Sep 2023 • Zhou Lu
Specifically, we present a learning task that is NP-hard for unimodal learning but is solvable in polynomial time by a multimodal algorithm.
no code implementations • NeurIPS 2023 • Zhou Lu
Human perception of the empirical world involves recognizing the diverse appearances, or 'modalities', of underlying objects.
no code implementations • 22 Nov 2022 • Zhou Lu, Nataly Brukhim, Paula Gradu, Elad Hazan
The most common approach is based on the Frank-Wolfe method, that uses linear optimization computation in lieu of projections.
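As a reference point, here is a minimal sketch of the classical Frank-Wolfe update over the probability simplex, where projection is replaced by a linear-optimization oracle (this is the textbook method, not the specific algorithm of the paper):

```python
import numpy as np

def frank_wolfe(grad, x0, num_steps=100):
    """Minimize a smooth convex f over the probability simplex
    using only linear optimization (no projections).

    grad: function returning the gradient of f at x.
    x0:   feasible starting point on the simplex.
    """
    x = x0.copy()
    for t in range(1, num_steps + 1):
        g = grad(x)
        # Linear optimization oracle over the simplex:
        # argmin_{v in simplex} <g, v> is a vertex (one-hot at the
        # coordinate where the gradient is smallest).
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0
        # Standard step size 2/(t+2) gives O(1/t) convergence.
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * v
    return x

# Example: Euclidean projection of y onto the simplex,
# i.e. minimize f(x) = 0.5 * ||x - y||^2 with grad f(x) = x - y.
y = np.array([0.5, -0.2, 1.3])
x_star = frank_wolfe(lambda x: x - y, np.ones(3) / 3)
```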
no code implementations • 1 Jul 2022 • Zhou Lu, Elad Hazan
In online convex optimization, the player aims to minimize regret, or the difference between her loss and that of the best fixed decision in hindsight over the entire repeated game.
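In symbols, with losses $f_t$ and decision set $\mathcal{K}$ (standard notation, assumed here):

$$\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x\in\mathcal{K}} \sum_{t=1}^{T} f_t(x).$$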
no code implementations • 1 Jun 2022 • Xinyi Chen, Elad Hazan, Tongyang Li, Zhou Lu, Xinzhao Wang, Rui Yang
In the fundamental problem of shadow tomography, the goal is to efficiently learn an unknown $d$-dimensional quantum state using projective measurements.
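In the usual formulation of shadow tomography (introduced by Aaronson; the notation here is assumed, not quoted from the paper), one receives copies of an unknown state $\rho$ together with two-outcome measurements $E_1,\dots,E_m$, and must output estimates satisfying

$$|\hat{b}_i - \mathrm{Tr}(E_i\rho)| \le \epsilon \quad \text{for all } i \in [m],$$

using as few copies of $\rho$ as possible.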
no code implementations • 30 May 2022 • Udaya Ghai, Zhou Lu, Elad Hazan
We prove an $O(T^{\frac{2}{3}})$ regret bound for non-convex online gradient descent in this setting, answering this open problem.
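For reference, a minimal online gradient descent loop; this is vanilla OGD (the paper analyzes a non-convex setting, and its actual algorithm may differ from this template):

```python
import numpy as np

def online_gradient_descent(grads, x0, radius=1.0):
    """Vanilla OGD over the Euclidean ball of given radius.

    grads: list of gradient callables g_t(x), revealed one per round.
    Returns the sequence of iterates played.
    """
    x = x0.copy()
    iterates = []
    for t, g in enumerate(grads, start=1):
        iterates.append(x.copy())
        eta = radius / np.sqrt(t)        # standard 1/sqrt(t) step size
        x = x - eta * g(x)
        norm = np.linalg.norm(x)
        if norm > radius:                # project back onto the ball
            x = x * (radius / norm)
    return iterates
```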
no code implementations • 2 Mar 2022 • Zhou Lu, Wenhan Xia, Sanjeev Arora, Elad Hazan
Adaptive gradient methods are the method of choice for optimization in machine learning and are used to train the largest deep models.
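For context, a minimal diagonal-AdaGrad update, the prototypical adaptive gradient method (a generic sketch, not the algorithm proposed in this paper):

```python
import numpy as np

def adagrad_step(x, g, accum, lr=0.1, eps=1e-8):
    """One diagonal AdaGrad step.

    x:     current parameters.
    g:     gradient at x.
    accum: running sum of squared gradients from previous steps.
    Returns the updated parameters and the updated accumulator.
    """
    accum = accum + g ** 2
    # Per-coordinate step size shrinks where gradients have been large.
    x = x - lr * g / (np.sqrt(accum) + eps)
    return x, accum
```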
no code implementations • 29 Sep 2021 • Daogao Liu, Zhou Lu
We study lower bounds for differentially private ERM with general convex loss functions.
no code implementations • 28 Jun 2021 • Daogao Liu, Zhou Lu
The best known lower bounds, however, are worse than the upper bounds by a factor of $\log T$.
no code implementations • 28 May 2021 • Daogao Liu, Zhou Lu
We consider lower bounds for differentially private empirical risk minimization (DP-ERM) for convex functions, in both constrained and unconstrained settings, with respect to general $\ell_p$ norms beyond the $\ell_2$ norm considered in most previous work.
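The object of study, in standard notation (assumed here): a randomized algorithm $\mathcal{A}$ that is $(\epsilon,\delta)$-differentially private and approximately minimizes the empirical risk

$$\hat{L}(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; z_i), \qquad \theta \in \mathcal{C} \subseteq \mathbb{R}^d,$$

with lower bounds stated for the excess risk $\mathbb{E}[\hat{L}(\mathcal{A}(z_{1:n}))] - \min_{\theta\in\mathcal{C}} \hat{L}(\theta)$, the geometry being measured in the $\ell_p$ norm.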
2 code implementations • 10 Feb 2021 • Bohang Zhang, Tianle Cai, Zhou Lu, Di He, Liwei Wang
This directly provides a rigorous certified-robustness guarantee based on the margin of the prediction outputs.
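The certification logic behind margin-based guarantees is easy to sketch: if every logit of the network is 1-Lipschitz with respect to the $\ell_\infty$ norm (the property the paper's architecture is designed to enforce), then a margin above $2\epsilon$ certifies robustness in the $\ell_\infty$ ball of radius $\epsilon$. A hypothetical helper illustrating this, not code from the paper's implementations:

```python
import numpy as np

def certified_radius(logits, label):
    """Certified l_inf radius for a network whose every logit is
    1-Lipschitz w.r.t. the l_inf norm on the input.

    A perturbation of size eps moves each logit by at most eps,
    so the margin can shrink by at most 2 * eps.
    """
    margin = logits[label] - np.max(np.delete(logits, label))
    return max(margin, 0.0) / 2.0

# Example: a margin of 0.6 certifies robustness within radius 0.3.
print(certified_radius(np.array([2.0, 1.4, 0.3]), label=0))
```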
no code implementations • 27 Jan 2021 • Zhou Lu
In this note, we prove a sharp lower bound on the number of nestings of absolute-value functions that generalized hinging hyperplanes (GHH) require to represent arbitrary continuous piecewise-linear (CPWL) functions.
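The link between maxima and nested absolute values is the identity

$$\max(a,b) \;=\; \frac{a+b+|a-b|}{2},$$

so each extra level of maxima in a CPWL representation costs one more nesting of $|\cdot|$; the note determines how many such nestings are unavoidable (this framing is an assumption based on the standard GHH construction, not quoted from the paper).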
no code implementations • 24 Dec 2020 • Qinghua Liu, Zhou Lu
In this paper, we fill the gap by proving a tight generalization lower bound of order $\Omega(\gamma+\frac{L}{\sqrt{n}})$, which matches the best known upper bound up to logarithmic factors.
no code implementations • 7 Dec 2020 • Zhou Lu
We prove a John's theorem for simplices in $\mathbb{R}^d$ with positive dilation factor $d+2$, which improves the previously known $d^2$ upper bound.
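For comparison, the classical John's theorem gives, for every convex body $K \subset \mathbb{R}^d$, an ellipsoid $E$ with center $c$ such that

$$E \;\subseteq\; K \;\subseteq\; c + d\,(E - c).$$

The note establishes the analogous containment with a simplex in place of the ellipsoid and dilation factor $d+2$ (a reading of the abstract's statement, assumed here).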
no code implementations • ICML 2020 • Naman Agarwal, Nataly Brukhim, Elad Hazan, Zhou Lu
We study the question of how to aggregate controllers for dynamical systems in order to improve their performance.
1 code implementation • NeurIPS 2017 • Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, Liwei Wang
That is, there are classes of deep networks that cannot be realized by any shallow network unless its size exceeds an exponential bound.