Search Results for author: Zhou Lu

Found 18 papers, 2 papers with code

Adaptive Regret for Bandits Made Possible: Two Queries Suffice

no code implementations17 Jan 2024 Zhou Lu, Qiuyi Zhang, Xinyi Chen, Fred Zhang, David Woodruff, Elad Hazan

In this paper, we give query and regret optimal bandit algorithms under the strict notion of strongly adaptive regret, which measures the maximum regret over any contiguous interval $I$.

Hyperparameter Optimization · Multi-Armed Bandits
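As a sketch of the notion above, strongly adaptive regret over a horizon $T$ can be written as the worst-case regret over all contiguous intervals of a given length $k$ (the notation here, losses $\ell_t$ and decision set $\mathcal{K}$, is illustrative rather than taken from the paper):

```latex
\mathrm{SA\text{-}Regret}(T, k) \;=\;
\max_{I = [s,\, s+k-1] \subseteq [T]}
\left( \sum_{t \in I} \ell_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t \in I} \ell_t(x) \right)
```

Ordinary regret corresponds to the single interval $I = [T]$, so controlling the maximum over all intervals is strictly more demanding.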

Non-uniform Online Learning: Towards Understanding Induction

no code implementations30 Nov 2023 Zhou Lu

This setting assumes a predetermined ground-truth hypothesis and considers non-uniform, hypothesis-wise error bounds.

Decision Making · Learning Theory +1

On the Computational Benefit of Multimodal Learning

no code implementations25 Sep 2023 Zhou Lu

Specifically, we present a learning task that is NP-hard for unimodal learning but is solvable in polynomial time by a multimodal algorithm.

A Theory of Multimodal Learning

no code implementations NeurIPS 2023 Zhou Lu

Human perception of the empirical world involves recognizing the diverse appearances, or 'modalities', of underlying objects.

Philosophy

Projection-free Adaptive Regret with Membership Oracles

no code implementations22 Nov 2022 Zhou Lu, Nataly Brukhim, Paula Gradu, Elad Hazan

The most common approach is based on the Frank-Wolfe method, which uses linear optimization in lieu of projections.
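To illustrate the oracle model that line refers to, here is a minimal offline Frank-Wolfe sketch over the probability simplex: each step calls a linear optimization oracle instead of a projection. This is a toy illustration of the general technique, not the paper's algorithm; all function names and the example objective are ours.

```python
import numpy as np

def linear_oracle_simplex(grad):
    # Linear optimization over the probability simplex:
    # argmin_{v in simplex} <grad, v> is the vertex (standard basis
    # vector) at the smallest gradient coordinate.
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def frank_wolfe(grad_f, x0, steps=2000):
    # Projection-free: each iteration calls the linear oracle and moves
    # toward its output, so the iterate stays feasible by convexity.
    x = x0.copy()
    for t in range(1, steps + 1):
        v = linear_oracle_simplex(grad_f(x))
        gamma = 2.0 / (t + 2.0)          # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v  # convex combination, no projection
    return x

# Toy objective: minimize ||x - b||^2 over the simplex. Since b already
# lies in the simplex, the minimizer is b itself.
b = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe(lambda x: 2 * (x - b), np.array([1.0, 0.0, 0.0]))
```

The appeal is that for many feasible sets (e.g. spectrahedra, flow polytopes) the linear oracle is far cheaper than a Euclidean projection.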

On the Computational Efficiency of Adaptive and Dynamic Regret Minimization

no code implementations1 Jul 2022 Zhou Lu, Elad Hazan

In online convex optimization, the player aims to minimize regret, or the difference between her loss and that of the best fixed decision in hindsight over the entire repeated game.

Computational Efficiency
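The regret described above has a standard formal definition; with loss functions $f_t$ and decision set $\mathcal{K}$ (symbols illustrative, chosen here for exposition), it reads:

```latex
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x)
```

The benchmark is the best *fixed* decision in hindsight; adaptive and dynamic regret strengthen it to interval-wise and time-varying comparators, respectively.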

Adaptive Online Learning of Quantum States

no code implementations1 Jun 2022 Xinyi Chen, Elad Hazan, Tongyang Li, Zhou Lu, Xinzhao Wang, Rui Yang

In the fundamental problem of shadow tomography, the goal is to efficiently learn an unknown $d$-dimensional quantum state using projective measurements.

Non-convex online learning via algorithmic equivalence

no code implementations30 May 2022 Udaya Ghai, Zhou Lu, Elad Hazan

We prove an $O(T^{\frac{2}{3}})$ regret bound for non-convex online gradient descent in this setting, answering this open problem.

Adaptive Gradient Methods with Local Guarantees

no code implementations2 Mar 2022 Zhou Lu, Wenhan Xia, Sanjeev Arora, Elad Hazan

Adaptive gradient methods are the method of choice for optimization in machine learning and are used to train the largest deep models.

Benchmarking

Tight lower bounds for Differentially Private ERM

no code implementations29 Sep 2021 Daogao Liu, Zhou Lu

We consider the lower bounds of differentially private ERM for general convex functions.

The Convergence Rate of SGD's Final Iterate: Analysis on Dimension Dependence

no code implementations28 Jun 2021 Daogao Liu, Zhou Lu

The best known lower bounds, however, are worse than the upper bounds by a factor of $\log T$.

Open-Ended Question Answering

Lower Bounds for Differentially Private ERM: Unconstrained and Non-Euclidean

no code implementations28 May 2021 Daogao Liu, Zhou Lu

We consider the lower bounds of differentially private empirical risk minimization (DP-ERM) for convex functions in constrained and unconstrained cases with respect to the general $\ell_p$ norm, beyond the $\ell_2$ norm considered by most previous work.

Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons

2 code implementations10 Feb 2021 Bohang Zhang, Tianle Cai, Zhou Lu, Di He, LiWei Wang

This directly provides a rigorous certified-robustness guarantee based on the margin of the prediction outputs.

A Note on the Representation Power of GHHs

no code implementations27 Jan 2021 Zhou Lu

In this note we prove a sharp lower bound on the number of nestings of absolute-value functions needed by generalized hinging hyperplanes (GHH) to represent arbitrary CPWL functions.

LEMMA

A Tight Lower Bound for Uniformly Stable Algorithms

no code implementations24 Dec 2020 Qinghua Liu, Zhou Lu

In this paper we fill the gap by proving a tight generalization lower bound of order $\Omega(\gamma+\frac{L}{\sqrt{n}})$, which matches the best known upper bound up to logarithmic factors.

Generalization Bounds · Learning Theory

A Note on John Simplex with Positive Dilation

no code implementations7 Dec 2020 Zhou Lu

We prove a John's theorem for simplices in $R^d$ with positive dilation factor $d+2$, improving the previously known $d^2$ upper bound.

Boosting for Control of Dynamical Systems

no code implementations ICML 2020 Naman Agarwal, Nataly Brukhim, Elad Hazan, Zhou Lu

We study the question of how to aggregate controllers for dynamical systems in order to improve their performance.

The Expressive Power of Neural Networks: A View from the Width

1 code implementation NeurIPS 2017 Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, Li-Wei Wang

That is, there are classes of deep networks which cannot be realized by any shallow network whose size is no more than an exponential bound.
