Search Results for author: Zhou Lu

Found 10 papers, 2 papers with code

Adaptive Gradient Methods with Local Guarantees

no code implementations 2 Mar 2022 Zhou Lu, Wenhan Xia, Sanjeev Arora, Elad Hazan

Adaptive gradient methods are the method of choice for optimization in machine learning and are used to train the largest deep models.

online learning
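
As a quick illustration of the family of methods this entry refers to, here is a minimal sketch of a diagonal AdaGrad-style update; it is generic background, not the algorithm proposed in the paper, and the function names and toy objective are illustrative only.

```python
# Minimal sketch of a diagonal AdaGrad-style update -- the generic form of an
# adaptive gradient method, NOT the algorithm proposed in the paper above.
import numpy as np

def adaptive_step(x, grad, accum, lr=0.1, eps=1e-8):
    """Per-coordinate step: coordinates with large accumulated gradients take smaller steps."""
    accum = accum + grad ** 2                   # running sum of squared gradients
    x = x - lr * grad / (np.sqrt(accum) + eps)  # coordinate-wise rescaled update
    return x, accum

# Toy usage on f(x) = ||x||^2 / 2, whose gradient at x is x itself.
x, accum = np.ones(3), np.zeros(3)
for _ in range(200):
    x, accum = adaptive_step(x, x, accum)
print(x)  # entries shrink toward the minimizer 0
```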

Tight lower bounds for Differentially Private ERM

no code implementations 29 Sep 2021 Daogao Liu, Zhou Lu

We consider the lower bounds of differentially private ERM for general convex functions.
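
For readers who want the privacy constraint spelled out, here is the standard definition of differential privacy, included as background rather than quoted from the paper: a randomized algorithm $\mathcal{M}$ is $(\varepsilon,\delta)$-differentially private if for all datasets $S, S'$ differing in a single record and every measurable set of outputs $O$,

$$\Pr[\mathcal{M}(S) \in O] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(S') \in O] + \delta,$$

and DP-ERM asks how small the excess empirical risk can be made subject to this constraint.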

The Convergence Rate of SGD's Final Iterate: Analysis on Dimension Dependence

no code implementations 28 Jun 2021 Daogao Liu, Zhou Lu

The best known lower bounds, however, are worse than the upper bounds by a factor of $\log T$.
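
For context on the gap mentioned here (standard background on the convex, Lipschitz setting, not a claim taken from the paper itself): with step sizes $\eta_t \propto 1/\sqrt{t}$, the final iterate of SGD is known to satisfy

$$\mathbb{E}\,[f(x_T)] - \min_x f(x) \;=\; O\!\left(\frac{\log T}{\sqrt{T}}\right),$$

while the generic lower bound is $\Omega(1/\sqrt{T})$; the $\log T$ discrepancy between the two is the gap in question, with particular attention in this paper to how the answer depends on the dimension.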

Lower Bounds for Differentially Private ERM: Unconstrained and Non-Euclidean

no code implementations 28 May 2021 Daogao Liu, Zhou Lu

We consider the lower bounds of differentially private empirical risk minimization (DP-ERM) for convex functions in constrained and unconstrained cases with respect to the general $\ell_p$ norm, beyond the $\ell_2$ norm considered by most previous works.
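
As a point of reference for the $\ell_2$ baseline being generalized (my recollection of the standard result, stated only up to logarithmic factors and not taken from this abstract): for $(\varepsilon,\delta)$-DP algorithms and Lipschitz convex losses over the unit $\ell_2$ ball, the excess empirical risk is known to be of order

$$\tilde{\Theta}\!\left(\frac{\sqrt{d}}{n\varepsilon}\right),$$

and the question here is how the analogous rate behaves for general $\ell_p$ geometries and in the unconstrained case.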

Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons

2 code implementations 10 Feb 2021 Bohang Zhang, Tianle Cai, Zhou Lu, Di He, LiWei Wang

This directly provides a rigorous guarantee of the certified robustness based on the margin of prediction outputs.
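
Below is a minimal sketch of the certification idea behind this entry, assuming layers built from $\ell_\infty$-distance units (each unit computes $\|x-w\|_\infty + b$ and is therefore 1-Lipschitz in $\ell_\infty$); the shapes and function names are illustrative, not the paper's implementation.

```python
# Sketch of margin-based certification with l_inf-distance units.
# Illustrative only: parameter shapes and names are not the paper's code.
import numpy as np

def linf_dist_layer(x, W, b):
    """Each unit outputs ||x - w_i||_inf + b_i; such a map is 1-Lipschitz in l_inf."""
    return np.max(np.abs(x[None, :] - W), axis=1) + b

def certified_radius(logits):
    """For a 1-Lipschitz (l_inf) network, half the top-2 margin is a certified radius."""
    top2 = np.sort(logits)[-2:]
    return (top2[1] - top2[0]) / 2.0

rng = np.random.default_rng(0)
x = rng.normal(size=8)
h = linf_dist_layer(x, rng.normal(size=(16, 8)), np.zeros(16))
logits = linf_dist_layer(h, rng.normal(size=(3, 16)), np.zeros(3))
print(certified_radius(logits))
```

Because every layer is 1-Lipschitz, an $\ell_\infty$ perturbation of radius $r$ moves each logit by at most $r$, so a prediction margin of $m$ certifies robustness for all $r < m/2$.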

A Note on the Representation Power of GHHs

no code implementations 27 Jan 2021 Zhou Lu

In this note we prove a sharp lower bound on the number of nestings of absolute-value functions that generalized hinging hyperplanes (GHH) require to represent arbitrary continuous piecewise-linear (CPWL) functions.
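
As a one-line reminder of why absolute-value nestings control expressive power here (a standard identity, not quoted from the note): the maximum of two quantities can be written with a single absolute value,

$$\max(a,b) \;=\; \frac{a+b+|a-b|}{2},$$

so representing a CPWL function via nested maxima translates directly into nested absolute values, and the number of nestings becomes the natural complexity measure to lower-bound.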

A Tight Lower Bound for Uniformly Stable Algorithms

no code implementations 24 Dec 2020 Qinghua Liu, Zhou Lu

In this paper we fill the gap by proving a tight generalization lower bound of order $\Omega(\gamma+\frac{L}{\sqrt{n}})$, which matches the best known upper bound up to logarithmic factors.

Generalization Bounds, Learning Theory
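
For reference, the stability notion behind the symbols in this bound (the standard definition; $L$ denotes the Lipschitz constant of the loss and $n$ the sample size): an algorithm $A$ is $\gamma$-uniformly stable if for all datasets $S, S'$ of size $n$ differing in a single example and every point $z$,

$$\big|\,\mathbb{E}[\ell(A(S), z)] - \mathbb{E}[\ell(A(S'), z)]\,\big| \;\le\; \gamma .$$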

A Note on John Simplex with Positive Dilation

no code implementations 7 Dec 2020 Zhou Lu

We prove a John's theorem for simplices in $\mathbb{R}^d$ with positive dilation factor $d+2$, which improves the previously known $d^2$ upper bound.
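
For orientation (my paraphrase of the classical statement, not text from the note): John's theorem says every convex body $K \subset \mathbb{R}^d$ admits an ellipsoid $E$ with

$$E \;\subseteq\; K \;\subseteq\; d\,E$$

(the dilation taken about the center of $E$, with $\sqrt{d}$ in place of $d$ for symmetric bodies); the simplex analogue asks for a simplex $S$ with $S \subseteq K \subseteq c\,S$ under a positive dilation about a suitable center, and the note improves the admissible factor $c$ from the previously known $d^2$ to $d+2$.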

Boosting for Control of Dynamical Systems

no code implementations ICML 2020 Naman Agarwal, Nataly Brukhim, Elad Hazan, Zhou Lu

We study the question of how to aggregate controllers for dynamical systems in order to improve their performance.

The Expressive Power of Neural Networks: A View from the Width

1 code implementation NeurIPS 2017 Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, Li-Wei Wang

That is, there are classes of deep networks which cannot be realized by any shallow network whose size is no more than an exponential bound.
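
For context on the other direction studied in this paper (stated from memory as background; treat the exact constant as approximate rather than authoritative): ReLU networks of width at most $n+4$ are universal approximators in the $L^1$ sense, i.e. for any Lebesgue-integrable $f:\mathbb{R}^n \to \mathbb{R}$ and any $\varepsilon > 0$ there is a width-$(n+4)$ ReLU network computing some $F$ with

$$\int_{\mathbb{R}^n} |f(x) - F(x)|\,dx \;<\; \varepsilon,$$

which is the counterpart to the separation quoted above.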
