Search Results for author: Adam Block

Found 14 papers, 0 papers with code

On the Performance of Empirical Risk Minimization with Smoothed Data

no code implementations22 Feb 2024 Adam Block, Alexander Rakhlin, Abhishek Shetty

In order to circumvent statistical and computational hardness results in sequential decision-making, recent work has considered smoothed online learning, where the distribution of data at each time is assumed to have bounded likelihood ratio with respect to a base measure when conditioned on the history.
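
For context, the smoothness condition mentioned above is standardly formalized as follows; this is the usual definition from the smoothed online learning literature, stated here for convenience rather than quoted from the paper.

```latex
% A conditional distribution p_t is sigma-smooth with respect to a base
% measure mu if its likelihood ratio is uniformly bounded:
\frac{\mathrm{d}p_t}{\mathrm{d}\mu}(x) \;\le\; \frac{1}{\sigma}
\qquad \text{for all } x \text{ and some fixed } \sigma \in (0, 1].
```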

Decision Making

Oracle-Efficient Differentially Private Learning with Public Data

no code implementations13 Feb 2024 Adam Block, Mark Bun, Rathin Desai, Abhishek Shetty, Steven Wu

Due to statistical lower bounds on the learnability of many function classes under privacy constraints, there has been recent interest in leveraging public data to improve the performance of private learning algorithms.

Binary Classification Computational Efficiency

Efficient Model-Free Exploration in Low-Rank MDPs

no code implementations NeurIPS 2023 Zakaria Mhammedi, Adam Block, Dylan J. Foster, Alexander Rakhlin

A major challenge in reinforcement learning is to develop practical, sample-efficient algorithms for exploration in high-dimensional domains where generalization and function approximation are required.

Representation Learning

Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making

no code implementations10 Feb 2023 Adam Block, Alexander Rakhlin, Max Simchowitz

Smoothed online learning has emerged as a popular framework to mitigate the substantial loss in statistical and computational complexity that arises when one moves from classical to adversarial learning.

Decision Making Econometrics

The Sample Complexity of Approximate Rejection Sampling with Applications to Smoothed Online Learning

no code implementations9 Feb 2023 Adam Block, Yury Polyanskiy

Suppose we are given access to $n$ independent samples from a distribution $\mu$ and wish to output one of them, with the goal of making the output's distribution as close as possible to a target distribution $\nu$.
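
As a point of reference, classical rejection sampling addresses exactly this task when the likelihood ratio between $\nu$ and $\mu$ is known and bounded; the sketch below is that textbook procedure, not the paper's algorithm, and the `ratio` function and bound `M` are illustrative assumptions.

```python
import random

def rejection_sample(samples, ratio, M):
    """Textbook rejection sampling sketch (not the paper's method).

    samples: a list of n i.i.d. draws from the proposal distribution mu
    ratio:   function x -> (d nu / d mu)(x), the likelihood ratio
    M:       a uniform upper bound on ratio, so ratio(x) / M <= 1
    """
    for x in samples:
        # Accept x with probability ratio(x) / M; accepted outputs are
        # distributed exactly according to nu.
        if random.random() <= ratio(x) / M:
            return x
    # All n candidates rejected: this failure event, with probability
    # (1 - 1/M)^n, is what a sample-complexity analysis must control.
    return None
```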

Smoothed Online Learning for Prediction in Piecewise Affine Systems

no code implementations NeurIPS 2023 Adam Block, Max Simchowitz, Russ Tedrake

The problem of piecewise affine (PWA) regression and planning is of foundational importance to the study of online learning, control, and robotics, where it provides a theoretically and empirically tractable setting to study systems undergoing sharp changes in the dynamics.

Efficient and Near-Optimal Smoothed Online Learning for Generalized Linear Functions

no code implementations25 May 2022 Adam Block, Max Simchowitz

Due to the drastic gap in complexity between sequential and batch statistical learning, recent work has studied a smoothed sequential learning setting, where Nature is constrained to select contexts with density bounded by $1/\sigma$ with respect to a known measure $\mu$.

Counterfactual Learning To Rank for Utility-Maximizing Query Autocompletion

no code implementations22 Apr 2022 Adam Block, Rahul Kidambi, Daniel N. Hill, Thorsten Joachims, Inderjit S. Dhillon

A shortcoming of this approach is that users often do not know which query will provide the best retrieval performance on the current information retrieval system, meaning that any query autocompletion method trained to mimic user behavior can lead to suboptimal query suggestions.

Counterfactual Information Retrieval

Smoothed Online Learning is as Easy as Statistical Learning

no code implementations9 Feb 2022 Adam Block, Yuval Dagan, Noah Golowich, Alexander Rakhlin

We then prove a lower bound on the oracle complexity of any proper learning algorithm, which matches the oracle-efficient upper bounds up to a polynomial factor, thus demonstrating the existence of a statistical-computational gap in smoothed online learning.

Learning Theory Multi-Armed Bandits

Intrinsic Dimension Estimation Using Wasserstein Distances

no code implementations8 Jun 2021 Adam Block, Zeyu Jia, Yury Polyanskiy, Alexander Rakhlin

It has long been thought that high-dimensional data encountered in many practical machine learning tasks have low-dimensional structure, i.e., the manifold hypothesis holds.
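
The role of Wasserstein distances in the title can be previewed with a standard fact about empirical measures (stated informally here, not as the paper's exact estimator): for a measure $\mu$ supported on a $d$-dimensional set with $d > 2$,

```latex
% The empirical measure \hat{\mu}_n of n samples converges in
% 1-Wasserstein distance at a dimension-dependent rate:
\mathbb{E}\, W_1(\hat{\mu}_n, \mu) \;\asymp\; n^{-1/d},
```

so that, up to constants, the decay of $W_1(\hat{\mu}_n, \mu)$ as $n$ grows reveals the intrinsic dimension $d$.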

Machine Learning

Majorizing Measures, Sequential Complexities, and Online Learning

no code implementations2 Feb 2021 Adam Block, Yuval Dagan, Sasha Rakhlin

We introduce the technique of generic chaining and majorizing measures for controlling sequential Rademacher complexity.
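
For background, sequential Rademacher complexity (the quantity the chaining machinery is used to control) is the following standard object, where the supremum ranges over $\mathcal{X}$-valued binary trees $\mathbf{x}$ of depth $n$ and $\epsilon_1, \dots, \epsilon_n$ are i.i.d. Rademacher signs:

```latex
\mathfrak{R}_n(\mathcal{F})
  \;=\; \sup_{\mathbf{x}} \,
  \mathbb{E}_{\epsilon}\!\left[
    \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{t=1}^{n}
      \epsilon_t \, f\big(\mathbf{x}_t(\epsilon_{1:t-1})\big)
  \right].
```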

Generative Modeling with Denoising Auto-Encoders and Langevin Sampling

no code implementations31 Jan 2020 Adam Block, Youssef Mroueh, Alexander Rakhlin

We show that both DAE and DSM provide estimates of the score of the Gaussian smoothed population density, allowing us to apply the machinery of Empirical Processes.
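To make the sampling half of the title concrete, here is a minimal sketch of unadjusted Langevin dynamics driven by an estimated score; the `score` callable standing in for the DAE/DSM estimate (e.g., `(denoiser(x) - x) / sigma**2` for a denoiser trained at noise level `sigma`) is an assumption for illustration, not the paper's code.

```python
import numpy as np

def langevin_sample(score, x0, step=1e-3, n_steps=1000, rng=None):
    """Unadjusted Langevin dynamics using an estimated score function.

    score:   function x -> estimate of grad log p_sigma(x), e.g. derived
             from a denoising auto-encoder as (denoiser(x) - x) / sigma**2
    x0:      initial point (numpy array)
    step:    step size eta
    n_steps: number of iterations
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # Langevin update: x <- x + (eta / 2) * score(x) + sqrt(eta) * noise
        x = x + 0.5 * step * score(x) + np.sqrt(step) * noise
    return x
```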

Denoising
