Search Results for author: Yuval Dagan

Found 20 papers, 3 papers with code

From External to Swap Regret 2.0: An Efficient Reduction and Oblivious Adversary for Large Action Spaces

no code implementations • 30 Oct 2023 • Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich

We provide a novel reduction from swap-regret minimization to external-regret minimization, which improves upon the classical reductions of Blum-Mansour [BM07] and Stolz-Lugosi [SL05] in that it does not require finiteness of the space of actions.
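The reduction above builds on external-regret minimization as a black box. As background, here is a minimal sketch of the classical Hedge (multiplicative-weights) algorithm, a standard external-regret minimizer; this is illustrative only and is not the paper's reduction:

```python
import math

def hedge(losses, eta=0.5):
    """Hedge / multiplicative weights: a classical external-regret minimizer.

    losses: list of per-round loss vectors, one entry per action, each in [0, 1].
    Returns the sequence of probability distributions played over actions.
    """
    n = len(losses[0])
    w = [1.0] * n
    plays = []
    for loss in losses:
        total = sum(w)
        plays.append([wi / total for wi in w])
        # exponentially down-weight actions that just incurred loss
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
    return plays
```

For example, if one action consistently suffers zero loss, Hedge's distribution concentrates on it, so cumulative loss approaches that of the best fixed action. Classical swap-regret reductions (Blum-Mansour, Stolz-Lugosi) run one such minimizer per action, which is exactly what requires finiteness of the action space.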

Online Learning and Solving Infinite Games with an ERM Oracle

no code implementations • 4 Jul 2023 • Angelos Assos, Idan Attias, Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson

In this setting, we provide learning algorithms that only rely on best response oracles and converge to approximate-minimax equilibria in two-player zero-sum games and approximate coarse correlated equilibria in multi-player general-sum games, as long as the game has a bounded fat-threshold dimension.

Tasks: Binary Classification

Ambient Diffusion: Learning Clean Distributions from Corrupted Data

1 code implementation • NeurIPS 2023 • Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans

We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples.

Learning and Testing Latent-Tree Ising Models Efficiently

no code implementations • 23 Nov 2022 • Davin Choo, Yuval Dagan, Constantinos Daskalakis, Anthimos Vardis Kandiros

We provide time- and sample-efficient algorithms for learning and testing latent-tree Ising models, i.e., Ising models that may only be observed at their leaf nodes.

EM's Convergence in Gaussian Latent Tree Models

no code implementations • 21 Nov 2022 • Yuval Dagan, Constantinos Daskalakis, Anthimos Vardis Kandiros

Our results for the landscape of the log-likelihood function in general latent tree models provide support for the extensive practical use of maximum-likelihood-based methods in this setting.

Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for Inverse Problems

2 code implementations • 18 Jun 2022 • Giannis Daras, Yuval Dagan, Alexandros G. Dimakis, Constantinos Daskalakis

In practice, to allow for increased expressivity, we propose to do posterior sampling in the latent space of a pre-trained generative model.
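The posterior sampling referenced above is driven by Langevin dynamics in the latent space of a pre-trained generator. As an illustrative sketch only (not the paper's score-guided algorithm), an unadjusted Langevin update on a generic log-density, with `grad_log_p` a hypothetical gradient callback, might look like:

```python
import math
import random

def langevin_step(z, grad_log_p, step=1e-2):
    """One unadjusted Langevin update:
    z <- z + (step/2) * grad log p(z) + sqrt(step) * standard Gaussian noise.
    Iterating this produces approximate samples from p (up to discretization bias).
    """
    return [zi + 0.5 * step * gi + math.sqrt(step) * random.gauss(0.0, 1.0)
            for zi, gi in zip(z, grad_log_p(z))]
```

For instance, with `grad_log_p = lambda z: [-zi for zi in z]` (a standard Gaussian target), the chain's empirical variance converges to roughly 1. The paper's contribution concerns when such chains mix fast for inverse problems, which this sketch does not capture.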

Smoothed Online Learning is as Easy as Statistical Learning

no code implementations • 9 Feb 2022 • Adam Block, Yuval Dagan, Noah Golowich, Alexander Rakhlin

We then prove a lower bound on the oracle complexity of any proper learning algorithm, which matches the oracle-efficient upper bounds up to a polynomial factor, thus demonstrating the existence of a statistical-computational gap in smooth online learning.

Tasks: Learning Theory, Multi-Armed Bandits

Statistical Estimation from Dependent Data

no code implementations • 20 Jul 2021 • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Surbhi Goel, Anthimos Vardis Kandiros

We consider a general statistical estimation problem wherein binary labels across different observations are not independent conditioned on their feature vectors, but dependent, capturing settings where, e.g., these observations are collected on a spatial domain, a temporal domain, or a social network, each of which induces dependencies.

Tasks: regression, text-classification (+1)

Majorizing Measures, Sequential Complexities, and Online Learning

no code implementations • 2 Feb 2021 • Adam Block, Yuval Dagan, Sasha Rakhlin

We introduce the technique of generic chaining and majorizing measures for controlling sequential Rademacher complexity.

Adversarial Laws of Large Numbers and Optimal Regret in Online Classification

no code implementations • 22 Jan 2021 • Noga Alon, Omri Ben-Eliezer, Yuval Dagan, Shay Moran, Moni Naor, Eylon Yogev

Laws of large numbers guarantee that given a large enough sample from some population, the measure of any fixed sub-population is well-estimated by its frequency in the sample.

Tasks: General Classification, Open-Ended Question Answering (+1)

A bounded-noise mechanism for differential privacy

1 code implementation • 7 Dec 2020 • Yuval Dagan, Gil Kur

We present an asymptotically optimal $(\epsilon,\delta)$ differentially private mechanism for answering multiple, adaptively asked, $\Delta$-sensitive queries, settling the conjecture of Steinke and Ullman [2020].
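For context, the classical baseline the bounded-noise mechanism improves on is noise addition with unbounded support. Below is a minimal sketch of the standard Gaussian mechanism for $k$ adaptively asked $\Delta$-sensitive queries (this is the textbook baseline, not the paper's mechanism, and the calibration constant is the common one, not a claim about the paper's bounds):

```python
import math
import random

def gaussian_mechanism(true_answers, delta_sensitivity, eps, delta):
    """Textbook (eps, delta)-DP Gaussian mechanism: answer k queries,
    each of sensitivity delta_sensitivity, by adding i.i.d. N(0, sigma^2) noise.
    The bounded-noise mechanism above replaces this unbounded noise with
    noise of comparable scale but bounded support.
    """
    k = len(true_answers)
    # common calibration: sigma grows like sqrt(k * log(1/delta)) / eps
    sigma = delta_sensitivity * math.sqrt(2 * k * math.log(1.25 / delta)) / eps
    return [a + random.gauss(0.0, sigma) for a in true_answers]
```

The practical appeal of bounded noise is that every released answer is guaranteed to lie within a fixed distance of the truth, which Gaussian or Laplace noise cannot promise.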

Learning Ising models from one or multiple samples

no code implementations • 20 Apr 2020 • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Anthimos Vardis Kandiros

As corollaries of our main theorem, we derive bounds when the model's interaction matrix is a (sparse) linear combination of known matrices, or it belongs to a finite set, or to a high-dimensional manifold.

PAC learning with stable and private predictions

no code implementations • 24 Nov 2019 • Yuval Dagan, Vitaly Feldman

For $\epsilon$-differentially private prediction we give two new algorithms: one using $\tilde O(d/(\alpha^2\epsilon))$ samples and another one using $\tilde O(d^2/(\alpha\epsilon) + d/\alpha^2)$ samples.

Tasks: Binary Classification, PAC learning

Interaction is necessary for distributed learning with privacy or communication constraints

no code implementations • 11 Nov 2019 • Yuval Dagan, Vitaly Feldman

Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model.

Learning from weakly dependent data under Dobrushin's condition

no code implementations • 21 Jun 2019 • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti

Indeed, we show that the standard complexity measures of Gaussian and Rademacher complexities and VC dimension are sufficient measures of complexity for the purposes of bounding the generalization error and learning rates of hypothesis classes in our setting.

Tasks: Generalization Bounds, Learning Theory (+2)

Optimality of Maximum Likelihood for Log-Concave Density Estimation and Bounded Convex Regression

no code implementations • 13 Mar 2019 • Gil Kur, Yuval Dagan, Alexander Rakhlin

In this paper, we study two problems: (1) estimation of a $d$-dimensional log-concave distribution and (2) bounded multivariate convex regression with random design with an underlying log-concave density or a compactly supported distribution with a continuous density.

Tasks: Density Estimation, regression

Space lower bounds for linear prediction in the streaming model

no code implementations • 9 Feb 2019 • Yuval Dagan, Gil Kur, Ohad Shamir

We show that fundamental learning tasks, such as finding an approximate linear separator or linear regression, require memory at least \emph{quadratic} in the dimension, in a natural streaming setting.

Tasks: regression

A Better Resource Allocation Algorithm with Semi-Bandit Feedback

no code implementations • 28 Mar 2018 • Yuval Dagan, Koby Crammer

We study a sequential resource allocation problem between a fixed number of arms.

Detecting Correlations with Little Memory and Communication

no code implementations • 4 Mar 2018 • Yuval Dagan, Ohad Shamir

We study the problem of identifying correlations in multivariate data, under information constraints: Either on the amount of memory that can be used by the algorithm, or the amount of communication when the data is distributed across several machines.

Twenty (simple) questions

no code implementations • 5 Nov 2016 • Yuval Dagan, Yuval Filmus, Ariel Gabizon, Shay Moran

An optimal strategy for the "20 questions" game is given by a Huffman code for $\pi$: Bob's questions reveal the codeword for $x$ bit by bit.
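The Huffman-code strategy stated above can be made concrete: element $x$ costs exactly the length of its codeword in yes/no questions, so the expected number of questions is the expected Huffman codeword length. A minimal sketch that computes these lengths for a given distribution $\pi$:

```python
import heapq
from itertools import count

def huffman_lengths(pi):
    """Codeword lengths of a Huffman code for distribution pi.
    In the unrestricted '20 questions' game, revealing x's codeword
    bit by bit costs huffman_lengths(pi)[x] yes/no questions.
    """
    if len(pi) == 1:
        return [0]
    tie = count()  # tie-breaker so the heap never compares the leaf lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(pi)]
    heapq.heapify(heap)
    lengths = [0] * len(pi)
    while len(heap) > 1:
        p1, _, leaves1 = heapq.heappop(heap)
        p2, _, leaves2 = heapq.heappop(heap)
        for i in leaves1 + leaves2:
            lengths[i] += 1  # merged leaves sit one level deeper in the tree
        heapq.heappush(heap, (p1 + p2, next(tie), leaves1 + leaves2))
    return lengths
```

For the dyadic distribution $\pi = (1/2, 1/4, 1/8, 1/8)$ the lengths are $(1, 2, 3, 3)$ and the expected number of questions is $1.75$, matching the entropy $H(\pi)$ exactly, as it does for any dyadic $\pi$.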
