1 code implementation • 26 May 2024 • Itai Shufaro, Nadav Merlis, Nir Weinberger, Shie Mannor
We study the trade-off between the information an agent accumulates and the regret it suffers.
no code implementations • 11 Mar 2024 • Neria Uzan, Nir Weinberger
We propose a game-based formulation for learning dimensionality-reducing representations of feature vectors when only prior knowledge of future prediction tasks is available.
no code implementations • 20 Feb 2024 • Omer Cohen, Ron Meir, Nir Weinberger
In the single-source case, we propose an elimination-based learning method whose risk matches that of a strong-oracle learner.
no code implementations • 3 Feb 2024 • Dror Freirich, Nir Weinberger, Ron Meir
We provide a structural characterization of the distortion-perception (DP) tradeoff, where the DP function is piecewise linear in the perception index.
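Schematically, piecewise linearity means that between consecutive breakpoints of the perception index the DP function linearly interpolates its endpoint values (an illustrative restatement with generic breakpoints $P_k$, not the paper's exact parametrization):
$$ D(P) \;=\; D(P_k) \;+\; \frac{D(P_{k+1}) - D(P_k)}{P_{k+1} - P_k}\,\bigl(P - P_k\bigr), \qquad P \in [P_k, P_{k+1}], $$
so the entire tradeoff curve is determined by its values at finitely many breakpoints.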
no code implementations • 23 Jan 2024 • Daniel Goldfarb, Itay Evron, Nir Weinberger, Daniel Soudry, Paul Hand
Previous works have analyzed separately how forgetting is affected by either task similarity or overparameterization.
no code implementations • 18 Jan 2024 • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz, Nir Weinberger
Based on this capacity estimator, a gap-elimination algorithm termed BestChanID is proposed, which is oblivious to the capacity-achieving input distribution and is guaranteed to output the discrete memoryless channel (DMC) with the largest capacity, with a desired confidence level.
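A minimal sketch of the gap-elimination idea (a generic successive-elimination loop with plug-in Blahut-Arimoto capacity estimates; the sampling scheme, confidence radius, and all names here are illustrative assumptions, not the paper's BestChanID specification):

import numpy as np

def dmc_capacity(W, iters=300):
    # Blahut-Arimoto for the capacity (in nats) of a DMC with transition
    # matrix W[x, y] = P(y | x); rows of W must sum to 1.
    W = W + 1e-12                                # avoid log(0)
    W = W / W.sum(axis=1, keepdims=True)
    p = np.full(W.shape[0], 1.0 / W.shape[0])    # input distribution
    for _ in range(iters):
        q = p @ W                                # induced output distribution
        d = (W * np.log(W / q)).sum(axis=1)      # KL(W(.|x) || q) per input
        p = p * np.exp(d)
        p = p / p.sum()
    q = p @ W
    return float((p * (W * np.log(W / q)).sum(axis=1)).sum())

def best_channel(channels, rounds=40, batch=400, delta=0.05, seed=0):
    # Generic gap elimination: repeatedly sample all surviving channels with
    # uniform inputs, estimate each capacity from the (Laplace-smoothed)
    # empirical transition matrix, and eliminate every channel whose upper
    # confidence bound falls below the best lower confidence bound. The
    # confidence radius is a placeholder, not the paper's concentration bound.
    rng = np.random.default_rng(seed)
    K = len(channels)
    alive = list(range(K))
    counts = [np.ones_like(W) for W in channels]   # add-one smoothing
    n = np.zeros(K)
    for t in range(1, rounds + 1):
        for k in alive:
            W = channels[k]
            for xi in rng.integers(W.shape[0], size=batch):
                yi = rng.choice(W.shape[1], p=W[xi])
                counts[k][xi, yi] += 1
            n[k] += batch
        caps = {k: dmc_capacity(counts[k] / counts[k].sum(axis=1, keepdims=True))
                for k in alive}
        rad = {k: np.sqrt(np.log(2 * K * t / delta) / n[k]) for k in alive}
        best_lcb = max(caps[k] - rad[k] for k in alive)
        alive = [k for k in alive if caps[k] + rad[k] >= best_lcb]
        if len(alive) == 1:
            break
    return max(alive, key=lambda k: caps[k])

# Example: two binary symmetric channels; the less noisy one should win.
W1 = np.array([[0.9, 0.1], [0.1, 0.9]])
W2 = np.array([[0.7, 0.3], [0.3, 0.7]])
print(best_channel([W1, W2]))   # expected output: 0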
no code implementations • NeurIPS 2023 • Chen Zeno, Greg Ongie, Yaniv Blumenfeld, Nir Weinberger, Daniel Soudry
Neural network (NN) denoisers are an essential building block in many common tasks, ranging from image reconstruction to image generation.
no code implementations • 6 Sep 2022 • Nir Weinberger, Michal Yemini
Additionally, when the exact alphabet size is unknown and the player only knows a loose upper bound on it, a UCB-based algorithm is proposed, in which the player aims to reduce the regret caused by the unknown alphabet size in the finite-time regime.
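For reference, the template such a UCB-based algorithm builds on is sketched below (a generic UCB-1 skeleton; how the alphabet-size upper bound enters the exploration bonus is the paper's contribution and is not reproduced here, so the reward samplers and bonus are illustrative assumptions):

import numpy as np

def ucb(arms, horizon, seed=0):
    # Generic UCB-1 skeleton: pull each arm once, then repeatedly pull the arm
    # maximizing empirical mean plus an exploration bonus. In the paper's
    # setting the bonus would also account for the alphabet-size upper bound;
    # here `arms` is just a list of bounded reward samplers.
    rng = np.random.default_rng(seed)
    K = len(arms)
    n = np.zeros(K)                 # pull counts
    s = np.zeros(K)                 # cumulative rewards
    for t in range(1, horizon + 1):
        if t <= K:
            a = t - 1               # initialization: one pull per arm
        else:
            a = int(np.argmax(s / n + np.sqrt(2 * np.log(t) / n)))
        s[a] += arms[a](rng)
        n[a] += 1
    return s / np.maximum(n, 1), n

# Example with Bernoulli arms; the third arm should be pulled the most.
arms = [lambda rng, p=p: float(rng.random() < p) for p in (0.3, 0.5, 0.7)]
means, pulls = ucb(arms, horizon=5000)
print(pulls)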
no code implementations • 6 Jun 2022 • Yihan Zhang, Nir Weinberger
In this model, an estimator observes $n$ samples of a $d$-dimensional parameter vector $\theta_{*}\in\mathbb{R}^{d}$, multiplied by a random sign $S_i$ ($1\le i\le n$), and corrupted by isotropic standard Gaussian noise.
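In code, the observation model reads $Y_i = S_i\theta_* + Z_i$ with $Z_i \sim \mathcal{N}(0, I_d)$, and can be simulated as follows (a minimal sketch; the moment-based estimate of $\|\theta_*\|^2$ at the end is a generic illustration, not the paper's estimator):

import numpy as np

def sample_signed_model(theta, n, seed=0):
    # Draw n samples Y_i = S_i * theta + Z_i, where S_i is a uniform random
    # sign and Z_i ~ N(0, I_d), matching the observation model above.
    rng = np.random.default_rng(seed)
    S = rng.choice([-1.0, 1.0], size=n)
    Z = rng.standard_normal((n, theta.shape[0]))
    return S[:, None] * theta + Z

# Generic moment estimate of ||theta||^2 (illustrative, not the paper's
# estimator): E||Y||^2 = ||theta||^2 + d, so subtract the noise trace.
theta = np.array([2.0, -1.0, 0.5])
Y = sample_signed_model(theta, n=20000)
norm2_hat = max((Y ** 2).sum(axis=1).mean() - theta.shape[0], 0.0)
print(norm2_hat, (theta ** 2).sum())    # the two should be close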
no code implementations • 4 Feb 2022 • Tom Norman, Nir Weinberger, Kfir Y. Levy
In this work we go beyond these assumptions and investigate robust regression under a more general set of assumptions: (i) we allow the covariance matrix to be either positive definite or positive semi-definite, (ii) we do not necessarily assume that the features are centered, and (iii) we make no further assumption beyond boundedness (sub-Gaussianity) of the features and the measurement noise.
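A small simulation of data satisfying assumptions (i)-(iii), with plain least squares as a baseline (all names and values are illustrative, and this is not the robust estimator studied in the paper):

import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 5
A = rng.standard_normal((d, d - 2))        # covariance factor of rank d-2,
mu = np.full(d, 3.0)                       # so Cov(X) is PSD but singular (i)
X = mu + rng.standard_normal((n, d - 2)) @ A.T   # features not centered (ii)
w_star = rng.standard_normal(d)
y = X @ w_star + rng.standard_normal(n)    # sub-Gaussian measurement noise (iii)
# Least-squares baseline; lstsq handles the singular design, and since the
# covariance is rank-deficient we compare predictions rather than parameters.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.linalg.norm(X @ (w_hat - w_star)) / np.sqrt(n))   # prediction error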
no code implementations • 29 Mar 2021 • Nir Weinberger, Guy Bresler
For the empirical iteration based on $n$ samples, we show that when initialized at $\theta_{0}=0$, the EM algorithm adaptively achieves the minimax error rate $\tilde{O}\Big(\min\Big\{\frac{1}{(1-2\delta_{*})}\sqrt{\frac{d}{n}},\frac{1}{\|\theta_{*}\|}\sqrt{\frac{d}{n}},\left(\frac{d}{n}\right)^{1/4}\Big\}\Big)$ in no more than $O\Big(\frac{1}{\|\theta_{*}\|(1-2\delta_{*})}\Big)$ iterations (with high probability).
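For concreteness, here is the standard EM iteration for the unbalanced symmetric two-component Gaussian mixture $(1-\delta)\mathcal{N}(\theta_*, I_d) + \delta\,\mathcal{N}(-\theta_*, I_d)$ with known weight $\delta \in (0, 1/2)$, initialized at $\theta_0 = 0$ as in the excerpt (a textbook-style sketch, not the paper's code):

import numpy as np

def em_symmetric_gmm(X, delta, iters=100):
    # EM for X_i ~ (1-delta) N(theta, I_d) + delta N(-theta, I_d) with known
    # mixing weight delta in (0, 1/2), initialized at theta_0 = 0.
    theta = np.zeros(X.shape[1])
    bias = np.log((1 - delta) / delta)
    for _ in range(iters):
        # E-step: posterior probability that sample i has sign S_i = +1.
        logits = np.clip(2 * X @ theta + bias, -30, 30)
        p_plus = 1.0 / (1.0 + np.exp(-logits))
        # M-step: theta <- (1/n) * sum_i (2 p_plus_i - 1) X_i
        theta = ((2 * p_plus - 1)[:, None] * X).mean(axis=0)
    return theta

# Simulate the mixture and recover theta_star (illustrative parameters).
rng = np.random.default_rng(1)
d, n, delta = 10, 50000, 0.3
theta_star = np.zeros(d)
theta_star[0] = 1.5
S = rng.choice([1.0, -1.0], p=[1 - delta, delta], size=n)
X = S[:, None] * theta_star + rng.standard_normal((n, d))
print(np.linalg.norm(em_symmetric_gmm(X, delta) - theta_star))   # small error

Note that for $\delta > 0$ the mixture is unbalanced, so the very first update already moves away from $\theta_0 = 0$: with constant posteriors $\sigma(\log\frac{1-\delta}{\delta}) = 1-\delta$, it yields $\theta_1 = (1-2\delta)\,\bar{X}$, which is nonzero in expectation.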