Search Results for author: Yehuda Dar

Found 11 papers, 1 paper with code

Recovery of Training Data from Overparameterized Autoencoders: An Inverse Problem Perspective

no code implementations · 4 Oct 2023 · Koren Abitbul, Yehuda Dar

In our inverse problem, we use the trained autoencoder to implicitly define a regularizer for the particular training dataset that we aim to retrieve.

Retrieval
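
A minimal sketch of how such an implicit regularizer might be exploited, assuming a pre-trained PyTorch autoencoder `ae` (a hypothetical stand-in, not the paper's unreleased code): training points of a heavily overparameterized autoencoder tend to sit near fixed points of the learned map, so descending the reconstruction residual from random starts can surface candidate training samples.

```python
import torch

def recover_candidates(ae, dim, n_candidates=16, steps=500, lr=0.1):
    """Minimize the reconstruction residual ||ae(x) - x||^2 from random
    initializations; low-residual minimizers are candidate training points.
    `ae` is an assumed pre-trained autoencoder module (hypothetical name)."""
    x = torch.randn(n_candidates, dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((ae(x) - x) ** 2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return x.detach()
```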

Frozen Overparameterization: A Double Descent Perspective on Transfer Learning of Deep Neural Networks

no code implementations · 20 Nov 2022 · Yehuda Dar, Lorenzo Luzi, Richard G. Baraniuk

We study how the generalization behavior of transfer learning is affected by the dataset size in the source and target tasks, the number of transferred layers that are kept frozen in the target DNN training, and the similarity between the source and target tasks.

Image Classification · Transfer Learning
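
For concreteness, a minimal sketch of the frozen-transfer setup studied here (hypothetical, not the paper's code), assuming the source DNN is an `nn.Sequential` whose first `n_frozen` layers are transferred and kept frozen during target training:

```python
import copy
import torch.nn as nn

def transfer_and_freeze(source: nn.Sequential, n_frozen: int) -> nn.Sequential:
    """Copy the trained source network and freeze its first n_frozen
    layers; only the remaining layers receive gradients when the model
    is subsequently trained on the target task."""
    target = copy.deepcopy(source)
    for i, layer in enumerate(target):
        if i < n_frozen:
            for p in layer.parameters():
                p.requires_grad = False
    return target
```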

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning

no code implementations · 6 Sep 2021 · Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk

The rapid recent progress in machine learning (ML) has raised a number of scientific questions that challenge the longstanding dogma of the field.

Double Descent and Other Interpolation Phenomena in GANs

no code implementations · 7 Jun 2021 · Lorenzo Luzi, Yehuda Dar, Richard Baraniuk

We show that overparameterization can improve generalization performance and accelerate the training process.

The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression

no code implementations · 9 Mar 2021 · Yehuda Dar, Daniel LeJeune, Richard G. Baraniuk

We define a transfer learning approach to the target task as a linear regression optimization with a regularization on the distance between the to-be-learned target parameters and the already-learned source parameters.

Philosophy · regression · +1
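
The approach as defined in the abstract has a closed form; a small numpy sketch (an illustration under that definition, not the authors' code):

```python
import numpy as np

def transfer_ridge(X, y, beta_src, lam):
    """Target-task least squares with a penalty on the distance to the
    already-learned source parameters:
        min_b ||y - X b||^2 + lam * ||b - beta_src||^2
    Setting the gradient to zero gives
        b = (X^T X + lam * I)^{-1} (X^T y + lam * beta_src)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d),
                           X.T @ y + lam * beta_src)
```

With lam = 0 this reduces to ordinary least squares on the target data alone; as lam grows, the solution is pulled toward the source parameters, which is exactly the win-or-lose trade-off the case studies examine.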

Regularized Compression of MRI Data: Modular Optimization of Joint Reconstruction and Coding

no code implementations · 8 Oct 2020 · Veronica Corona, Yehuda Dar, Guy Williams, Carola-Bibiane Schönlieb

In this work we propose a framework for joint optimization of the MRI reconstruction and lossy compression, producing compressed representations of medical images that achieve improved trade-offs between quality and bit-rate.

Data Compression · Image Compression · +2

Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks

no code implementations · 12 Jun 2020 · Yehuda Dar, Richard G. Baraniuk

We analytically characterize the generalization error of the target task in terms of the salient factors in the transfer learning architecture, i.e., the number of examples available, the number of (free) parameters in each of the tasks, the number of parameters transferred from the source to target task, and the relation between the two tasks.

regression · Transfer Learning

Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors

no code implementations · ICML 2020 · Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk

We study the linear subspace fitting problem in the overparameterized setting, where the estimated subspace can perfectly interpolate the training examples.
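
As a quick illustration of the interpolation regime the paper analyzes (a hypothetical regression example, not the paper's subspace-fitting code): when parameters outnumber examples, the pseudoinverse picks the minimum-norm solution, which fits the training data exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                        # n examples, d parameters (d > n)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

beta = np.linalg.pinv(X) @ y          # minimum-norm interpolating solution
print(np.allclose(X @ beta, y))       # True: zero training error
```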

Postprocessing of Compressed Images via Sequential Denoising

no code implementations · 30 Oct 2015 · Yehuda Dar, Alfred M. Bruckstein, Michael Elad, Raja Giryes

In this work we propose a novel postprocessing technique for compression-artifact reduction.

Image Denoising
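
A toy sketch of the "sequential denoising" idea in the title (hypothetical: a Gaussian filter stands in for the actual denoiser, and the decreasing-strength schedule is illustrative, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def postprocess(decoded, iters=5, sigma0=1.0, decay=0.7):
    """Reduce compression artifacts by applying a denoiser repeatedly,
    with decreasing strength, to the decompressed image."""
    x = np.asarray(decoded, dtype=float)
    for k in range(iters):
        x = gaussian_filter(x, sigma=sigma0 * decay**k)
    return x
```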

Motion-Compensated Coding and Frame-Rate Up-Conversion: Models and Analysis

no code implementations · 12 Apr 2014 · Yehuda Dar, Alfred M. Bruckstein

In this paper, we study the effect of frame-rate and compression bit-rate on block-based ME and MC as commonly utilized in inter-frame coding and frame-rate up-conversion (FRUC).

Motion Estimation
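
For readers unfamiliar with the block-based ME referenced here, a minimal exhaustive-search sketch (illustrative only; practical codecs use much faster search strategies):

```python
import numpy as np

def block_matching(ref, cur, block=8, search=4):
    """Exhaustive-search block-based motion estimation: for each block
    of the current frame, find the offset in the reference frame that
    minimizes the sum of absolute differences (SAD) within a
    +/- search window."""
    H, W = cur.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            blk = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if 0 <= y0 <= H - block and 0 <= x0 <= W - block:
                        cand = ref[y0:y0 + block, x0:x0 + block].astype(int)
                        sad = np.abs(blk - cand).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors
```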
