Search Results for author: Yehuda Dar

Found 13 papers, 1 paper with code

TL-PCA: Transfer Learning of Principal Component Analysis

no code implementations • 14 Oct 2024 • Sharon Hendy, Yehuda Dar

Our results for image datasets show that TL-PCA improves the representation of test data for dimensionality reduction, whether the learned subspace dimension is lower or higher than the number of target data examples.

Dimensionality Reduction Transfer Learning
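
The snippet above does not spell out the transfer mechanism. As a minimal sketch, one plausible way to bias a target PCA toward a source subspace is to blend the target sample covariance with the projector of the source principal subspace; the blend and the weight alpha are assumptions for illustration, not the paper's TL-PCA formulation:

import numpy as np

def tl_pca_sketch(X_target, U_source, k, alpha=0.5):
    """Hypothetical transfer-flavored PCA: shrink the target covariance
    toward the projector of a source principal subspace, then take the
    top-k eigenvectors. Illustrative only, not the TL-PCA algorithm."""
    Xc = X_target - X_target.mean(axis=0)       # center the target data
    C_target = Xc.T @ Xc / max(len(Xc) - 1, 1)  # target sample covariance
    P_source = U_source @ U_source.T            # source subspace projector
    C_blend = (1 - alpha) * C_target + alpha * P_source
    eigvals, eigvecs = np.linalg.eigh(C_blend)  # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]              # top-k principal directions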

How Do the Architecture and Optimizer Affect Representation Learning? On the Training Dynamics of Representations in Deep Neural Networks

no code implementations • 27 May 2024 • Yuval Sharon, Yehuda Dar

We explore the representational similarity of DNN layers, comparing each layer with its own representations throughout the training process.

Memorization Representation Learning
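
The similarity measure is not named in the snippet; linear CKA (centered kernel alignment) is a common choice for comparing layer representations, so assuming it here, a sketch for tracking one layer against its own representations across training checkpoints:

import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape
    (n_examples, n_features). A standard similarity measure; that the
    paper uses CKA is an assumption."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Yc.T @ Xc, 'fro') ** 2
    den = np.linalg.norm(Xc.T @ Xc, 'fro') * np.linalg.norm(Yc.T @ Yc, 'fro')
    return num / den

# e.g. similarity of a layer's features at checkpoint t to its final features:
# sims = [linear_cka(feats[t], feats[-1]) for t in range(num_checkpoints)]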

How Much Training Data is Memorized in Overparameterized Autoencoders? An Inverse Problem Perspective on Memorization Evaluation

no code implementations • 4 Oct 2023 • Koren Abitbul, Yehuda Dar

Given a degraded training image, we define the recovery of the original training image as an inverse problem and formulate it as an optimization task.

Memorization Retrieval
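
A sketch of that formulation under stated assumptions: treat the degradation as a known operator, descend on the image so its degraded version matches the given one, and use the trained autoencoder as an implicit prior (the penalty form, weight mu, and optimizer are assumptions, not the paper's exact objective):

import torch

def recover_sketch(autoencoder, y, degrade, steps=500, lr=0.01, mu=1.0):
    """Illustrative inverse-problem recovery of a training image from its
    degraded version y, with the trained autoencoder as a prior."""
    x = y.clone().requires_grad_(True)   # initialize from the degraded image
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((degrade(x) - y) ** 2).mean()                  # data fidelity
        loss = loss + mu * ((autoencoder(x) - x) ** 2).mean()  # autoencoder prior
        loss.backward()
        opt.step()
    return x.detach()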

Frozen Overparameterization: A Double Descent Perspective on Transfer Learning of Deep Neural Networks

no code implementations • 20 Nov 2022 • Yehuda Dar, Lorenzo Luzi, Richard G. Baraniuk

We study how the generalization behavior of transfer learning is affected by the dataset size in the source and target tasks, the number of transferred layers that are kept frozen in the target DNN training, and the similarity between the source and target tasks.

Image Classification Transfer Learning
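
Keeping transferred layers frozen is implemented the usual way in PyTorch: copy the source weights into the target network and disable gradients for the first k layers. A minimal sketch (the sequential architecture is a placeholder assumption):

import torch.nn as nn

def freeze_first_k(model: nn.Sequential, k: int):
    """Freeze the first k layers of a target network so they keep the
    transferred source weights during target-task training."""
    for layer in list(model.children())[:k]:
        for p in layer.parameters():
            p.requires_grad = False  # excluded from the target-task updates

# e.g. target = copy.deepcopy(source_model); freeze_first_k(target, k=3)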

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning

no code implementations • 6 Sep 2021 • Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk

The rapid recent progress in machine learning (ML) has raised a number of scientific questions that challenge the longstanding dogma of the field.

Double Descent and Other Interpolation Phenomena in GANs

no code implementations • 7 Jun 2021 • Lorenzo Luzi, Yehuda Dar, Richard G. Baraniuk

We show that overparameterization can improve generalization performance and accelerate the training process.

The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression

no code implementations • 9 Mar 2021 • Yehuda Dar, Daniel LeJeune, Richard G. Baraniuk

We define a transfer learning approach to the target task as a linear regression optimization with a regularization on the distance between the to-be-learned target parameters and the already-learned source parameters.

Philosophy regression +1
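
The snippet's formulation is min over theta of ||y - X theta||^2 + lam * ||theta - theta_source||^2, which has a closed-form solution (the symbol lam for the regularization weight is our notation):

import numpy as np

def transfer_ridge(X, y, theta_source, lam):
    """Regularized transfer for linear regression: penalize the distance
    between the target parameters and the already-learned source ones.
    Closed form: theta = (X^T X + lam I)^{-1} (X^T y + lam theta_source)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d),
                           X.T @ y + lam * theta_source)

As lam -> 0 this reduces to plain least squares on the target data, and as lam -> infinity it simply reuses the source parameters; the win-or-lose behavior in the title plays out between these extremes.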

Regularized Compression of MRI Data: Modular Optimization of Joint Reconstruction and Coding

no code implementations • 8 Oct 2020 • Veronica Corona, Yehuda Dar, Guy Williams, Carola-Bibiane Schönlieb

In this work we propose a framework for joint optimization of MRI reconstruction and lossy compression, producing compressed representations of medical images that achieve improved trade-offs between quality and bit-rate.

Data Compression Image Compression +2
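
The paper's algorithm is not detailed in the snippet; one plausible modular scheme, shown purely as an assumption-laden sketch, alternates a data-fidelity gradient step on the MRI measurements with a compress/decompress pass used as the coding module (A, AT, compress, and decompress are generic placeholder callables):

def joint_recon_code_sketch(y, A, AT, compress, decompress, iters=20, step=0.1):
    """Alternating sketch: gradient steps on the fidelity ||A x - y||^2
    (A being a generic MRI forward operator, AT its adjoint), interleaved
    with a lossy compress/decompress pass acting as the coding module."""
    x = AT(y)                        # naive initial reconstruction
    for _ in range(iters):
        x = x - step * AT(A(x) - y)  # data-fidelity gradient step
        x = decompress(compress(x))  # modular coding step
    return x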

Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks

no code implementations • 12 Jun 2020 • Yehuda Dar, Richard G. Baraniuk

We analytically characterize the generalization error of the target task in terms of the salient factors in the transfer learning architecture, i.e., the number of examples available, the number of (free) parameters in each of the tasks, the number of parameters transferred from the source to the target task, and the relation between the two tasks.

regression Transfer Learning
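
The analytic characterization itself is not reproduced here, but the setup is easy to probe numerically. A toy sketch (our construction, not the paper's model): the first m target parameters are copied from a source estimate and kept fixed, the remaining d - m are fit by minimum-norm least squares, and sweeping d traces a double-descent-shaped test error with a peak near the interpolation threshold d - m = n:

import numpy as np

rng = np.random.default_rng(0)

def target_test_error(n, d, m, theta_src_hat, theta_true, n_test=2000):
    """Toy transfer experiment: m parameters transferred from the source,
    d - m free parameters fit by min-norm least squares (np.linalg.lstsq
    returns the minimum-norm solution in the underdetermined regime)."""
    X = rng.standard_normal((n, d))
    y = X @ theta_true + 0.1 * rng.standard_normal(n)
    y_res = y - X[:, :m] @ theta_src_hat[:m]   # remove the transferred part
    theta_free = np.linalg.lstsq(X[:, m:], y_res, rcond=None)[0]
    theta_hat = np.concatenate([theta_src_hat[:m], theta_free])
    X_test = rng.standard_normal((n_test, d))
    return np.mean((X_test @ (theta_hat - theta_true)) ** 2)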

Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors

no code implementations • ICML 2020 • Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk

We study the linear subspace fitting problem in the overparameterized setting, where the estimated subspace can perfectly interpolate the training examples.
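
In the unsupervised, orthonormal case, subspace fitting is a truncated SVD; the sketch below shows that standard estimator and the interpolation the snippet mentions (the paper also covers supervised and non-orthonormal variants not shown here):

import numpy as np

def fit_subspace(X, k):
    """Orthonormal subspace fit via SVD; rows of X are training examples.
    Returns a (features, k) orthonormal basis U."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T

# Overparameterized regime: once k >= rank(X) (at most the number of
# examples), X @ U @ U.T == X, i.e. the subspace perfectly interpolates
# the training examples.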

Postprocessing of Compressed Images via Sequential Denoising

no code implementations • 30 Oct 2015 • Yehuda Dar, Alfred M. Bruckstein, Michael Elad, Raja Giryes

In this work we propose a novel postprocessing technique for compression-artifact reduction.

Image Denoising
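
A hypothetical reading of "sequential denoising" as a postprocessing loop, with the denoiser choice and strength schedule as assumptions rather than the paper's algorithm:

import numpy as np

def sequential_denoise(decoded, denoise, strengths=(0.08, 0.04, 0.02)):
    """Apply a denoiser to the decompressed image repeatedly with a
    decreasing strength schedule to reduce compression artifacts.
    `denoise` is a generic placeholder, e.g. a TV or Gaussian denoiser."""
    x = np.asarray(decoded, dtype=float)
    for s in strengths:
        x = denoise(x, s)
    return x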

Motion-Compensated Coding and Frame-Rate Up-Conversion: Models and Analysis

no code implementations • 12 Apr 2014 • Yehuda Dar, Alfred M. Bruckstein

In this paper, we study the effect of frame-rate and compression bit-rate on block-based motion estimation (ME) and motion compensation (MC) as commonly utilized in inter-frame coding and frame-rate up-conversion (FRUC).

Motion Estimation Prediction
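
For reference, the block-based ME the snippet analyzes is classically a full search over motion vectors minimizing the sum of absolute differences (SAD); a textbook sketch, not code from the paper:

import numpy as np

def block_match(prev, cur, bi, bj, B=16, R=8):
    """Full-search block matching: find the motion vector (dy, dx) within
    a +/-R window minimizing the SAD between the current BxB block at
    (bi, bj) and the previous frame."""
    block = cur[bi:bi + B, bj:bj + B].astype(float)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            i, j = bi + dy, bj + dx
            if i < 0 or j < 0 or i + B > prev.shape[0] or j + B > prev.shape[1]:
                continue
            sad = np.abs(block - prev[i:i + B, j:j + B]).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv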
