no code implementations • 14 Oct 2024 • Sharon Hendy, Yehuda Dar
Accordingly, our results on image datasets show that TL-PCA improves the representation of test data for dimensionality reduction, whether the learned subspace dimension is lower or higher than the number of target data examples.
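A minimal sketch of the evaluation setup implied above, assuming images flattened into vectors: fit a k-dimensional subspace to the target training data and score it by reconstruction error on held-out test data. This uses plain PCA via SVD rather than the paper's TL-PCA objective, and the shapes and k are illustrative assumptions.

```python
import numpy as np

def fit_pca_subspace(X_train, k):
    """Fit a k-dimensional principal subspace to the rows of X_train (plain PCA)."""
    mean = X_train.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    return mean, Vt[:k]                      # (d,), (k, d)

def reconstruction_error(X, mean, basis):
    """Average squared error after projecting X onto the learned subspace."""
    Xc = X - mean
    X_hat = Xc @ basis.T @ basis
    return np.mean(np.sum((Xc - X_hat) ** 2, axis=1))

# Illustrative regime: few target examples (n=20) in a high dimension (d=100)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 100))
X_test = rng.normal(size=(50, 100))
mean, basis = fit_pca_subspace(X_train, k=10)
print(reconstruction_error(X_test, mean, basis))
```

Note that plain PCA can extract at most n principal directions from n training examples, which is exactly why the regime of a subspace dimension higher than the number of target examples calls for something beyond it.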
no code implementations • 27 May 2024 • Yuval Sharon, Yehuda Dar
We explore the representational similarity of DNN layers, comparing each layer's representations with its own representations at different points throughout the training process.
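The snippet does not name the similarity measure; one widely used choice for comparing representations is linear centered kernel alignment (CKA). A minimal sketch, comparing a layer's activations at two training checkpoints (the shapes are illustrative assumptions):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation matrices.

    X, Y: (n_examples, n_features_*) activations of the same examples,
    e.g., one layer's representations at two different training epochs.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, 'fro') ** 2
    normalization = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return hsic / normalization

# Illustrative use: a layer's activations early vs. late in training
rng = np.random.default_rng(0)
acts_epoch_1 = rng.normal(size=(256, 128))
acts_epoch_50 = acts_epoch_1 + 0.1 * rng.normal(size=(256, 128))
print(linear_cka(acts_epoch_1, acts_epoch_50))
```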
no code implementations • 4 Oct 2023 • Koren Abitbul, Yehuda Dar
Given a degraded training image, we define the recovery of the original training image as an inverse problem and formulate it as an optimization task.
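A generic sketch of that formulation, assuming a known linear degradation operator and a simple quadratic regularizer pulling the solution toward an initial guess; the paper's actual degradation model and prior are not specified in this snippet.

```python
import numpy as np

def recover(y, degrade, x0, lam=0.1, lr=0.01, steps=2000):
    """Gradient descent on ||degrade @ x - y||^2 + lam * ||x - x0||^2.

    y       : observed degraded training image (flattened)
    degrade : assumed-known linear degradation operator, given as a matrix
    x0      : rough initial guess used both as starting point and prior
    """
    x = x0.copy()
    for _ in range(steps):
        grad = 2 * degrade.T @ (degrade @ x - y) + 2 * lam * (x - x0)
        x = x - lr * grad
    return x

# Toy example: blur-like averaging operator on a 1D "image"
rng = np.random.default_rng(0)
n = 64
A = 0.5 * np.eye(n) + 0.25 * np.eye(n, k=1) + 0.25 * np.eye(n, k=-1)
x_true = rng.normal(size=n)
y = A @ x_true
x_hat = recover(y, A, x0=np.zeros(n))
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```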
no code implementations • 20 Nov 2022 • Yehuda Dar, Lorenzo Luzi, Richard G. Baraniuk
We study how the generalization behavior of transfer learning is affected by the dataset size in the source and target tasks, the number of transferred layers that are kept frozen in the target DNN training, and the similarity between the source and target tasks.
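A minimal PyTorch sketch of the "frozen transferred layers" setup: copy source-trained weights into the target network and disable gradients for the first few modules. The MLP architecture, the number of frozen layers, and the optimizer settings here are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def make_mlp():
    # Small illustrative network; the paper's architectures are not given here.
    return nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 10),
    )

def transfer_and_freeze(source_model, num_frozen_modules):
    """Copy source weights into a target model and freeze its first modules."""
    target_model = make_mlp()
    target_model.load_state_dict(source_model.state_dict())
    for i, module in enumerate(target_model):
        if i < num_frozen_modules:
            for p in module.parameters():
                p.requires_grad = False   # kept frozen during target training
    return target_model

source = make_mlp()   # assume this was already trained on the source task
target = transfer_and_freeze(source, num_frozen_modules=2)
optimizer = torch.optim.SGD(
    (p for p in target.parameters() if p.requires_grad), lr=0.01
)
```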
1 code implementation • CVPR 2022 • Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping Yeh-Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, Tom Goldstein
We also use decision boundary methods to visualize double descent phenomena.
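A minimal sketch of decision-boundary visualization by evaluating a trained classifier on a dense 2D grid. The paper studies image classifiers, where the 2D plane would be a slice through input space; here a toy 2D dataset and an sklearn MLP stand in for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Toy 2D data and a small classifier (both illustrative)
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                    random_state=0).fit(X, y)

# Evaluate the classifier on a dense grid and plot the predicted-label regions
xx, yy = np.meshgrid(np.linspace(-2, 3, 300), np.linspace(-1.5, 2, 300))
grid = np.c_[xx.ravel(), yy.ravel()]
zz = clf.predict(grid).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor='k')
plt.title("Decision boundary via grid evaluation")
plt.show()
```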
no code implementations • 6 Sep 2021 • Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk
The rapid recent progress in machine learning (ML) has raised a number of scientific questions that challenge the longstanding dogma of the field.
no code implementations • 7 Jun 2021 • Lorenzo Luzi, Yehuda Dar, Richard Baraniuk
We show that overparameterization can improve generalization performance and accelerate the training process.
no code implementations • 9 Mar 2021 • Yehuda Dar, Daniel LeJeune, Richard G. Baraniuk
We define a transfer learning approach to the target task as a linear regression optimization with a regularization on the distance between the to-be-learned target parameters and the already-learned source parameters.
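That objective has a closed-form solution. A minimal numpy sketch, with illustrative data sizes and regularization weight:

```python
import numpy as np

def transfer_ridge(X, y, theta_src, lam):
    """Solve  min_theta ||y - X theta||^2 + lam * ||theta - theta_src||^2.

    Setting the gradient to zero gives the closed form
        theta = (X^T X + lam I)^{-1} (X^T y + lam theta_src),
    where lam controls how strongly the target parameters are pulled toward
    the already-learned source parameters.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * theta_src)

# Illustrative target task with few examples and a related source solution
rng = np.random.default_rng(0)
n, d = 30, 100
theta_src = rng.normal(size=d)
theta_true = theta_src + 0.1 * rng.normal(size=d)   # target close to source
X = rng.normal(size=(n, d))
y = X @ theta_true + 0.1 * rng.normal(size=n)
theta_hat = transfer_ridge(X, y, theta_src, lam=1.0)
print(np.linalg.norm(theta_hat - theta_true))
```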
no code implementations • 8 Oct 2020 • Veronica Corona, Yehuda Dar, Guy Williams, Carola-Bibiane Schönlieb
In this work we propose a framework for joint optimization of the MRI reconstruction and lossy compression, producing compressed representations of medical images that achieve improved trade-offs between quality and bit-rate.
no code implementations • 12 Jun 2020 • Yehuda Dar, Richard G. Baraniuk
We analytically characterize the generalization error of the target task in terms of the salient factors in the transfer learning architecture, i.e., the number of examples available, the number of (free) parameters in each of the tasks, the number of parameters transferred from the source to the target task, and the relation between the two tasks.
no code implementations • ICML 2020 • Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk
We study the linear subspace fitting problem in the overparameterized setting, where the estimated subspace can perfectly interpolate the training examples.
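A minimal sketch of the interpolation phenomenon: when the fitted subspace dimension is at least the number of training examples, the subspace contains every training point, so training reconstruction error is zero while test error is not. Plain uncentered SVD is used here purely for illustration; the paper's estimators and settings may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20, 100, 20          # subspace dimension k equals the number of examples
X_train = rng.normal(size=(n, d))
X_test = rng.normal(size=(50, d))

# Principal subspace of the (uncentered) training data via SVD
_, _, Vt = np.linalg.svd(X_train, full_matrices=False)
basis = Vt[:k]                 # (k, d) orthonormal rows spanning the subspace

def recon_error(X):
    X_hat = X @ basis.T @ basis
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))

# With k >= n the subspace contains every training example exactly (zero
# training error, i.e., interpolation), while test error reveals how well
# the subspace represents new data.
print("train error:", recon_error(X_train))   # ~0 up to numerical precision
print("test error: ", recon_error(X_test))
```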
no code implementations • 30 Oct 2015 • Yehuda Dar, Alfred M. Bruckstein, Michael Elad, Raja Giryes
In this work we propose a novel postprocessing technique for compression-artifact reduction.
no code implementations • 12 Apr 2014 • Yehuda Dar, Alfred M. Bruckstein
In this paper, we study the effect of frame-rate and compression bit-rate on block-based motion estimation (ME) and motion compensation (MC), as commonly utilized in inter-frame coding and frame-rate up-conversion (FRUC).
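A minimal sketch of block-based ME and MC via full-search block matching with a sum-of-absolute-differences (SAD) criterion; the block size, search range, and toy frames are illustrative assumptions.

```python
import numpy as np

def block_match(ref, cur, block=8, search=7):
    """Full-search block matching: for each block of the current frame, find
    the displacement within a +/- search window that minimizes SAD against
    the reference frame; motion compensation copies the best-matching block."""
    h, w = cur.shape
    compensated = np.zeros_like(cur)
    motion = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if y0 < 0 or x0 < 0 or y0 + block > h or x0 + block > w:
                        continue
                    ref_blk = ref[y0:y0 + block, x0:x0 + block]
                    sad = np.abs(cur_blk.astype(int) - ref_blk.astype(int)).sum()
                    if sad < best_sad:
                        best, best_sad = (dy, dx), sad
            dy, dx = best
            compensated[by:by + block, bx:bx + block] = \
                ref[by + dy:by + dy + block, bx + dx:bx + dx + block]
            motion.append((by, bx, dy, dx))
    return motion, compensated

# Toy frames: the "current" frame is the reference shifted by 2 pixels
ref = np.random.default_rng(0).integers(0, 256, size=(64, 64))
cur = np.roll(ref, shift=(2, 2), axis=(0, 1))
motion, comp = block_match(ref, cur)
print("mean |residual|:", np.abs(comp.astype(int) - cur.astype(int)).mean())
```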