1 code implementation • ICML 2020 • Reinhard Heckel, Mahdi Soltanolkotabi
For signal recovery from a few measurements, however, untrained convolutional networks have an intriguing self-regularizing property: Even though the network can perfectly fit any image, it recovers a natural image from few measurements when trained with gradient descent until convergence.
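A minimal sketch of this recovery procedure, assuming a generic measurement matrix `A`, measurements `y`, and an untrained convolutional generator `G` with a fixed input `z`; all names are illustrative, not the paper's code:

```python
import torch

# Hypothetical untrained convolutional generator G with fixed input z;
# only the weights of G are optimized. A is a linear measurement matrix
# whose column count matches the number of pixels produced by G(z).
def recover(G, z, A, y, lr=1e-2, steps=5000):
    """Fit the untrained network so that A @ G(z) matches the measurements y."""
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = G(z).flatten()                # candidate image, flattened
        loss = ((A @ x - y) ** 2).sum()   # measurement-consistency loss
        loss.backward()
        opt.step()
    return G(z).detach()
```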
no code implementations • 24 Dec 2024 • Anselm Krainovic, Stefan Ruschke, Reinhard Heckel
Towards resolving the sensitivity to resolution shifts, we investigate state-of-the-art model-based approaches, including Gaussian splatting, neural representations, and infinite-dimensional diffusion models, as well as a simple data-centric approach of training the diffusion model on several resolutions.
no code implementations • 3 Dec 2024 • Youssef Mansour, Reinhard Heckel
We investigate biases in pretraining datasets for large language models (LLMs) through dataset classification experiments.
no code implementations • 14 Sep 2024 • Tobit Klug, Kun Wang, Stefan Ruschke, Reinhard Heckel
In this paper, we propose a deep learning-based test-time-training method for accurate motion estimation.
3 code implementations • 17 Jun 2024 • Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, Vaishaal Shankar
We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models.
no code implementations • 24 Apr 2024 • Reinhard Heckel, Mathews Jacob, Akshay Chaudhari, Or Perlman, Efrat Shimron
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology.
no code implementations • 31 Mar 2024 • Youssef Mansour, Reinhard Heckel
The network is a simple shallow network with an efficient block that implements global additive multidimensional averaging operations.
1 code implementation • 13 Mar 2024 • Samir Yitzhak Gadre, Georgios Smyrnis, Vaishaal Shankar, Suchin Gururangan, Mitchell Wortsman, Rulin Shao, Jean Mercat, Alex Fang, Jeffrey Li, Sedrick Keh, Rui Xin, Marianna Nezhurina, Igor Vasiljevic, Jenia Jitsev, Luca Soldaini, Alexandros G. Dimakis, Gabriel Ilharco, Pang Wei Koh, Shuran Song, Thomas Kollar, Yair Carmon, Achal Dave, Reinhard Heckel, Niklas Muennighoff, Ludwig Schmidt
Second, we relate the perplexity of a language model to its downstream task performance by proposing a power law.
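A toy illustration of fitting such a power law with scipy; the perplexity/error pairs and the exact functional form below are placeholders, not the paper's results:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: validation perplexities and downstream error rates
ppl = np.array([12.0, 9.5, 8.1, 7.2, 6.6])
err = np.array([0.62, 0.55, 0.50, 0.47, 0.45])

def power_law(p, a, b, c):
    return a * p ** b + c          # err ~ a * perplexity^b + c

params, _ = curve_fit(power_law, ppl, err, p0=[0.1, 1.0, 0.3], maxfev=10000)
print("fitted (a, b, c):", params)
```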
1 code implementation • 16 Dec 2023 • Kang Lin, Reinhard Heckel
Furthermore, training on such a diverse dataset does not compromise in-distribution performance, i.e., a model trained on diverse data yields in-distribution performance at least as good as models trained on the narrower individual distributions.
1 code implementation • 9 Nov 2023 • Simon Wiedemann, Reinhard Heckel
Cryogenic electron tomography is a technique for imaging biological samples in 3D.
no code implementations • 5 Aug 2023 • Stefan Bamberger, Reinhard Heckel, Felix Krahmer
Furthermore, we also consider the approximation of general positive homogeneous functions with neural networks.
1 code implementation • 5 Aug 2023 • Frederic Wang, Han Qi, Alfredo De Goyeneche, Reinhard Heckel, Michael Lustig, Efrat Shimron
In each training iteration, rather than using the fully sampled k-space for computing gradients, we use only a small k-space portion.
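A hedged sketch of what one such training iteration might look like, assuming a PyTorch reconstruction `model`, an undersampled input `x_under`, and the fully sampled k-space `kspace_full`; the random subsampling shown here is a simplification of the paper's scheme:

```python
import torch

def training_step(model, x_under, kspace_full, mask_frac=0.1):
    """One hypothetical step: compare the FFT of the reconstruction against
    only a small random fraction of the fully sampled k-space, rather than
    all of it, to cut memory and compute."""
    pred = model(x_under)                        # reconstructed image
    k_pred = torch.fft.fft2(pred)
    # select a small random subset of k-space entries for this step
    sel = torch.rand_like(k_pred.real) < mask_frac
    loss = (torch.abs(k_pred - kspace_full) ** 2)[sel].mean()
    loss.backward()
    return loss
```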
1 code implementation • 11 May 2023 • Johannes F. Kunz, Stefan Ruschke, Reinhard Heckel
In this paper, we propose a reconstruction approach based on representing the beating heart with an implicit neural network and fitting the network so that the representation of the heart is consistent with the measurements.
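A minimal sketch of such a representation, assuming a plain coordinate MLP and a generic, user-supplied measurement operator `forward_op`; both are illustrative stand-ins for the paper's architecture and MRI forward model:

```python
import torch
import torch.nn as nn

# A small coordinate MLP: (x, y, t) -> image intensity. The network is fit
# so that simulated measurements of the represented movie match the
# acquired data; here we only sketch the representation and a generic
# measurement-consistency loss.
mlp = nn.Sequential(
    nn.Linear(3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

def consistency_loss(coords, measurements, forward_op):
    """coords: (N, 3) spatio-temporal points; forward_op: hypothetical
    operator mapping predicted intensities to measurement space."""
    intensities = mlp(coords).squeeze(-1)
    return ((forward_op(intensities) - measurements) ** 2).mean()
```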
no code implementations • CVPR 2023 • Youssef Mansour, Reinhard Heckel
In this work we show that a simple 2-layer network, without any training data or knowledge of the noise distribution, can enable high-quality image denoising at low computational cost.
1 code implementation • 20 Oct 2022 • Daniel LeJeune, Jiayu Liu, Reinhard Heckel
Machine learning systems are often applied to data that is drawn from a different distribution than the training distribution.
1 code implementation • 9 Oct 2022 • Fatih Furkan Yilmaz, Reinhard Heckel
To provide such sets, conformal predictors often estimate a cutoff threshold for the probability estimates based on a calibration set.
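For reference, a standard split-conformal calibration routine of the kind the abstract describes; this is the textbook construction, not necessarily the paper's exact variant:

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal: score = 1 - probability of the true class;
    the cutoff is the ceil((n+1)(1-alpha))/n empirical quantile."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def prediction_set(test_probs, tau):
    """Include every class whose score 1 - p falls below the threshold."""
    return np.where(1.0 - test_probs <= tau)[0]
```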
1 code implementation • 27 Sep 2022 • Tobit Klug, Reinhard Heckel
Current methods are trained on only a few hundred or a few thousand images, as opposed to the millions of examples deep networks are trained on in other domains.
no code implementations • 29 Jun 2022 • Jonathan Scarlett, Reinhard Heckel, Miguel R. D. Rodrigues, Paul Hand, Yonina C. Eldar
In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution.
1 code implementation • 3 Jun 2022 • Fatih Furkan Yilmaz, Reinhard Heckel
The risk of overparameterized models, in particular deep neural networks, is often double-descent shaped as a function of the model size.
1 code implementation • 14 Apr 2022 • Mohammad Zalbagi Darestani, Jiayu Liu, Reinhard Heckel
We show that for four natural distribution shifts, this method essentially closes the distribution shift performance gap for state-of-the-art architectures for accelerated MRI.
1 code implementation • 4 Feb 2022 • Youssef Mansour, Kang Lin, Reinhard Heckel
Similar to the original MLP-mixer, the image-to-image MLP-mixer is based exclusively on MLPs operating on linearly-transformed image patches.
1 code implementation • 9 Dec 2021 • Reinhard Heckel
We propose a simple approach to overcome this: Regularizing training of a new task with sketches of the Jacobian matrix of past data.
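A rough sketch of how such a regularizer could be wired up, assuming `SJ` is a precomputed sketch (random projection) of the Jacobian of the network outputs on past data; this is an interpretation of the abstract, not the paper's implementation:

```python
import torch

def continual_loss(model, new_task_loss, theta_old, SJ, lam=1.0):
    """Hypothetical regularizer: SJ is a sketched Jacobian evaluated at the
    old parameters theta_old (a flat, detached tensor). Penalizing
    ||SJ (theta - theta_old)||^2 approximately preserves the network's
    outputs on the old task while training on the new one."""
    theta = torch.cat([p.flatten() for p in model.parameters()])
    penalty = (SJ @ (theta - theta_old)).pow(2).sum()
    return new_task_loss + lam * penalty
```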
1 code implementation • 24 Sep 2021 • Samuel Rey, Santiago Segarra, Reinhard Heckel, Antonio G. Marques
This paper introduces two untrained graph neural network architectures for graph signal denoising, provides theoretical guarantees for their denoising capabilities in a simple setup, and numerically validates the theoretical results in more general scenarios.
2 code implementations • NeurIPS 2021 • Konstantin Donhauser, Alexandru Ţifrea, Michael Aerni, Reinhard Heckel, Fanny Yang
Numerous recent works show that overparameterization implicitly reduces variance for min-norm interpolators and max-margin classifiers.
2 code implementations • 28 Jun 2021 • Zalan Fabian, Reinhard Heckel, Mahdi Soltanolkotabi
Deep neural networks have emerged as very successful tools for image restoration and reconstruction tasks.
1 code implementation • ICML Workshop AML 2021 • Konstantin Donhauser, Alexandru Tifrea, Michael Aerni, Reinhard Heckel, Fanny Yang
Numerous recent works show that overparameterization implicitly reduces variance, suggesting vanishing benefits for explicit regularization in high dimensions.
1 code implementation • 11 Feb 2021 • Mohammad Zalbagi Darestani, Akshay S. Chaudhari, Reinhard Heckel
To understand the sensitivity to such perturbations, we measure the robustness of different approaches to image reconstruction, including trained and untrained neural networks as well as traditional sparsity-based methods.
no code implementations • 1 Jan 2021 • Zalan Fabian, Reinhard Heckel, Mahdi Soltanolkotabi
Inspired by the success of Data Augmentation (DA) for classification problems, in this paper, we propose a pipeline for data augmentation for image reconstruction tasks arising in medical imaging and explore its effectiveness at reducing the required training data in a variety of settings.
1 code implementation • ICLR 2021 • Reinhard Heckel, Fatih Furkan Yilmaz
Over-parameterized models, such as large deep networks, often exhibit a double descent phenomenon, where, as a function of model size, the error first decreases, then increases, and finally decreases again.
4 code implementations • 6 Jul 2020 • Mohammad Zalbagi Darestani, Reinhard Heckel
Convolutional Neural Networks (CNNs) are highly effective for image reconstruction problems.
1 code implementation • 7 May 2020 • Reinhard Heckel, Mahdi Soltanolkotabi
For signal recovery from a few measurements, however, untrained convolutional networks have an intriguing self-regularizing property: Even though the network can perfectly fit any image, it recovers a natural image from few measurements when trained with gradient descent until convergence.
no code implementations • 23 Jan 2020 • Max Daniels, Paul Hand, Reinhard Heckel
In this paper, we demonstrate a method for reducing the representation error of GAN priors by modeling images as the linear combination of a GAN prior with a Deep Decoder.
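A hedged sketch of the combined prior, assuming a fixed pretrained generator `G`, a Deep Decoder `D` with fixed input `w`, and a linear measurement matrix `A`; the optimization details are illustrative:

```python
import torch

def invert(G, z, D, w, A, y, alpha=0.1, steps=2000, lr=1e-2):
    """Model the image as G(z) + alpha * D(w): a GAN prior plus a Deep
    Decoder term that absorbs the GAN's representation error. G is fixed
    (only its latent z is optimized), while the Deep Decoder's weights are
    optimized from scratch."""
    z = z.clone().requires_grad_(True)
    opt = torch.optim.Adam([z] + list(D.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = G(z) + alpha * D(w)
        loss = ((A @ x.flatten() - y) ** 2).sum()
        loss.backward()
        opt.step()
    return (G(z) + alpha * D(w)).detach()
```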
1 code implementation • ICLR 2020 • Reinhard Heckel, Mahdi Soltanolkotabi
A surprising experiment that highlights this architectural bias towards natural images is that one can remove noise and corruptions from a natural image without using any training data, by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the corrupted image.
1 code implementation • 20 Oct 2019 • Fatih Furkan Yilmaz, Reinhard Heckel
Image classification problems are typically addressed by first collecting examples with candidate labels, second cleaning the candidate labels manually, and third training a deep neural network on the clean examples.
no code implementations • 25 Sep 2019 • Fatih Furkan Yilmaz, Reinhard Heckel
Classification problems today are typically solved by first collecting examples along with candidate labels, second obtaining clean labels from workers, and third training a large, overparameterized deep neural network on the clean examples.
no code implementations • 25 Sep 2019 • Max Daniels, Reinhard Heckel, Paul Hand
In this paper, we demonstrate a method for removing the representation error of a GAN when used as a prior in inverse problems by modeling images as the linear combination of a GAN with a Deep Decoder.
no code implementations • 22 Jul 2019 • Zhenwei Dai, Reinhard Heckel
This effect prevails in deep single-channel linear convolutional networks, and we show that without channel normalization, gradient descent takes at least exponentially many steps to come close to an optimum.
no code implementations • 6 Jul 2019 • Reinhard Heckel
We demonstrate that, with both fixed and parameterized convolutional filters, such networks can represent images with few coefficients.
no code implementations • ICLR 2019 • Reinhard Heckel, Wen Huang, Paul Hand, Vladislav Voroninski
Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy image.
1 code implementation • 25 Feb 2019 • Daniel LeJeune, Richard G. Baraniuk, Reinhard Heckel
Algorithms often carry out equally many computations for "easy" and "hard" problem instances.
4 code implementations • ICLR 2019 • Reinhard Heckel, Paul Hand
In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters.
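A minimal deep-decoder-style generator in PyTorch, following the description of 1x1 convolutions, upsampling, ReLUs, and channel normalization; details such as the choice of normalization layer are approximations of the paper's architecture:

```python
import torch.nn as nn

def deep_decoder(k=64, layers=5, out_ch=3):
    """Each stage is a 1x1 convolution, bilinear upsampling, ReLU, and
    channel-wise normalization; no learned transposed convolutions, and
    very few weight parameters overall."""
    blocks = []
    for _ in range(layers):
        blocks += [
            nn.Conv2d(k, k, kernel_size=1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.ReLU(),
            nn.BatchNorm2d(k),     # stands in for channel normalization
        ]
    blocks += [nn.Conv2d(k, out_ch, kernel_size=1), nn.Sigmoid()]
    return nn.Sequential(*blocks)
```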
1 code implementation • 26 May 2018 • Christopher A. Metzler, Ali Mousavi, Reinhard Heckel, Richard G. Baraniuk
We show that, in the context of image recovery, SURE and its generalizations can be used to train convolutional neural networks (CNNs) for a range of image denoising and recovery problems without any ground truth data.
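The standard Monte Carlo SURE estimator that makes such ground-truth-free training possible for Gaussian noise; the denoiser `f` and noise level `sigma` are assumed given:

```python
import torch

def mc_sure_loss(f, y, sigma, eps=1e-3):
    """Monte Carlo SURE for Gaussian noise of known std sigma: estimates the
    mean-squared error to the unseen clean image using only the noisy y.
    The divergence term is estimated with a random probe b."""
    n = y.numel()
    fy = f(y)
    b = torch.randn_like(y)
    div = (b * (f(y + eps * b) - fy)).sum() / eps   # MC divergence estimate
    return ((fy - y) ** 2).sum() / n - sigma ** 2 + (2 * sigma ** 2 / n) * div
```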
no code implementations • 4 Jan 2018 • Reinhard Heckel, Max Simchowitz, Kannan Ramchandran, Martin J. Wainwright
Accordingly, we study the problem of finding approximate rankings from pairwise comparisons.
no code implementations • 5 Oct 2017 • Nick Antipa, Grace Kuo, Reinhard Heckel, Ben Mildenhall, Emrah Bostan, Ren Ng, Laura Waller
We demonstrate a compact and easy-to-build computational camera for single-shot 3D imaging.
1 code implementation • ICML 2017 • Reinhard Heckel, Kannan Ramchandran
We consider the online one-class collaborative filtering (CF) problem that consists of recommending items to users over time in an online fashion based on positive ratings only.
no code implementations • 28 Jun 2016 • Reinhard Heckel, Nihar B. Shah, Kannan Ramchandran, Martin J. Wainwright
We first analyze a sequential ranking algorithm that counts the number of comparisons won and uses these counts to decide whether to stop or to compare another pair of items, chosen based on confidence intervals computed from the data collected up to that point.
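A toy version of such a count-and-confidence-interval loop, with Hoeffding-style intervals and a user-supplied comparison oracle; the sampling and stopping rules here are simplified relative to the paper:

```python
import numpy as np

def rank_until_confident(compare, n_items, delta=0.05, max_rounds=10000):
    """Repeatedly compare random pairs, track empirical win fractions with
    Hoeffding confidence intervals, and stop once all intervals are
    disjoint (i.e., a clear ranking has emerged)."""
    wins = np.zeros(n_items)
    counts = np.zeros(n_items)
    for t in range(1, max_rounds + 1):
        i, j = np.random.choice(n_items, size=2, replace=False)
        winner = compare(i, j)          # noisy oracle returning i or j
        wins[winner] += 1
        counts[[i, j]] += 1
        p = wins / np.maximum(counts, 1)
        rad = np.sqrt(np.log(2 * n_items * t ** 2 / delta)
                      / (2 * np.maximum(counts, 1)))
        order = np.argsort(-p)
        if all(p[a] - rad[a] > p[b] + rad[b]
               for a, b in zip(order[:-1], order[1:])):
            return order
    return np.argsort(-p)
```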
1 code implementation • 7 Apr 2016 • Reinhard Heckel, Michail Vlachos, Thomas Parnell, Celestine Dünner
We consider the problem of generating interpretable recommendations by identifying overlapping co-clusters of clients and products, based only on positive or implicit feedback.
no code implementations • 25 Jul 2015 • Reinhard Heckel, Michael Tschannen, Helmut Bölcskei
Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, whose number, orientations, and dimensions are all unknown.
no code implementations • 27 Apr 2014 • Reinhard Heckel, Michael Tschannen, Helmut Bölcskei
Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, assumed unknown.
no code implementations • 13 Mar 2014 • Reinhard Heckel, Eirikur Agustsson, Helmut Bölcskei
Subspace clustering refers to the problem of clustering high-dimensional data points into a union of low-dimensional linear subspaces, where the number of subspaces, their dimensions and orientations are all unknown.
no code implementations • 13 Nov 2013 • Alexander Jung, Reinhard Heckel, Helmut Bölcskei, Franz Hlawatsch
We propose a method for inferring the conditional independence graph (CIG) of a high-dimensional discrete-time Gaussian vector random process from finite-length observations.
1 code implementation • 18 Jul 2013 • Reinhard Heckel, Helmut Bölcskei
We propose a simple low-complexity subspace clustering algorithm, which applies spectral clustering to an adjacency matrix obtained by thresholding the correlations between data points.
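A compact sketch of this thresholding-plus-spectral-clustering pipeline; the similarity measure and neighbor rule below are simplified relative to the paper's algorithm:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def tsc(X, q=5, n_clusters=3):
    """Normalize the data points (columns of X), keep only the q largest
    absolute correlations per point as graph edges, then run spectral
    clustering on the resulting adjacency matrix."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    C = np.abs(Xn.T @ Xn)
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    for i in range(C.shape[0]):
        nbrs = np.argsort(C[i])[-q:]        # q strongest neighbors
        A[i, nbrs] = C[i, nbrs]
    A = np.maximum(A, A.T)                  # symmetrize the adjacency
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)
```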
no code implementations • 15 May 2013 • Reinhard Heckel, Helmut Bölcskei
We consider the problem of clustering noisy high-dimensional data points into a union of low-dimensional subspaces and a set of outliers.