no code implementations • 22 Oct 2023 • Zhenghan Fang, Sam Buchanan, Jeremias Sulam
Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed.
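For orientation, the canonical example is the proximal operator of the ℓ1 norm, which has the closed-form soft-thresholding solution. A minimal sketch (standard material, not code from the paper):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1, i.e. the minimizer of
    0.5 * ||x - v||^2 + lam * ||x||_1, known as soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Entries with magnitude below lam are set exactly to zero;
# the rest are shrunk toward zero by lam.
print(prox_l1(np.array([3.0, -0.5, 0.2, -2.0]), lam=1.0))
```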
no code implementations • 1 Jul 2023 • Ramchandran Muthukumar, Jeremias Sulam
In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations.
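As a rough illustration of the quantity being exploited (a hypothetical random network, not the paper's construction), one can measure how many hidden ReLU units are inactive on a given input:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 64))  # hypothetical hidden-layer weights

def inactive_fraction(x):
    """Fraction of hidden ReLU activations that are exactly zero for x."""
    return np.mean(np.maximum(W @ x, 0.0) == 0.0)

print(f"{inactive_fraction(rng.standard_normal(64)):.2f}")
```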
1 code implementation • 8 May 2023 • Ambar Pal, Jeremias Sulam
This method relies on taking a majority vote of any base classifier over multiple noise-perturbed inputs to obtain a smoothed classifier, and it remains the tool of choice to certify deep and complex neural network models.
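A minimal sketch of this smoothing scheme (hypothetical base classifier; the certification of a robustness radius is omitted):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of the base classifier over Gaussian-perturbed copies
    of the input x, i.e. a Monte Carlo estimate of the smoothed classifier."""
    rng = np.random.default_rng(seed)
    noisy = x + sigma * rng.standard_normal((n_samples, *x.shape))
    votes = np.array([base_classifier(x_i) for x_i in noisy])
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```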
1 code implementation • 7 Feb 2023 • Jacopo Teneggi, Matthew Tivnan, J. Webster Stayman, Jeremias Sulam
Score-based generative models, informally referred to as diffusion models, continue to grow in popularity across several important domains and tasks.
1 code implementation • 29 Nov 2022 • Jacopo Teneggi, Paul H. Yi, Jeremias Sulam
We find that strong supervision (i.e., learning with local image-level annotations) and weak supervision (i.e., learning with only global examination-level labels) achieve comparable performance in examination-level hemorrhage detection (the task of selecting the images in an examination that show signs of hemorrhage) as well as in image-level hemorrhage detection (highlighting those signs within the selected images).
no code implementations • 9 Sep 2022 • Zhenghan Fang, Kuo-Wei Lai, Peter van Zijl, Xu Li, Jeremias Sulam
Experimental results using both simulated and in vivo human data demonstrate substantial improvements over state-of-the-art algorithms in terms of the reconstructed tensor image, principal eigenvector maps, and tractography results, while allowing for tensor reconstruction with MR phase measured at far fewer than six different orientations.
1 code implementation • 14 Jul 2022 • Jacopo Teneggi, Beepul Bharti, Yaniv Romano, Jeremias Sulam
As a result, we further our understanding of Shapley-based explanation methods from a novel perspective and characterize the conditions under which one can make statistically valid claims about feature importance via the Shapley value.
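For context, a plain Monte Carlo sketch of a Shapley-value feature importance estimate (hypothetical model and baseline; the paper's hypothesis-testing machinery is not reproduced here):

```python
import numpy as np

def shapley_estimate(model, x, baseline, j, n_perm=500, seed=0):
    """Monte Carlo Shapley value of feature j: the average change in the
    model output when j is added after a random subset of the features."""
    rng = np.random.default_rng(seed)
    d, total = x.shape[0], 0.0
    for _ in range(n_perm):
        perm = rng.permutation(d)
        before_j = perm[:np.flatnonzero(perm == j)[0]]  # features preceding j
        z = baseline.copy()
        z[before_j] = x[before_j]       # reveal the features drawn before j
        without_j = model(z)
        z[j] = x[j]                     # now reveal feature j as well
        total += model(z) - without_j
    return total / n_perm
```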
no code implementations • 26 Feb 2022 • Ramchandran Muthukumar, Jeremias Sulam
This work studies the adversarial robustness of parametric functions composed of a linear predictor and a non-linear representation map.
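In symbols, the function class in question (standard notation, stated for orientation) is

```latex
f(x) = \langle w, \phi(x) \rangle ,
\qquad w \in \mathbb{R}^d , \quad \phi : \mathbb{R}^n \to \mathbb{R}^d ,
```

with robustness assessed against input perturbations $x + \delta$, $\|\delta\| \le \epsilon$.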
no code implementations • 8 Feb 2022 • Joshua Agterberg, Jeremias Sulam
Sparse Principal Component Analysis (PCA) is a prevalent tool across a plethora of subfields of applied statistics.
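For reference, the cardinality-constrained formulation commonly studied (standard notation, not specific to this paper) is

```latex
\max_{v \in \mathbb{R}^p} \; v^{\top} \widehat{\Sigma} \, v
\quad \text{subject to} \quad \|v\|_2 = 1 , \;\; \|v\|_0 \le k ,
```

where $\widehat{\Sigma}$ is the sample covariance matrix and $k$ bounds the number of nonzero loadings.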
no code implementations • 14 Dec 2021 • Jeffrey A. Ruffolo, Jeffrey J. Gray, Jeremias Sulam
Understanding the composition of an individual's immune repertoire can provide insights into this process and reveal potential therapeutic antibodies.
1 code implementation • 22 Sep 2021 • Zhenzhen Wang, Carla Saoud, Sintawat Wangsiricharoen, Aaron W. James, Aleksander S. Popel, Jeremias Sulam
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and the development of machine learning algorithms.
1 code implementation • NeurIPS 2021 • Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu
In contrast to existing landscape analyses for deep neural networks, which are often disconnected from practice, our analysis of the simplified model not only explains what kind of features are learned in the last layer, but also shows why they can be efficiently optimized in the simplified settings, matching the empirical observations in practical deep network architectures.
1 code implementation • 13 Apr 2021 • Jacopo Teneggi, Alexandre Luster, Jeremias Sulam
As modern complex neural networks keep breaking records and solving harder problems, their predictions also become less and less intelligible.
1 code implementation • NeurIPS 2020 • Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau
In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems.
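A minimal sketch of the unrolling idea (using the standard synthesis reparametrization that turns 1D TV into an ℓ1 problem; in a learned solver the per-layer step sizes and thresholds below would be trained rather than fixed):

```python
import numpy as np

def prox_l1(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unrolled_tv(y, lam, n_layers=50):
    """Unrolled proximal gradient for 1D TV denoising. With x = L @ z and
    L the cumulative-sum operator, z holds the jumps of x and the TV
    penalty becomes an l1 norm on z[1:]."""
    n = y.shape[0]
    L = np.tril(np.ones((n, n)))
    step = 1.0 / np.linalg.norm(L, 2) ** 2   # would be learned per layer
    z = np.zeros(n)
    for _ in range(n_layers):
        z = z - step * (L.T @ (L @ z - y))   # gradient step on 0.5||Lz - y||^2
        z[1:] = prox_l1(z[1:], step * lam)   # soft-threshold the jumps only
    return L @ z
```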
1 code implementation • NeurIPS 2020 • Jeremias Sulam, Ramchandran Muthukumar, Raman Arora
Several recent results provide theoretical insights into the phenomena of adversarial examples.
1 code implementation • 11 Aug 2020 • Kuo-Wei Lai, Manisha Aggarwal, Peter van Zijl, Xu Li, Jeremias Sulam
More importantly, this framework is believed to be the first deep learning QSM approach that can naturally handle an arbitrary number of phase input measurements without the need for any ad-hoc rotation or re-training.
no code implementations • 16 Jul 2020 • Wenhao Gao, Sai Pooja Mahajan, Jeremias Sulam, Jeffrey J. Gray
Deep learning is catalyzing a scientific revolution fueled by big data, accessible toolkits, and powerful computational resources, impacting many fields including protein structural modeling.
no code implementations • 11 Jun 2020 • Jeremias Sulam, Chong You, Zhihui Zhu
We thoroughly demonstrate this observation in practice and provide an analysis of this phenomenon by tying recovery measures to generalization bounds.
1 code implementation • NeurIPS 2020 • Guilherme França, Jeremias Sulam, Daniel P. Robinson, René Vidal
Arguably, the two most popular accelerated or momentum-based optimization methods in machine learning are Nesterov's accelerated gradient and Polyak's heavy ball, both corresponding to different discretizations of a particular second-order differential equation with friction.
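Concretely, the objects in question are (standard forms, stated here for orientation):

```latex
% Second-order ODE with friction:
\ddot{X}(t) + \gamma \, \dot{X}(t) + \nabla f\big(X(t)\big) = 0

% Heavy ball: gradient evaluated at the current iterate
x_{k+1} = x_k + \beta \, (x_k - x_{k-1}) - \alpha \nabla f(x_k)

% Nesterov: gradient evaluated at the extrapolated point
y_k = x_k + \beta \, (x_k - x_{k-1}), \qquad x_{k+1} = y_k - \alpha \nabla f(y_k)
```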
2 code implementations • 1 Nov 2018 • Ev Zisselman, Jeremias Sulam, Michael Elad
The Convolutional Sparse Coding (CSC) model has recently gained considerable traction in the signal and image processing communities.
no code implementations • 26 Jun 2018 • Dror Simon, Jeremias Sulam, Yaniv Romano, Yue M. Lu, Michael Elad
The proposed method adds controlled noise to the input and estimates a sparse representation from the perturbed signal.
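A minimal sketch of that idea (hypothetical dictionary, with a single soft-thresholding step standing in for the paper's pursuit algorithm):

```python
import numpy as np

def noise_averaged_pursuit(y, D, lam=0.1, sigma=0.05, n_draws=50, seed=0):
    """Add controlled noise to the input, estimate a sparse code from each
    perturbed copy, and average the resulting estimates."""
    rng = np.random.default_rng(seed)
    codes = []
    for _ in range(n_draws):
        y_noisy = y + sigma * rng.standard_normal(y.shape)
        z = D.T @ y_noisy                                  # correlations
        codes.append(np.sign(z) * np.maximum(np.abs(z) - lam, 0.0))
    return np.mean(codes, axis=0)
```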
2 code implementations • 2 Jun 2018 • Jeremias Sulam, Aviad Aberdam, Amir Beck, Michael Elad
Parsimonious representations are ubiquitous in modeling and processing information.
no code implementations • 29 May 2018 • Yaniv Romano, Aviad Aberdam, Jeremias Sulam, Michael Elad
Despite their impressive performance, deep convolutional neural networks (CNNs) have been shown to be sensitive to small adversarial perturbations.
no code implementations • 25 Apr 2018 • Aviad Aberdam, Jeremias Sulam, Michael Elad
The recently proposed multi-layer sparse model has revealed insightful connections between sparse representations and convolutional neural networks (CNNs).
no code implementations • 29 Aug 2017 • Jeremias Sulam, Vardan Papyan, Yaniv Romano, Michael Elad
We show that the training of the filters is essential to allow for non-trivial signals in the model, and we derive an online algorithm to learn the dictionaries from real data, effectively resulting in cascaded sparse convolutional layers.
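A hedged sketch of one online update of this flavor (a generic single-layer step with a cheap thresholding pursuit; the paper's multi-layer convolutional algorithm is more involved):

```python
import numpy as np

def online_dictionary_step(D, y, lam=0.1, lr=0.01):
    """Sparse-code one sample, then take a stochastic gradient step on the
    reconstruction error with respect to the dictionary D."""
    z = D.T @ y
    z = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)   # thresholding pursuit
    residual = D @ z - y
    D = D - lr * np.outer(residual, z)                  # grad of 0.5||Dz - y||^2
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)    # renormalize atoms
    return D, z
```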
1 code implementation • ICCV 2017 • Vardan Papyan, Yaniv Romano, Jeremias Sulam, Michael Elad
Convolutional Sparse Coding (CSC) is an increasingly popular model in the signal and image processing communities, tackling some of the limitations of traditional patch-based sparse representations.
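For reference, the CSC model writes a global signal as a sum of convolutions (standard notation):

```latex
\mathbf{x} \;=\; \sum_{m=1}^{M} \mathbf{d}_m * \boldsymbol{\gamma}_m ,
```

where the $\mathbf{d}_m$ are small filters applied across the entire signal and the $\boldsymbol{\gamma}_m$ are sparse feature maps, avoiding the independent patch-by-patch processing of traditional approaches.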
no code implementations • 31 Jan 2016 • Jeremias Sulam, Boaz Ophir, Michael Zibulevsky, Michael Elad
Sparse representations have proven to be a very powerful model for real-world signals and have enabled the development of applications with notable performance.