no code implementations • 4 Mar 2025 • Beepul Bharti, Mary Versa Clemens-Sewall, Paul H. Yi, Jeremias Sulam
Additionally, we show that adjusting models to satisfy multiaccuracy and multicalibration across proxy-sensitive attributes can significantly mitigate these violations for the true, but unknown, sensitive groups.
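The multiaccuracy condition referenced above can be illustrated with a minimal audit: for each group (here, a proxy-sensitive attribute), check that the mean residual between labels and predicted probabilities is bounded by a tolerance α. The function name, group encoding, and default α below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def multiaccuracy_violations(y_true, y_prob, groups, alpha=0.05):
    """Per-group absolute mean residual |E[y - p | group]|, flagging
    groups whose gap exceeds alpha. `groups` holds one integer label per
    example (e.g. a proxy-sensitive attribute)."""
    residual = y_true - y_prob
    out = {}
    for g in np.unique(groups):
        gap = abs(residual[groups == g].mean())
        out[int(g)] = (gap, gap > alpha)
    return out
```

A model can be well calibrated on average yet systematically over- or under-predict within a group; this per-group residual check is what separates multiaccuracy from ordinary accuracy.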
no code implementations • 28 Feb 2025 • Jacopo Teneggi, J. Webster Stayman, Jeremias Sulam
Uncertainty quantification is necessary for developers, physicians, and regulatory agencies to build trust in machine learning predictors and improve patient care.
no code implementations • 30 Jan 2025 • Ramchandran Muthukumar, Ambar Pal, Jeremias Sulam, Rene Vidal
The proposed threat model measures the threat of a perturbation via its alignment with "unsafe directions", defined as directions in the input space along which a perturbation of sufficient magnitude changes the ground truth class label.
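One simple way to read the alignment-based threat measure above is as the largest projection of a perturbation onto any member of a given set of unit-norm unsafe directions. The sketch below is a hedged illustration of that reading only; the direction set and function name are hypothetical, not the paper's definition.

```python
import numpy as np

def threat(delta, unsafe_dirs):
    """Largest projection of perturbation `delta` onto any unsafe direction.
    unsafe_dirs: (k, d) array whose rows are unit-norm directions."""
    return float(np.max(unsafe_dirs @ delta))

# Hypothetical unsafe directions: the two coordinate axes in 2D.
U = np.array([[1.0, 0.0], [0.0, 1.0]])
print(threat(np.array([0.3, -0.1]), U))  # 0.3
```

Under this measure, a large-norm perturbation orthogonal to every unsafe direction is benign, while a small perturbation well aligned with one is threatening.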
no code implementations • 30 Sep 2024 • Beepul Bharti, Paul Yi, Jeremias Sulam
We demonstrate how these two types of explanations, albeit intuitive and simple, can fall short in providing a complete picture of which features a model finds important.
no code implementations • 23 Jun 2024 • Nelson Goldenstein, Jeremias Sulam, Yaniv Romano
Leveraging the transform modeling interpretation, we propose an optimization problem that leads to a predictive model invariant to the noise level at test time.
1 code implementation • 29 May 2024 • Jacopo Teneggi, Jeremias Sulam
Recent works have extended notions of feature importance to semantic concepts that are inherently interpretable to the users interacting with a black-box predictive model.
no code implementations • 23 May 2024 • Ambar Pal, René Vidal, Jeremias Sulam
Recent work in adversarial robustness suggests that natural data distributions are localized, i.e., they place high probability in small-volume regions of the input space, and that this property can be exploited to design classifiers with improved robustness guarantees for $\ell_2$-bounded perturbations.
1 code implementation • 22 Oct 2023 • Zhenghan Fang, Sam Buchanan, Jeremias Sulam
Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed.
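To make the role of proximal operators concrete: the prox of a function g maps a point to the minimizer of g plus a quadratic penalty, and for the $\ell_1$ norm it has the closed form of soft-thresholding, which drives proximal gradient methods for sparse inverse problems. The snippet below is a generic textbook sketch, not the learned proximal networks of the paper itself.

```python
import numpy as np

def soft_threshold(v, t):
    """prox of t*||.||_1: shrink each entry toward zero by t, clipping at 0."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_step(x, A, y, lam, step):
    """One proximal gradient step for 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient step on the smooth data term, then the l1 prox."""
    grad = A.T @ (A @ x - y)
    return soft_threshold(x - step * grad, step * lam)
```

Iterating `prox_gradient_step` yields ISTA; replacing `soft_threshold` with a learned operator is the usual route to data-driven regularization for ill-posed problems.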
no code implementations • NeurIPS 2023 • Ambar Pal, Jeremias Sulam, René Vidal
The susceptibility of modern machine learning classifiers to adversarial examples has motivated theoretical results suggesting that these might be unavoidable.
no code implementations • 1 Jul 2023 • Ramchandran Muthukumar, Jeremias Sulam
In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations.
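The quantity the analysis above leverages, the sparsity of hidden-layer ReLU activations, is easy to measure empirically: count the fraction of zeroed units per layer on a given input. This is a minimal measurement sketch under an assumed plain feed-forward architecture, not the paper's generalization bound.

```python
import numpy as np

def relu_activation_sparsity(x, weights):
    """Fraction of exactly-zero activations in each hidden layer of a
    bias-free feed-forward ReLU network given by a list of weight matrices."""
    sparsities = []
    h = x
    for W in weights:
        h = np.maximum(W @ h, 0.0)  # ReLU zeroes all negative pre-activations
        sparsities.append(float(np.mean(h == 0.0)))
    return sparsities
```

High per-layer sparsity means each input activates only a small subnetwork, which is the structural property such data-dependent generalization bounds exploit.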
1 code implementation • 8 May 2023 • Ambar Pal, Jeremias Sulam
This method relies on taking a majority vote of any base classifier over multiple noise-perturbed inputs to obtain a smoothed classifier, and it remains the tool of choice to certify deep and complex neural network models.
1 code implementation • 7 Feb 2023 • Jacopo Teneggi, Matthew Tivnan, J. Webster Stayman, Jeremias Sulam
Score-based generative models, informally referred to as diffusion models, continue to grow in popularity across several important domains and tasks.
1 code implementation • 29 Nov 2022 • Jacopo Teneggi, Paul H. Yi, Jeremias Sulam
We find that strong supervision (i.e., learning with local image-level annotations) and weak supervision (i.e., learning with only global examination-level labels) achieve comparable performance in examination-level hemorrhage detection (the task of selecting the images in an examination that show signs of hemorrhage) as well as in image-level hemorrhage detection (highlighting those signs within the selected images).
no code implementations • 9 Sep 2022 • Zhenghan Fang, Kuo-Wei Lai, Peter van Zijl, Xu Li, Jeremias Sulam
Experimental results using both simulated and in vivo human data demonstrate substantial improvement over state-of-the-art algorithms in terms of the reconstructed tensor image, principal eigenvector maps, and tractography results, while allowing for tensor reconstruction with MR phase measured at far fewer than six orientations.
1 code implementation • 14 Jul 2022 • Jacopo Teneggi, Beepul Bharti, Yaniv Romano, Jeremias Sulam
As a result, we further our understanding of Shapley-based explanation methods from a novel perspective and characterize the conditions under which one can make statistically valid claims about feature importance via the Shapley value.
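Shapley values, the object of the statistical analysis above, are usually estimated by Monte Carlo: average each feature's marginal contribution over random orderings of the features. The sampling estimator below is the standard permutation scheme, shown on a toy additive game, and is not the hypothesis-testing procedure of the paper.

```python
import numpy as np

def shapley_sampling(value_fn, n_features, n_perms=200, seed=0):
    """Monte Carlo Shapley values: average the marginal contribution of
    each feature over random feature orderings. `value_fn` maps a set of
    feature indices to a scalar payoff."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_features)
    for _ in range(n_perms):
        perm = rng.permutation(n_features)
        included = set()
        prev = value_fn(included)
        for j in perm:
            included.add(j)
            cur = value_fn(included)
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    return phi / n_perms

# Additive toy game: coalition value is the sum of fixed per-feature weights,
# so the Shapley values recover the weights exactly.
w = np.array([1.0, 2.0, 3.0])
game = lambda S: sum(w[j] for j in S)
print(shapley_sampling(game, 3))
```

Because the estimator is a sample mean over permutations, its Monte Carlo error is what any statistical claim about feature importance must account for.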
no code implementations • 26 Feb 2022 • Ramchandran Muthukumar, Jeremias Sulam
This work studies the adversarial robustness of parametric functions composed of a linear predictor and a non-linear representation map.
no code implementations • 8 Feb 2022 • Joshua Agterberg, Jeremias Sulam
Sparse Principal Component Analysis (PCA) is a prevalent tool across a plethora of subfields of applied statistics.
1 code implementation • 14 Dec 2021 • Jeffrey A. Ruffolo, Jeffrey J. Gray, Jeremias Sulam
Understanding the composition of an individual's immune repertoire can provide insights into this process and reveal potential therapeutic antibodies.
1 code implementation • 22 Sep 2021 • Zhenzhen Wang, Carla Saoud, Sintawat Wangsiricharoen, Aaron W. James, Aleksander S. Popel, Jeremias Sulam
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithm development.
1 code implementation • NeurIPS 2021 • Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu
In contrast to existing landscape analyses for deep neural networks, which are often disconnected from practice, our analysis of the simplified model not only explains what kind of features are learned in the last layer but also shows why they can be efficiently optimized in this simplified setting, matching empirical observations in practical deep network architectures.
1 code implementation • 13 Apr 2021 • Jacopo Teneggi, Alexandre Luster, Jeremias Sulam
As modern complex neural networks keep breaking records and solving harder problems, their predictions also become less and less intelligible.
1 code implementation • NeurIPS 2020 • Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau
In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems.
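The unfolding idea above, fixing a small number of proximal gradient iterations and treating their step sizes and thresholds as learnable parameters, can be sketched as follows. Since the 1D TV prox has no elementwise closed form, this sketch substitutes an $\ell_1$ penalty (whose prox is soft-thresholding) as a simplified stand-in; the parameters are plain floats here rather than trained weights.

```python
import numpy as np

def unrolled_ista(y, A, steps, thresholds):
    """K unrolled proximal-gradient (ISTA) iterations for an
    0.5*||Ax - y||^2 + ||x||_1-type problem. `steps` and `thresholds`
    are the per-iteration parameters that unfolding would learn."""
    x = np.zeros(A.shape[1])
    for step, thr in zip(steps, thresholds):
        x = x - step * (A.T @ (A @ x - y))                  # gradient step on data term
        x = np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)   # soft-threshold prox
    return x
```

Training the `(steps, thresholds)` pairs end-to-end on example problems is what lets a few unrolled iterations match many iterations of the hand-tuned solver.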
1 code implementation • NeurIPS 2020 • Jeremias Sulam, Ramchandran Muthukumar, Raman Arora
Several recent results provide theoretical insights into the phenomena of adversarial examples.
1 code implementation • 11 Aug 2020 • Kuo-Wei Lai, Manisha Aggarwal, Peter van Zijl, Xu Li, Jeremias Sulam
More importantly, this framework is believed to be the first deep learning QSM approach that can naturally handle an arbitrary number of phase input measurements without the need for any ad-hoc rotation or re-training.
no code implementations • 16 Jul 2020 • Wenhao Gao, Sai Pooja Mahajan, Jeremias Sulam, Jeffrey J. Gray
Deep learning is catalyzing a scientific revolution fueled by big data, accessible toolkits, and powerful computational resources, impacting many fields including protein structural modeling.
no code implementations • 11 Jun 2020 • Jeremias Sulam, Chong You, Zhihui Zhu
We thoroughly demonstrate this observation in practice and provide an analysis of this phenomenon by tying recovery measures to generalization bounds.
1 code implementation • NeurIPS 2020 • Guilherme França, Jeremias Sulam, Daniel P. Robinson, René Vidal
Arguably, the two most popular accelerated or momentum-based optimization methods in machine learning are Nesterov's accelerated gradient and Polyak's heavy ball, both corresponding to different discretizations of a particular second-order differential equation with friction.
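The two recursions differ only in where the gradient is evaluated: heavy ball at the current iterate, Nesterov at a look-ahead point. A minimal sketch with fixed (hand-picked, not tuned) momentum and step-size constants:

```python
import numpy as np

def heavy_ball(grad, x0, alpha=0.1, beta=0.9, iters=200):
    """Polyak's heavy ball: gradient at the current iterate plus momentum."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

def nesterov(grad, x0, alpha=0.1, beta=0.9, iters=200):
    """Nesterov's accelerated gradient: gradient at the look-ahead point."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)          # extrapolated look-ahead point
        x_prev, x = x, y - alpha * grad(y)   # gradient evaluated at y, not x
    return x
```

In the continuous-time limit both updates discretize the same friction ODE, which is why their trajectories look alike even though their discrete convergence guarantees differ.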
2 code implementations • 1 Nov 2018 • Ev Zisselman, Jeremias Sulam, Michael Elad
The Convolutional Sparse Coding (CSC) model has recently gained considerable traction in the signal and image processing communities.
no code implementations • 26 Jun 2018 • Dror Simon, Jeremias Sulam, Yaniv Romano, Yue M. Lu, Michael Elad
The proposed method adds controlled noise to the input and estimates a sparse representation from the perturbed signal.
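The noise-injection idea above can be sketched as follows: draw several perturbed copies of the signal, sparse-code each (here with plain ISTA), and average the estimates. The solver, noise level, and function names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def ista(y, A, lam=0.1, step=None, iters=100):
    """Plain ISTA for 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x

def noise_averaged_estimate(y, A, sigma=0.05, n_draws=20, seed=0):
    """Average sparse estimates over controlled-noise-perturbed copies of y."""
    rng = np.random.default_rng(seed)
    draws = [ista(y + rng.normal(scale=sigma, size=y.shape), A)
             for _ in range(n_draws)]
    return np.mean(draws, axis=0)
```

Averaging over noise realizations smooths the pointwise (MAP-style) sparse estimate toward a mean-style estimate, which is the intuition behind injecting noise on purpose.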
2 code implementations • 2 Jun 2018 • Jeremias Sulam, Aviad Aberdam, Amir Beck, Michael Elad
Parsimonious representations are ubiquitous in modeling and processing information.
no code implementations • 29 May 2018 • Yaniv Romano, Aviad Aberdam, Jeremias Sulam, Michael Elad
Despite their impressive performance, deep convolutional neural networks (CNNs) have been shown to be sensitive to small adversarial perturbations.
no code implementations • 25 Apr 2018 • Aviad Aberdam, Jeremias Sulam, Michael Elad
The recently proposed multi-layer sparse model has revealed insightful connections between sparse representations and convolutional neural networks (CNNs).
no code implementations • 29 Aug 2017 • Jeremias Sulam, Vardan Papyan, Yaniv Romano, Michael Elad
We show that the training of the filters is essential to allow for non-trivial signals in the model, and we derive an online algorithm to learn the dictionaries from real data, effectively resulting in cascaded sparse convolutional layers.
1 code implementation • ICCV 2017 • Vardan Papyan, Yaniv Romano, Jeremias Sulam, Michael Elad
Convolutional Sparse Coding (CSC) is an increasingly popular model in the signal and image processing communities, tackling some of the limitations of traditional patch-based sparse representations.
no code implementations • 31 Jan 2016 • Jeremias Sulam, Boaz Ophir, Michael Zibulevsky, Michael Elad
Sparse representations have proven to be a very powerful model for real-world signals and have enabled the development of applications with notable performance.