no code implementations • 26 Feb 2022 • Ramchandran Muthukumar, Jeremias Sulam
This work studies the adversarial robustness of parametric functions composed of a linear predictor and a non-linear representation map.
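As a rough illustration of this setting (the notation below is ours, not taken from the paper), such predictors compose a linear map with a representation map:

```latex
% Illustrative notation, not taken from the paper:
\[
  f(x) = \langle w, \phi(x) \rangle ,
  \qquad \text{studied under perturbations } \|\delta\| \le \epsilon : \quad f(x + \delta),
\]
% i.e. a linear predictor w acting on a non-linear representation \phi.
```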
no code implementations • 8 Feb 2022 • Joshua Agterberg, Jeremias Sulam
Sparse Principal Component Analysis (PCA) is a prevalent tool across a plethora of subfields of applied statistics.
no code implementations • 19 Jan 2022 • Joshua T. Vogelstein, Timothy Verstynen, Konrad P. Kording, Leyla Isik, John W. Krakauer, Ralph Etienne-Cummings, Elizabeth L. Ogburn, Carey E. Priebe, Randal Burns, Kwame Kutten, James J. Knierim, James B. Potash, Thomas Hartung, Lena Smirnova, Paul Worley, Alena Savonenko, Ian Phillips, Michael I. Miller, Rene Vidal, Jeremias Sulam, Adam Charles, Noah J. Cowan, Maxim Bichuch, Archana Venkataraman, Chen Li, Nitish Thakor, Justus M Kebschull, Marilyn Albert, Jinchong Xu, Marshall Hussain Shuler, Brian Caffo, Tilak Ratnanather, Ali Geisa, Seung-Eon Roh, Eva Yezerets, Meghana Madhyastha, Javier J. How, Tyler M. Tomita, Jayanta Dey, Ningyuan Huang, Jong M. Shin, Kaleab Alemayehu Kinfu, Pratik Chaudhari, Ben Baker, Anna Schapiro, Dinesh Jayaraman, Eric Eaton, Michael Platt, Lyle Ungar, Leila Wehbe, Adam Kepecs, Amy Christensen, Onyema Osuagwu, Bing Brunton, Brett Mensh, Alysson R. Muotri, Gabriel Silva, Francesca Puppo, Florian Engert, Elizabeth Hillman, Julia Brown, Chris White, Weiwei Yang
We call this 'retrospective learning'.
no code implementations • 14 Dec 2021 • Jeffrey A. Ruffolo, Jeffrey J. Gray, Jeremias Sulam
Understanding the composition of an individual's immune repertoire can provide insights into this process and reveal potential therapeutic antibodies.
1 code implementation • 22 Sep 2021 • Zhenzhen Wang, Carla Saoud, Sintawat Wangsiricharoen, Aaron W. James, Aleksander S. Popel, Jeremias Sulam
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and the development of machine learning algorithms.
1 code implementation • NeurIPS 2021 • Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu
In contrast to existing landscape analyses for deep neural networks, which are often disconnected from practice, our analysis of the simplified model not only explains what kinds of features are learned in the last layer, but also shows why they can be efficiently optimized in the simplified settings, matching the empirical observations in practical deep network architectures.
1 code implementation • 13 Apr 2021 • Jacopo Teneggi, Alexandre Luster, Jeremias Sulam
As modern complex neural networks keep breaking records and solving harder problems, their predictions also become less and less intelligible.
1 code implementation • NeurIPS 2020 • Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau
In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems.
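A minimal sketch of the unrolling idea, assuming the synthesis reformulation of the 1D TV problem (so the proximal step reduces to soft-thresholding); the function and parameter names below are illustrative, not the authors' code:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unrolled_tv(x, lam, n_layers=20, step_sizes=None):
    """Sketch of an unrolled proximal gradient solver for
    min_u 0.5*||u - x||^2 + lam * TV(u), written in synthesis form
    u = L z with L the cumulative-sum operator, so the increments
    z[1:] are penalized by an l1 norm and the prox is soft-thresholding."""
    n = x.size
    L = np.tril(np.ones((n, n)))        # u = L @ z (discrete integration)
    lip = np.linalg.norm(L, 2) ** 2     # Lipschitz constant of the gradient
    if step_sizes is None:
        step_sizes = [1.0 / lip] * n_layers   # untrained (classical) choice
    z = np.zeros(n)
    for alpha in step_sizes:            # one loop iteration == one "layer"
        z = z - alpha * (L.T @ (L @ z - x))
        z[1:] = soft_threshold(z[1:], alpha * lam)  # offset z[0] unpenalized
    return L @ z                        # back to the signal domain
```

In a learned ("unfolded") variant, the per-layer step sizes and thresholds would be treated as trainable parameters and fit by backpropagation over a fixed number of layers.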
1 code implementation • NeurIPS 2020 • Jeremias Sulam, Ramchandran Muthukumar, Raman Arora
Several recent results provide theoretical insights into the phenomena of adversarial examples.
1 code implementation • 11 Aug 2020 • Kuo-Wei Lai, Manisha Aggarwal, Peter van Zijl, Xu Li, Jeremias Sulam
More importantly, this framework is believed to be the first deep learning QSM approach that can naturally handle an arbitrary number of phase input measurements without the need for any ad-hoc rotation or re-training.
no code implementations • 16 Jul 2020 • Wenhao Gao, Sai Pooja Mahajan, Jeremias Sulam, Jeffrey J. Gray
Deep learning is catalyzing a scientific revolution fueled by big data, accessible toolkits, and powerful computational resources, impacting many fields including protein structural modeling.
no code implementations • 11 Jun 2020 • Jeremias Sulam, Chong You, Zhihui Zhu
We thoroughly demonstrate this observation in practice and provide an analysis of this phenomenon by tying recovery measures to generalization bounds.
1 code implementation • NeurIPS 2020 • Guilherme França, Jeremias Sulam, Daniel P. Robinson, René Vidal
Arguably, the two most popular accelerated or momentum-based optimization methods in machine learning are Nesterov's accelerated gradient and Polyak's heavy ball, both corresponding to different discretizations of a particular second-order differential equation with friction.
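For reference, the ODE in question and the two discrete-time updates usually associated with it can be written as follows (notation ours, not necessarily the paper's):

```latex
% Second-order ODE with friction (illustrative form):
\[
  \ddot{X}(t) + \gamma\,\dot{X}(t) + \nabla f\big(X(t)\big) = 0
\]
% Heavy ball (Polyak):
\[
  x_{k+1} = x_k + \beta\,(x_k - x_{k-1}) - \alpha\,\nabla f(x_k)
\]
% Nesterov's accelerated gradient:
\[
  y_k = x_k + \beta\,(x_k - x_{k-1}), \qquad x_{k+1} = y_k - \alpha\,\nabla f(y_k)
\]
```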
2 code implementations • 1 Nov 2018 • Ev Zisselman, Jeremias Sulam, Michael Elad
The Convolutional Sparse Coding (CSC) model has recently gained considerable traction in the signal and image processing communities.
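For reference, a common way to write the CSC model (illustrative notation, not necessarily the paper's):

```latex
\[
  x \;=\; \sum_{m=1}^{M} d_m * \gamma_m \;=\; \mathbf{D}\,\boldsymbol{\gamma},
\]
% where the d_m are small local filters, * denotes convolution, the feature
% maps \gamma_m are sparse, and \mathbf{D} is the resulting convolutional
% (banded circulant) dictionary.
```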
no code implementations • 26 Jun 2018 • Dror Simon, Jeremias Sulam, Yaniv Romano, Yue M. Lu, Michael Elad
The proposed method adds controlled noise to the input and estimates a sparse representation from the perturbed signal.
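A minimal sketch of that idea, assuming a lasso-type pursuit solved with plain ISTA and averaging over a few noise draws; the function names and the `sigma` and `n_draws` parameters are illustrative choices, not the paper's:

```python
import numpy as np

def ista(D, x, lam, n_iter=200):
    """Plain ISTA for the lasso:  min_g 0.5*||x - D g||^2 + lam*||g||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    g = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = g - step * D.T @ (D @ g - x)
        g = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return g

def noise_averaged_pursuit(D, x, lam, sigma=0.05, n_draws=10, rng=None):
    """Sketch (not the authors' code): perturb the input with small
    controlled noise, run a sparse pursuit on each perturbed copy,
    and average the resulting estimates."""
    rng = np.random.default_rng() if rng is None else rng
    estimates = []
    for _ in range(n_draws):
        x_noisy = x + sigma * rng.standard_normal(x.shape)
        estimates.append(ista(D, x_noisy, lam))
    return np.mean(estimates, axis=0)
```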
2 code implementations • 2 Jun 2018 • Jeremias Sulam, Aviad Aberdam, Amir Beck, Michael Elad
Parsimonious representations are ubiquitous in modeling and processing information.
no code implementations • 29 May 2018 • Yaniv Romano, Aviad Aberdam, Jeremias Sulam, Michael Elad
Despite their impressive performance, deep convolutional neural networks (CNNs) have been shown to be sensitive to small adversarial perturbations.
no code implementations • 25 Apr 2018 • Aviad Aberdam, Jeremias Sulam, Michael Elad
The recently proposed multi-layer sparse model has revealed insightful connections between sparse representations and convolutional neural networks (CNNs).
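The multi-layer sparse model referred to here is commonly written as follows (illustrative notation):

```latex
\[
  x = D_1 \gamma_1, \quad \gamma_1 = D_2 \gamma_2, \quad \dots, \quad
  \gamma_{L-1} = D_L \gamma_L , \qquad \text{each } \gamma_i \text{ sparse},
\]
% so that x = D_1 D_2 \cdots D_L \gamma_L ; when the D_i are convolutional,
% the pursuit of the \gamma_i mirrors the forward pass of a CNN.
```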
no code implementations • 29 Aug 2017 • Jeremias Sulam, Vardan Papyan, Yaniv Romano, Michael Elad
We show that the training of the filters is essential to allow for non-trivial signals in the model, and we derive an online algorithm to learn the dictionaries from real data, effectively resulting in cascaded sparse convolutional layers.
1 code implementation • ICCV 2017 • Vardan Papyan, Yaniv Romano, Jeremias Sulam, Michael Elad
Convolutional Sparse Coding (CSC) is an increasingly popular model in the signal and image processing communities, tackling some of the limitations of traditional patch-based sparse representations.
no code implementations • 31 Jan 2016 • Jeremias Sulam, Boaz Ophir, Michael Zibulevsky, Michael Elad
Sparse representations have proven to be a very powerful model for real-world signals, and have enabled the development of applications with notable performance.
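The underlying model is the classical sparse (synthesis) representation, which can be sketched as (notation ours):

```latex
\[
  x \;\approx\; D\,\gamma, \qquad \|\gamma\|_0 \ll \dim(\gamma),
\]
% i.e. the signal is (approximately) a combination of only a few atoms
% (columns) of an overcomplete dictionary D.
```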