no code implementations • ICML 2020 • Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William Leiserson, Sage Moore, Nir Shavit, Dan Alistarh
In this paper, we present an in-depth analysis of methods for maximizing the sparsity of the activations in a trained neural network, and show that, when coupled with an efficient sparse-input convolution algorithm, this sparsity can be leveraged for significant performance gains.
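As a minimal illustration of the idea (not the paper's implementation), the sketch below computes a 1-D convolution that visits only nonzero activations. ReLU layers often emit many exact zeros, so skipping them saves multiply-adds; the function names and the dense reference are hypothetical.

```python
# Illustrative sketch: a sparse-input 1-D convolution that skips zero
# activations, compared against a direct dense computation.
def dense_conv1d(x, w):
    """Direct (dense) valid cross-correlation, for reference."""
    n = len(x) - len(w) + 1
    return [sum(x[j + k] * w[k] for k in range(len(w))) for j in range(n)]

def sparse_conv1d(x, w):
    """Scatter contributions only from nonzero inputs."""
    n = len(x) - len(w) + 1
    out = [0.0] * n
    for i, xi in enumerate(x):
        if xi == 0.0:
            continue  # the saving: zero activations do no work
        for k, wk in enumerate(w):
            j = i - k
            if 0 <= j < n:
                out[j] += xi * wk
    return out

x = [0.0, 2.0, 0.0, 0.0, 3.0, 0.0]   # sparse post-ReLU activations
w = [1.0, 0.0, -1.0]
assert sparse_conv1d(x, w) == dense_conv1d(x, w)
```

The work per input scales with the number of nonzeros rather than the input length, which is where activation sparsity pays off.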
no code implementations • 2 Mar 2023 • Yicong Li, Yaron Meirovitch, Aaron T. Kuan, Jasper S. Phelps, Alexandra Pacureanu, Wei-Chung Allen Lee, Nir Shavit, Lu Mi
Comprehensive, synapse-resolution imaging of the brain will be crucial for understanding neuronal computations and function.
no code implementations • 14 Feb 2023 • Tony T. Wang, Igor Zablotchi, Nir Shavit, Jonathan S. Rosenfeld
We conduct an in-depth investigation of foundation-model cliff-learning and study toy models of the phenomenon.
no code implementations • 8 Feb 2023 • Tri Nguyen, Mukul Narwani, Mark Larson, Yicong Li, Shuhan Xie, Hanspeter Pfister, Donglai Wei, Nir Shavit, Lu Mi, Alexandra Pacureanu, Wei-Chung Lee, Aaron T. Kuan
In this task, we provide volumetric XNH images of cortical white matter axons from the mouse brain along with ground truth annotations for axon trajectories.
1 code implementation • 13 Oct 2021 • Lu Mi, Tianxing He, Core Francisco Park, Hao Wang, Yue Wang, Nir Shavit
In this work, we show how data labeled with semantically continuous attributes can be utilized to conduct a quantitative evaluation of latent-space interpolation algorithms, for variational autoencoders.
no code implementations • ICLR 2022 • Lu Mi, Richard Xu, Sridhama Prakhya, Albert Lin, Nir Shavit, Aravinthan Samuel, Srinivas C Turaga
Brain-wide measurements of activity and anatomical connectivity of the $\textit{C. elegans}$ nervous system in principle allow for the development of detailed mechanistic computational models.
no code implementations • CVPR 2021 • Lu Mi, Hang Zhao, Charlie Nash, Xiaohan Jin, Jiyang Gao, Chen Sun, Cordelia Schmid, Nir Shavit, Yuning Chai, Dragomir Anguelov
To address this issue, we introduce a new challenging task to generate HD maps.
1 code implementation • 7 Jan 2021 • Lu Mi, Hao Wang, Yaron Meirovitch, Richard Schalek, Srinivas C. Turaga, Jeff W. Lichtman, Aravinthan D. T. Samuel, Nir Shavit
Single-beam scanning electron microscopes (SEM) are widely used to acquire massive data sets for biomedical study, material analysis, and fabrication inspection.
no code implementations • 18 Jun 2020 • Jonathan S. Rosenfeld, Jonathan Frankle, Michael Carbin, Nir Shavit
We show that the error of iteratively magnitude-pruned networks empirically follows a scaling law with interpretable coefficients that depend on the architecture and task.
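The paper's scaling law has interpretable coefficients that depend on the architecture and task; purely as an illustration of the fitting step, the sketch below recovers a power-law relation between density and error by least squares in log-log space. The data and the coefficients (c, gamma) are synthetic and hypothetical, not taken from the paper.

```python
# Illustrative sketch: fit errors ~ c * density**(-gamma) by linear
# regression on log-transformed data (synthetic, hypothetical values).
import math

def fit_power_law(densities, errors):
    """Return (c, gamma) from a least-squares fit in log-log space."""
    xs = [math.log(d) for d in densities]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - slope * mx)
    return c, -slope  # slope in log-log space is -gamma

# synthetic errors following err = 0.08 * d**(-0.5) exactly
dens = [0.05, 0.1, 0.2, 0.4, 0.8]
errs = [0.08 * d ** -0.5 for d in dens]
c, gamma = fit_power_law(dens, errs)
assert abs(c - 0.08) < 1e-6 and abs(gamma - 0.5) < 1e-6
```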
no code implementations • 4 Dec 2019 • Rati Gelashvili, Nir Shavit, Aleksandar Zlateski
Fast convolutions via transforms, either Winograd or FFT, have emerged as a preferred way of computing convolutional layers, as they greatly reduce the number of required operations.
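To make the operation savings concrete, here is a minimal sketch of the classic Winograd F(2,3) minimal-filtering identity: two outputs of a 3-tap filter from 4 multiplications instead of 6. Real implementations tile 2-D convolutions (e.g. F(2x2, 3x3)); the function names here are illustrative.

```python
# Winograd F(2,3): 2 outputs of a 3-tap filter in 4 multiplies (m1..m4).
def winograd_f23(d, g):
    """d: 4 inputs, g: 3 filter taps -> 2 outputs, 4 multiplies."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_f23(d, g):
    """Direct valid cross-correlation: 6 multiplies, for reference."""
    return [sum(d[j + k] * g[k] for k in range(3)) for j in range(2)]

assert winograd_f23([1.0, 2.0, 3.0, 4.0], [1.0, -1.0, 2.0]) == \
       direct_f23([1.0, 2.0, 3.0, 4.0], [1.0, -1.0, 2.0])
```

In practice the filter-side factors are precomputed once per layer, so the per-tile cost is dominated by the four multiplications.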
no code implementations • 28 Sep 2019 • Lu Mi, Hao Wang, Yonglong Tian, Hao He, Nir Shavit
Uncertainty estimation is an essential step in the evaluation of the robustness of deep learning models in computer vision, especially when applied in risk-sensitive areas.
no code implementations • ICLR 2020 • Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, Nir Shavit
In this work, we present a functional form that approximates the generalization error well in practice.
no code implementations • CVPR 2019 • Yaron Meirovitch, Lu Mi, Hayk Saribekyan, Alexander Matveev, David Rolnick, Nir Shavit
Pixel-accurate tracking of objects is a key element in many computer vision applications, often solved by iterated individual object tracking or instance segmentation followed by object matching.
no code implementations • ICLR 2018 • David Rolnick, Andreas Veit, Serge Belongie, Nir Shavit
Deep neural networks trained on large supervised datasets have led to impressive results in image classification and other tasks.
no code implementations • 30 May 2017 • David Rolnick, Yaron Meirovitch, Toufiq Parag, Hanspeter Pfister, Viren Jain, Jeff W. Lichtman, Edward S. Boyden, Nir Shavit
Deep learning algorithms for connectomics rely upon localized classification, rather than overall morphology.
no code implementations • 4 Mar 2017 • Shibani Santurkar, David Budden, Nir Shavit
Traditional image and video compression algorithms rely on hand-crafted encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the data being compressed.
no code implementations • 23 Feb 2017 • Shibani Santurkar, David Budden, Alexander Matveev, Heather Berlin, Hayk Saribekyan, Yaron Meirovitch, Nir Shavit
Connectomics is an emerging field in neuroscience that aims to reconstruct the 3-dimensional morphology of neurons from electron microscopy (EM) images.
no code implementations • 7 Dec 2016 • Yaron Meirovitch, Alexander Matveev, Hayk Saribekyan, David Budden, David Rolnick, Gergely Odor, Seymour Knowles-Barley, Thouis Raymond Jones, Hanspeter Pfister, Jeff William Lichtman, Nir Shavit
The field of connectomics faces unprecedented "big data" challenges.
no code implementations • ICML 2017 • David Budden, Alexander Matveev, Shibani Santurkar, Shraman Ray Chaudhuri, Nir Shavit
Deep convolutional neural networks (ConvNets) of 3-dimensional kernels allow joint modeling of spatiotemporal features.