no code implementations • 18 Nov 2022 • Yangjun Ruan, Saurabh Singh, Warren Morningstar, Alexander A. Alemi, Sergey Ioffe, Ian Fischer, Joshua V. Dillon
Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning.
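As a minimal illustration of the general ensembling technique this entry builds on (not the paper's specific method), the simplest form averages the predicted class probabilities of independently trained models:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model probability arrays of shape (n_samples, n_classes)."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

# Three toy "models" disagreeing on a 2-class problem:
p1 = np.array([[0.9, 0.1]])
p2 = np.array([[0.6, 0.4]])
p3 = np.array([[0.3, 0.7]])
print(ensemble_predict([p1, p2, p3]))  # [[0.6 0.4]]
```

Averaging reduces the variance of the prediction, which is one source of the accuracy and uncertainty-estimation gains ensembles are known for.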
1 code implementation • 6 Apr 2022 • Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava
We introduce LilNetX, an end-to-end trainable technique for neural networks that enables learning models with a specified accuracy-rate-computation trade-off.
no code implementations • 26 Apr 2021 • Thomas Bird, Johannes Ballé, Saurabh Singh, Philip A. Chou
We unify these steps by directly compressing an implicit representation of the scene, a function that maps spatial coordinates to a radiance vector field, which can then be queried to render arbitrary viewpoints.
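A toy sketch of such an implicit representation (illustrative only; the weights here are random, not a trained scene): a small network maps a 3-D coordinate to a radiance-like vector, so compressing the scene reduces to compressing the network's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny random-weight MLP standing in for a trained implicit scene model.
W1 = rng.standard_normal((3, 16))
W2 = rng.standard_normal((16, 3))

def radiance(xyz):
    """Map a spatial coordinate to a radiance-like 3-vector."""
    h = np.tanh(xyz @ W1)   # hidden features
    return h @ W2           # queried value at this coordinate

print(radiance(np.array([0.1, 0.2, 0.3])).shape)  # (3,)
```

Rendering an arbitrary viewpoint then amounts to querying `radiance` along camera rays, which is why compressing the function itself compresses the whole scene.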
no code implementations • 12 Sep 2020 • Mohan Krishna Kagita, Navod Thilakarathne, Thippa Reddy Gadekallu, Praveen Kumar Reddy Maddikunta, Saurabh Singh
The success of IoT cannot be ignored today, but attacks on and threats to IoT devices and facilities are also increasing day by day.
no code implementations • 26 Aug 2020 • Rajat Bhattacharjya, Vishesh Mishra, Saurabh Singh, Kaustav Goswami, Dip Sankar Banerjee
In this work, we propose a new approximate adder that employs a carry prediction method.
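The idea can be sketched as a block-based approximate adder whose upper bits use a predicted carry instead of waiting for the true carry to ripple out of the lower bits. The predictor below (look at the MSBs of the lower halves) is an assumption for illustration, not the paper's specific scheme:

```python
def approx_add(a, b, k=4):
    """Approximate adder: upper bits add a predicted, not propagated, carry."""
    mask = (1 << k) - 1
    a_lo, b_lo = a & mask, b & mask
    a_hi, b_hi = a >> k, b >> k
    # Predict a carry only when both MSBs of the lower halves are set
    # (illustrative predictor, not the paper's method).
    carry_pred = (a_lo >> (k - 1)) & (b_lo >> (k - 1)) & 1
    lo = (a_lo + b_lo) & mask   # true carry-out of the lower block is dropped
    hi = a_hi + b_hi + carry_pred
    return (hi << k) | lo

print(approx_add(0b10111100, 0b01001101))  # 265, exact here (prediction was right)
```

Breaking the carry chain this way shortens the critical path; the cost is an occasional error in the upper bits when the prediction misses.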
Hardware Architecture
1 code implementation • 23 Jul 2020 • Saurabh Singh, Sami Abu-El-Haija, Nick Johnston, Johannes Ballé, Abhinav Shrivastava, George Toderici
We propose a learned method that jointly optimizes for compressibility along with the task objective for learning the features.
1 code implementation • 17 Jul 2020 • David Minnen, Saurabh Singh
In learning-based approaches to image compression, codecs are developed by optimizing a computational model to minimize a rate-distortion objective.
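The generic rate-distortion objective referred to here has the form R + λ·D; a minimal sketch (illustrative, not this paper's specific model), with rate as expected code length under the entropy model and MSE as distortion:

```python
import numpy as np

def rate(probs):
    """Expected code length in bits: mean of -log2 p over the symbols."""
    return float(np.mean(-np.log2(probs)))

def rd_loss(probs, x, x_hat, lmbda=0.01):
    """Rate-distortion objective R + lambda * D with MSE distortion."""
    mse = float(np.mean((x - x_hat) ** 2))
    return rate(probs) + lmbda * mse

probs = np.array([0.5, 0.25, 0.25])   # symbol probabilities under the model
x = np.array([1.0, 2.0])
x_hat = np.array([1.0, 1.0])          # lossy reconstruction
print(rd_loss(probs, x, x_hat))       # 5/3 bits + 0.01 * 0.5 MSE
```

Training sweeps λ to trade bits for fidelity; each λ yields one point on the codec's rate-distortion curve.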
no code implementations • CVPR 2020 • Danhang Tang, Saurabh Singh, Philip A. Chou, Christian Haene, Mingsong Dou, Sean Fanello, Jonathan Taylor, Philip Davidson, Onur G. Guleryuz, Yinda Zhang, Shahram Izadi, Andrea Tagliasacchi, Sofien Bouaziz, Cem Keskin
We describe a novel approach for compressing truncated signed distance fields (TSDF) stored in 3D voxel grids, and their corresponding textures.
2 code implementations • 23 Apr 2020 • Aaron Ewall-Wice, Nicholas Kern, Joshua S. Dillon, Adrian Liu, Aaron Parsons, Saurabh Singh, Adam Lanman, Paul La Plante, Nicolas Fagnoni, Eloy de Lera Acedo, David R. DeBoer, Chuneeta Nunhokee, Philip Bull, Tzu-Ching Chang, T. Joseph Lazio, James Aguirre, Sean Weinberg
We introduce DAYENU, a linear spectral filter for HI intensity mapping that achieves the desirable foreground-mitigation and error-minimization properties of inverse covariance weighting with minimal modeling of the underlying data.
Cosmology and Nongalactic Astrophysics Instrumentation and Methods for Astrophysics
1 code implementation • CVPR 2020 • Kamal Gupta, Saurabh Singh, Abhinav Shrivastava
Unsupervised representation learning holds the promise of exploiting large amounts of unlabeled data to learn general representations.
11 code implementations • CVPR 2020 • Saurabh Singh, Shankar Krishnan
Our method outperforms BN and other alternatives in a variety of settings for all batch sizes.
Ranked #656 on Image Classification on ImageNet
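The entry does not name the method; as an illustration of a batch-size-independent alternative to BN in this spirit, here is a sketch in the style of Filter Response Normalization with a thresholded linear unit (an assumption about the formulation, not necessarily the paper's exact method):

```python
import numpy as np

def frn_tlu(x, gamma=1.0, beta=0.0, tau=0.0, eps=1e-6):
    """Normalize each channel by its mean squared activation over the
    spatial dimensions (no batch statistics), then apply a thresholded
    linear unit. x has shape (N, H, W, C)."""
    nu2 = np.mean(x ** 2, axis=(1, 2), keepdims=True)  # per-sample, per-channel
    y = gamma * x / np.sqrt(nu2 + eps) + beta
    return np.maximum(y, tau)  # learned threshold replaces plain ReLU

out = frn_tlu(np.random.randn(2, 4, 4, 3))
print(out.shape)  # (2, 4, 4, 3)
```

Because no statistics are shared across the batch dimension, behavior is identical at any batch size, which is the property the entry's "all batch sizes" claim turns on.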
no code implementations • ICLR 2020 • Deniz Oktay, Johannes Ballé, Saurabh Singh, Abhinav Shrivastava
We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a "latent" space, amounting to a reparameterization.
no code implementations • ICCV 2019 • Saurabh Singh, Abhinav Shrivastava
Batch normalization (BN) has been very effective for deep learning and is widely used.
no code implementations • 31 May 2018 • David Minnen, George Toderici, Saurabh Singh, Sung Jin Hwang, Michele Covell
The leading approach for image compression with artificial neural networks (ANNs) is to learn a nonlinear transform and a fixed entropy model that are optimized for rate-distortion performance.
no code implementations • 7 Feb 2018 • David Minnen, George Toderici, Michele Covell, Troy Chinen, Nick Johnston, Joel Shor, Sung Jin Hwang, Damien Vincent, Saurabh Singh
Deep neural networks represent a powerful class of function approximators that can learn to compress and reconstruct images.
13 code implementations • ICLR 2018 • Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
We describe an end-to-end trainable model for image compression based on variational autoencoders.
no code implementations • EMNLP 2017 • Jasabanta Patro, Bidisha Samanta, Saurabh Singh, Abhipsa Basu, Prithwish Mukherjee, Monojit Choudhury, Animesh Mukherjee
Based on this likeliness estimate we asked annotators to re-annotate the language tags of foreign words in predominantly native contexts.
2 code implementations • ICCV 2017 • Chen Sun, Abhinav Shrivastava, Saurabh Singh, Abhinav Gupta
What will happen if we increase the dataset size by 10x or 100x?
Ranked #2 on Semantic Segmentation on PASCAL VOC 2007
no code implementations • 18 May 2017 • Michele Covell, Nick Johnston, David Minnen, Sung Jin Hwang, Joel Shor, Saurabh Singh, Damien Vincent, George Toderici
We introduce a multi-pass training method that combines the goals of high-quality reconstruction both in areas around stop-code masking and in highly detailed areas.
no code implementations • ICCV 2017 • Tanmay Gupta, Kevin Shih, Saurabh Singh, Derek Hoiem
In this paper, we investigate a vision-language embedding as a core representation and show that it leads to better cross-task transfer than standard multi-task learning.
no code implementations • CVPR 2018 • Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, George Toderici
We propose a method for lossy image compression based on recurrent, convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000, and JPEG as measured by MS-SSIM.
2 code implementations • ICCV 2017 • Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, Saurabh Singh
Traditionally, supervision for this problem is expressed as sets of points obeying an ordinal relationship -- an anchor point $x$ is similar to a set of positive points $Y$ and dissimilar to a set of negative points $Z$ -- and a loss defined over these distances is minimized.
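The traditional ordinal supervision described in this entry can be sketched as a triplet-style loss (a minimal illustration of the baseline formulation, not the paper's proxy-based method):

```python
import numpy as np

def triplet_loss(anchor, pos, neg, margin=0.2):
    """Anchor should be closer to the positive than to the negative,
    by at least `margin` (squared Euclidean distances)."""
    d_pos = np.sum((anchor - pos) ** 2)
    d_neg = np.sum((anchor - neg) ** 2)
    return float(max(0.0, d_pos - d_neg + margin))

a = np.array([0.0, 0.0])
print(triplet_loss(a, np.array([0.0, 0.1]), np.array([1.0, 0.0])))  # 0.0
```

Because the loss depends on sampled triplets, training quality hinges on triplet mining; replacing the sampled points with learned proxies is the paper's way around that sampling problem.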
no code implementations • 15 Mar 2017 • Jasabanta Patro, Bidisha Samanta, Saurabh Singh, Prithwish Mukherjee, Monojit Choudhury, Animesh Mukherjee
We first propose a context-based clustering method to sample a set of candidate words from the social media data. Next, we propose three novel and similar metrics based on how users use these words in different tweets; these metrics were used to score and rank the candidate words by their likeliness of being borrowed.
no code implementations • CVPR 2016 • Saurabh Singh, Derek Hoiem, David Forsyth
We describe a method to find such landmarks by finding a sequence of latent landmarks, each with a prediction model.
no code implementations • NeurIPS 2016 • Saurabh Singh, Derek Hoiem, David Forsyth
When viewed as a regularization method, Swapout inhibits co-adaptation of units not only within a layer, as dropout does, but also across network layers.
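The swapout rule combines the input and the layer output with independent per-unit Bernoulli masks, Y = Θ₁ ⊙ X + Θ₂ ⊙ F(X); a minimal sketch (retain probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def swapout(x, fx, p1=0.8, p2=0.8):
    """Y = Theta1 * X + Theta2 * F(X) with independent per-unit masks.
    Dropout on F(x) and residual-style skips fall out as special cases
    of the mask probabilities."""
    t1 = (rng.random(x.shape) < p1).astype(float)
    t2 = (rng.random(x.shape) < p2).astype(float)
    return t1 * x + t2 * fx

x = np.ones((2, 3))
fx = 2 * np.ones((2, 3))          # stand-in for a layer's output F(x)
print(swapout(x, fx, 1.0, 1.0))   # both masks fully on: x + F(x), all 3s
```

Setting p2 = 0 recovers a pure skip of the layer, while p1 = 1, p2 < 1 behaves like dropout applied to the residual branch, which is why the per-unit masks regularize across layers as well as within them.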
no code implementations • CVPR 2016 • Kevin J. Shih, Saurabh Singh, Derek Hoiem
We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query.
no code implementations • 22 Jul 2015 • Kevin J. Shih, Arun Mallya, Saurabh Singh, Derek Hoiem
We present a simple deep learning framework to simultaneously predict keypoint locations and their respective visibilities and use those to achieve state-of-the-art performance for fine-grained classification.
no code implementations • CVPR 2015 • Saurabh Singh, Derek Hoiem, David Forsyth
We propose a general method to find landmarks in images of objects using both appearance and spatial context.