Search Results for author: Saurabh Singh

Found 28 papers, 8 papers with code

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification

1 code implementation 6 Apr 2022 Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava

We introduce LilNetX, an end-to-end trainable technique for neural networks that enables learning models with a specified accuracy-rate-computation trade-off.

Model Compression

3D Scene Compression through Entropy Penalized Neural Representation Functions

no code implementations 26 Apr 2021 Thomas Bird, Johannes Ballé, Saurabh Singh, Philip A. Chou

We unify these steps by directly compressing an implicit representation of the scene, a function that maps spatial coordinates to a radiance vector field, which can then be queried to render arbitrary viewpoints.

A Review on Cyber Crimes on the Internet of Things

no code implementations 12 Sep 2020 Mohan Krishna Kagita, Navod Thilakarathne, Thippa Reddy Gadekallu, Praveen Kumar Reddy Maddikunta, Saurabh Singh

The success of IoT cannot be ignored in today's landscape; at the same time, attacks and threats on IoT devices and facilities are increasing day by day.

An Approximate Carry Estimating Simultaneous Adder with Rectification

no code implementations 26 Aug 2020 Rajat Bhattacharjya, Vishesh Mishra, Saurabh Singh, Kaustav Goswami, Dip Sankar Banerjee

In this work, we propose a new approximate adder that employs a carry prediction method.
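The general idea behind carry-prediction adders can be sketched as follows: the operands are split into small blocks, each block is summed independently, and the carry into a block is *predicted* from the previous block's most significant bits rather than rippled exactly. This is only an illustrative toy scheme; the paper's carry estimator and its rectification step differ, and the function name `approx_add` and block parameters are invented here.

```python
def approx_add(a: int, b: int, width: int = 16, block: int = 4) -> int:
    """Block-based approximate adder: each block is summed with a
    *predicted* carry-in instead of the true ripple carry. The carry
    into block i is predicted as 1 only when both operands' top bits
    of block i-1 would generate a carry."""
    mask = (1 << block) - 1
    result = 0
    carry_in = 0  # the least-significant block has no carry-in
    for i in range(0, width, block):
        a_blk = (a >> i) & mask
        b_blk = (b >> i) & mask
        s = a_blk + b_blk + carry_in
        result |= (s & mask) << i
        # predict the carry into the NEXT block from this block's MSBs
        msb = block - 1
        carry_in = ((a_blk >> msb) & 1) & ((b_blk >> msb) & 1)
    return result
```

When both MSBs generate a carry the prediction is exact (e.g. 8 + 8 = 16), but a carry produced purely by propagation is missed (e.g. 15 + 1 yields 0 here), which is exactly the kind of error a rectification stage would target.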

Hardware Architecture

End-to-end Learning of Compressible Features

1 code implementation 23 Jul 2020 Saurabh Singh, Sami Abu-El-Haija, Nick Johnston, Johannes Ballé, Abhinav Shrivastava, George Toderici

We propose a learned method that jointly optimizes for compressibility along with the task objective for learning the features.


Channel-wise Autoregressive Entropy Models for Learned Image Compression

no code implementations 17 Jul 2020 David Minnen, Saurabh Singh

In learning-based approaches to image compression, codecs are developed by optimizing a computational model to minimize a rate-distortion objective.
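The rate-distortion objective mentioned here has the generic form L = R + λ·D: the bits spent coding the quantized latents plus a weighted reconstruction error. A minimal sketch of that objective (the function name and the plug-in rate estimate are assumptions for illustration, not the paper's channel-wise model):

```python
import numpy as np

def rate_distortion_loss(x, y_hat, probs, lam=0.01):
    """Generic rate-distortion objective L = R + lam * D.

    x      : original signal
    y_hat  : reconstruction after encode/quantize/decode
    probs  : probability the entropy model assigns to each coded symbol
    lam    : trade-off weight between rate and distortion
    """
    rate = -np.sum(np.log2(probs))          # bits to entropy-code the symbols
    distortion = np.mean((x - y_hat) ** 2)  # MSE reconstruction error
    return rate + lam * distortion
```

Training sweeps λ to trace out the rate-distortion curve: larger λ buys lower distortion at a higher bit rate.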

Image Compression

DAYENU: A Simple Filter of Smooth Foregrounds for Intensity Mapping Power Spectra

2 code implementations 23 Apr 2020 Aaron Ewall-Wice, Nicholas Kern, Joshua S. Dillon, Adrian Liu, Aaron Parsons, Saurabh Singh, Adam Lanman, Paul La Plante, Nicolas Fagnoni, Eloy de Lera Acedo, David R. DeBoer, Chuneeta Nunhokee, Philip Bull, Tzu-Ching Chang, T. Joseph Lazio, James Aguirre, Sean Weinberg

We introduce DAYENU, a linear, spectral filter for HI intensity mapping that achieves the desirable foreground mitigation and error minimization properties of inverse co-variance weighting with minimal modeling of the underlying data.

Cosmology and Nongalactic Astrophysics Instrumentation and Methods for Astrophysics

PatchVAE: Learning Local Latent Codes for Recognition

1 code implementation CVPR 2020 Kamal Gupta, Saurabh Singh, Abhinav Shrivastava

Unsupervised representation learning holds the promise of exploiting large amounts of unlabeled data to learn general representations.

Representation Learning

Scalable Model Compression by Entropy Penalized Reparameterization

no code implementations ICLR 2020 Deniz Oktay, Johannes Ballé, Saurabh Singh, Abhinav Shrivastava

We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a "latent" space, amounting to a reparameterization.
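The "entropy penalized" part of this reparameterization can be illustrated with a plug-in estimate: quantize the latent weight representation and charge the empirical entropy of the resulting symbols as the coded model size. This is a sketch only; the paper learns a probability model rather than using empirical frequencies, and `coded_weight_size_bits` is an invented name.

```python
import numpy as np

def coded_weight_size_bits(latent):
    """Estimate the entropy-coded size of quantized latent weights.
    Network weights are a function of this latent; penalizing its
    coded size during training trades accuracy for model size."""
    q = np.round(latent).astype(int)              # quantized latent symbols
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()                     # empirical symbol frequencies
    return float(-np.sum(counts * np.log2(p)))    # total coded size in bits
```

A latent whose symbols all collapse to one value costs zero bits, which is why the penalty pushes parameters toward a highly compressible (low-entropy) configuration.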

General Classification Model Compression

Image-Dependent Local Entropy Models for Learned Image Compression

no code implementations 31 May 2018 David Minnen, George Toderici, Saurabh Singh, Sung Jin Hwang, Michele Covell

The leading approach for image compression with artificial neural networks (ANNs) is to learn a nonlinear transform and a fixed entropy model that are optimized for rate-distortion performance.

Image Compression

Spatially adaptive image compression using a tiled deep network

no code implementations 7 Feb 2018 David Minnen, George Toderici, Michele Covell, Troy Chinen, Nick Johnston, Joel Shor, Sung Jin Hwang, Damien Vincent, Saurabh Singh

Deep neural networks represent a powerful class of function approximators that can learn to compress and reconstruct images.

Image Compression

Target-Quality Image Compression with Recurrent, Convolutional Neural Networks

no code implementations 18 May 2017 Michele Covell, Nick Johnston, David Minnen, Sung Jin Hwang, Joel Shor, Saurabh Singh, Damien Vincent, George Toderici

Our methods introduce a multi-pass training method to combine the training goals of high-quality reconstructions in areas around stop-code masking as well as in highly-detailed areas.

Image Compression

Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks

no code implementations ICCV 2017 Tanmay Gupta, Kevin Shih, Saurabh Singh, Derek Hoiem

In this paper, we investigate a vision-language embedding as a core representation and show that it leads to better cross-task transfer than standard multi-task learning.

Multi-Task Learning Question Answering +2

Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks

no code implementations CVPR 2018 Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, George Toderici

We propose a method for lossy image compression based on recurrent, convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000, and JPEG as measured by MS-SSIM.

Image Compression MS-SSIM +1

No Fuss Distance Metric Learning using Proxies

2 code implementations ICCV 2017 Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, Saurabh Singh

Traditionally, supervision for this problem is expressed as sets of points that follow an ordinal relationship: an anchor point $x$ is similar to a set of positive points $Y$ and dissimilar to a set of negative points $Z$, and a loss defined over these distances is minimized.
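The proxy idea replaces distances to individual positive and negative samples with distances to a small set of learned per-class proxies, giving an NCA-style loss over proxies. A minimal sketch (the function name and the fixed-proxy setup are assumptions; in the paper the proxies are trained jointly with the embedding):

```python
import numpy as np

def proxy_nca_loss(x, proxies, label):
    """Proxy-NCA-style loss for one sample.

    x       : embedding of the sample, shape (d,)
    proxies : one proxy vector per class, shape (C, d)
    label   : index of the sample's true class
    """
    d = np.sum((proxies - x) ** 2, axis=1)  # squared distance to each proxy
    logits = -d                             # closer proxy -> higher logit
    logits -= logits.max()                  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[label])                # -log prob of the true proxy
```

Because each sample only compares against C proxies instead of all other samples, no triplet mining is needed, which is the "no fuss" in the title.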

Metric Learning Semantic Similarity +2

Is this word borrowed? An automatic approach to quantify the likeliness of borrowing in social media

no code implementations 15 Mar 2017 Jasabanta Patro, Bidisha Samanta, Saurabh Singh, Prithwish Mukherjee, Monojit Choudhury, Animesh Mukherjee

We first propose a context-based clustering method to sample a set of candidate words from the social media data. Next, we propose three novel and similar metrics based on the usage of these words by users in different tweets; these metrics were used to score and rank the candidate words, indicating their likeliness of being borrowed.

Learning to Localize Little Landmarks

no code implementations CVPR 2016 Saurabh Singh, Derek Hoiem, David Forsyth

We describe a method to find such landmarks by finding a sequence of latent landmarks, each with a prediction model.

Swapout: Learning an ensemble of deep architectures

no code implementations NeurIPS 2016 Saurabh Singh, Derek Hoiem, David Forsyth

When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers.
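Swapout's stochastic rule can be written as y = Θ₁ ⊙ x + Θ₂ ⊙ F(x) with independent per-unit Bernoulli masks, so dropping a unit, identity (skip), plain F(x), and the full residual x + F(x) all arise as special cases. A minimal sketch (the `seed` argument is an assumption added to make the draw reproducible):

```python
import numpy as np

def swapout(x, fx, p1=0.5, p2=0.5, seed=0):
    """Swapout: y = t1 * x + t2 * F(x) with independent per-unit
    Bernoulli masks t1 ~ B(p1), t2 ~ B(p2). Per unit, the output is
    0 (dropped), x (skip), F(x) (dropout-like), or x + F(x) (residual)."""
    rng = np.random.default_rng(seed)
    t1 = rng.random(x.shape) < p1   # mask on the identity path
    t2 = rng.random(x.shape) < p2   # mask on the transformed path
    return t1 * x + t2 * fx
```

Setting p1 = 1, p2 = 1 recovers a deterministic residual unit, while p1 = 1, p2 ∈ (0, 1) resembles stochastic depth applied per unit rather than per layer.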

Where To Look: Focus Regions for Visual Question Answering

no code implementations CVPR 2016 Kevin J. Shih, Saurabh Singh, Derek Hoiem

We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query.

Question Answering Visual Question Answering +1

Part Localization using Multi-Proposal Consensus for Fine-Grained Categorization

no code implementations 22 Jul 2015 Kevin J. Shih, Arun Mallya, Saurabh Singh, Derek Hoiem

We present a simple deep learning framework to simultaneously predict keypoint locations and their respective visibilities and use those to achieve state-of-the-art performance for fine-grained classification.

General Classification

Learning a Sequential Search for Landmarks

no code implementations CVPR 2015 Saurabh Singh, Derek Hoiem, David Forsyth

We propose a general method to find landmarks in images of objects using both appearance and spatial context.
