Search Results for author: Chinmay Hegde

Found 56 papers, 12 papers with code

Smooth-Reduce: Leveraging Patches for Improved Certified Robustness

no code implementations12 May 2022 Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde

Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
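
A minimal sketch of the randomized-smoothing certificate this abstract builds on (standard Cohen-et-al.-style RS, not the paper's patch-based Smooth-Reduce scheme; `model.predict` and `model.num_classes` are hypothetical stand-ins):

```python
import numpy as np
from scipy.stats import binomtest, norm

def certify(model, x, sigma=0.25, n=1000, alpha=0.001):
    """Majority vote over Gaussian-perturbed copies of x, then a certified L2 radius."""
    counts = np.zeros(model.num_classes, dtype=int)
    for _ in range(n):
        counts[model.predict(x + sigma * np.random.randn(*x.shape))] += 1
    top = int(counts.argmax())
    # Two-sided exact CI at confidence 1 - 2*alpha gives a one-sided lower bound at 1 - alpha.
    p_lower = binomtest(int(counts[top]), n).proportion_ci(1 - 2 * alpha).low
    if p_lower <= 0.5:
        return None  # abstain: no certificate at this confidence
    return top, sigma * norm.ppf(p_lower)  # certified L2 radius
```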

Selective Network Linearization for Efficient Private Inference

no code implementations4 Feb 2022 Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde

To reduce private inference (PI) latency, we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
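
A hedged sketch of the selection idea: give each ReLU a trainable gate interpolating toward the identity, and penalize the gates so most units become linear (and thus cheap under PI). The gating parameterization is illustrative, not the paper's exact algorithm:

```python
import torch
import torch.nn as nn

class GatedReLU(nn.Module):
    """alpha = 1 behaves as ReLU; alpha = 0 is the linear (identity) path."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))

    def forward(self, x):
        a = self.alpha.clamp(0.0, 1.0)
        return a * torch.relu(x) + (1 - a) * x

def gate_sparsity_penalty(model, lam=1e-3):
    # Added to the task loss; pushes gates toward 0 so ReLUs are linearized.
    return lam * sum(m.alpha.abs().sum()
                     for m in model.modules() if isinstance(m, GatedReLU))
```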

MDPGT: Momentum-based Decentralized Policy Gradient Tracking

1 code implementation6 Dec 2021 Zhanhong Jiang, Xian Yeow Lee, Sin Yong Tan, Kai Liang Tan, Aditya Balu, Young M. Lee, Chinmay Hegde, Soumik Sarkar

We propose a novel policy gradient method for multi-agent reinforcement learning, which leverages two different variance-reduction techniques and does not require large batches over iterations.

Multi-agent Reinforcement Learning · Policy Gradient Methods +2

Adversarial Token Attacks on Vision Transformers

no code implementations8 Oct 2021 Ameya Joshi, Gauri Jagatap, Chinmay Hegde

Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks.
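
For context, a minimal version of the patch-tokenization step that such token attacks target (patch size and shapes are illustrative):

```python
import torch

def to_patch_tokens(img, patch=16):
    """img: (C, H, W) -> (num_patches, C * patch * patch) flattened tokens."""
    c, h, w = img.shape
    t = img.unfold(1, patch, patch).unfold(2, patch, patch)   # (C, H/p, W/p, p, p)
    return t.permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch)
```

A token attack then perturbs only a small subset of the rows of this matrix.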

Differentiable Spline Approximations

no code implementations NeurIPS 2021 Minsu Cho, Aditya Balu, Ameya Joshi, Anjana Deva Prasad, Biswajit Khara, Soumik Sarkar, Baskar Ganapathysubramanian, Adarsh Krishnamurthy, Chinmay Hegde

Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.

3D Point Cloud Reconstruction · Point cloud reconstruction +1
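
The "differentiable layer with a redesigned Jacobian" pattern from this abstract can be sketched with a custom autograd op; the forward here is a placeholder non-smooth function, not the paper's spline evaluator:

```python
import torch

class ApproxJacobianOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.floor(x)  # placeholder: non-differentiable forward

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * torch.ones_like(x)  # hand-designed approximate Jacobian

y = ApproxJacobianOp.apply(torch.randn(4, requires_grad=True))
```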

NeuFENet: Neural Finite Element Solutions with Theoretical Bounds for Parametric PDEs

no code implementations4 Oct 2021 Biswajit Khara, Aditya Balu, Ameya Joshi, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian

We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs).

Sphynx: ReLU-Efficient Network Design for Private Inference

no code implementations17 Jun 2021 Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, Chinmay Hegde

The emergence of deep learning has been accompanied by privacy concerns surrounding users' data and service providers' models.

Provably Convergent Algorithms for Solving Inverse Problems Using Generative Models

no code implementations13 May 2021 Viraj Shah, Rakib Hyder, M. Salman Asif, Chinmay Hegde

The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by deep generative networks).

NURBS-Diff: A Differentiable Programming Module for NURBS

no code implementations29 Apr 2021 Anjana Deva Prasad, Aditya Balu, Harshil Shah, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy

These derivatives are used to define an approximate Jacobian for performing the "backward" evaluation when training deep learning models.

Point cloud reconstruction

Cross-Gradient Aggregation for Decentralized Learning from Non-IID data

1 code implementation2 Mar 2021 Yasaman Esfandiari, Sin Yong Tan, Zhanhong Jiang, Aditya Balu, Ethan Herron, Chinmay Hegde, Soumik Sarkar

Inspired by ideas from continual learning, we propose Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm where (i) each agent aggregates cross-gradient information, i.e., derivatives of its model with respect to its neighbors' datasets, and (ii) updates its model using a projected gradient based on quadratic programming (QP).

Continual Learning
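
A stripped-down sketch of step (i) above: each agent differentiates its own model on every neighbor's batch and combines the cross-gradients. Plain averaging stands in for the paper's QP-based projection:

```python
import torch

def cga_step(model, neighbor_batches, loss_fn, lr=0.01):
    """neighbor_batches: one (x, y) batch per neighboring agent."""
    grads = []
    for x, y in neighbor_batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        grads.append([p.grad.detach().clone() for p in model.parameters()])
    with torch.no_grad():
        for i, p in enumerate(model.parameters()):
            p -= lr * torch.stack([g[i] for g in grads]).mean(dim=0)
```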

Provable Compressed Sensing with Generative Priors via Langevin Dynamics

no code implementations25 Feb 2021 Thanh V. Nguyen, Gauri Jagatap, Chinmay Hegde

Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval and super-resolution.

Super-Resolution
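
A minimal sketch of the Langevin-dynamics recovery loop this abstract refers to, assuming a pretrained generator `G` mapping a latent `z` to a signal and a linear measurement matrix `A` (step size and temperature are illustrative):

```python
import torch

def langevin_recover(G, A, y, z_dim=100, steps=500, eta=1e-3, beta=1e3):
    z = torch.randn(z_dim, requires_grad=True)
    for _ in range(steps):
        loss = ((A @ G(z) - y) ** 2).sum()           # measurement misfit
        grad, = torch.autograd.grad(loss, z)
        with torch.no_grad():                        # noisy gradient step
            z += -eta * grad + (2 * eta / beta) ** 0.5 * torch.randn_like(z)
    return G(z).detach()
```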

Deep Generative Models that Solve PDEs: Distributed Computing for Training Large Data-Free Models

no code implementations24 Jul 2020 Sergio Botelho, Ameya Joshi, Biswajit Khara, Soumik Sarkar, Chinmay Hegde, Santi Adavani, Baskar Ganapathysubramanian

Here we report on a software framework for data-parallel distributed deep learning that resolves the twin challenges of training these large SciML models: training in reasonable time and distributing the storage requirements.

Distributed Computing
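
The data-parallel pattern such frameworks build on looks roughly like standard PyTorch DDP (launcher and environment setup omitted; this is generic boilerplate, not the paper's framework):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(model, loader, loss_fn, epochs=1):
    dist.init_process_group("nccl")                  # one process per GPU
    device = torch.device("cuda", dist.get_rank() % torch.cuda.device_count())
    model = DDP(model.to(device), device_ids=[device.index])
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()                               # DDP all-reduces gradients
```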

Hyperparameter Optimization in Neural Networks via Structured Sparse Recovery

no code implementations7 Jul 2020 Minsu Cho, Mohammadreza Soltani, Chinmay Hegde

In this paper, we study two important problems in the automated design of neural networks, Hyper-parameter Optimization (HPO) and Neural Architecture Search (NAS), through the lens of sparse recovery methods.

Hyperparameter Optimization · Neural Architecture Search

ESPN: Extremely Sparse Pruned Networks

1 code implementation28 Jun 2020 Minsu Cho, Ameya Joshi, Chinmay Hegde

Deep neural networks are often highly overparameterized, prohibiting their use in compute-limited systems.

Network Pruning

Benefits of Jointly Training Autoencoders: An Improved Neural Tangent Kernel Analysis

no code implementations27 Nov 2019 Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

Starting from a randomly initialized autoencoder network, we rigorously prove the linear convergence of gradient descent in two learning regimes, namely: (i) the weakly-trained regime where only the encoder is trained, and (ii) the jointly-trained regime where both the encoder and the decoder are trained.

On Higher-order Moments in Adam

no code implementations15 Oct 2019 Zhanhong Jiang, Aditya Balu, Sin Yong Tan, Young M. Lee, Chinmay Hegde, Soumik Sarkar

In this paper, we investigate the popular deep learning optimization routine, Adam, from the perspective of statistical moments.
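
For reference, the standard first- and second-moment Adam update that a moment-based analysis starts from:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad        # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)           # bias corrections, t >= 1
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```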

Surrogate-Based Constrained Langevin Sampling With Applications to Optimal Material Configuration Design

no code implementations25 Sep 2019 Thanh V Nguyen, Youssef Mroueh, Samuel C. Hoffman, Payel Das, Pierre Dognin, Giuseppe Romano, Chinmay Hegde

We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied.

Phase Retrieval using Untrained Neural Network Priors

no code implementations NeurIPS Workshop Deep_Invers 2019 Gauri Jagatap, Chinmay Hegde

Untrained deep neural networks as image priors have been recently introduced for linear inverse imaging problems such as denoising, super-resolution, inpainting, and compressive sensing, with promising performance gains over hand-crafted image priors such as sparsity.

Compressive Sensing · Denoising +1
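
A deep-image-prior-style sketch of recovery with an untrained network prior for (magnitude-only) phase retrieval; `net` is a randomly initialized generator and `seed` its fixed input, both assumed defined elsewhere:

```python
import torch

def fit_untrained_prior(net, seed, A, y, steps=2000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = net(seed).flatten()
        loss = (((A @ x).abs() - y) ** 2).sum()  # fit magnitudes |Ax| to y
        loss.backward()
        opt.step()
    return net(seed).detach()
```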

Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents

1 code implementation5 Sep 2019 Xian Yeow Lee, Sambit Ghadai, Kai Liang Tan, Chinmay Hegde, Soumik Sarkar

In this work, we first frame the problem as an optimization problem of minimizing the RL agent's cumulative reward, with decoupled constraints serving as the attack budget.

Reinforcement Learning

Algorithmic Guarantees for Inverse Imaging with Untrained Network Priors

2 code implementations NeurIPS 2019 Gauri Jagatap, Chinmay Hegde

Specifically, we consider the problem of solving linear inverse problems, such as compressive sensing, as well as non-linear problems, such as compressive phase retrieval.

Compressive Sensing · Denoising +1

Encoding Invariances in Deep Generative Models

no code implementations4 Jun 2019 Viraj Shah, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde

Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions.

Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers

1 code implementation ICCV 2019 Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde

We propose a novel approach to generate such 'semantic' adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model.
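
A hedged sketch of that recipe: ascend the classifier's loss over the generator's input parameters, so the adversarial example stays on the generator's manifold (`G`, `classifier`, and the latent dimension are assumptions):

```python
import torch
import torch.nn.functional as F

def semantic_attack(G, classifier, y_true, z_dim=128, steps=100, lr=0.05):
    z = torch.randn(z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(G(z).unsqueeze(0))
        (-F.cross_entropy(logits, y_true.view(1))).backward()
        opt.step()                       # minimizing -loss ascends the loss
    return G(z).detach()
```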

A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations

no code implementations11 Apr 2019 Chinmay Hegde, Fritz Keinert, Eric S. Weber

We introduce a modified Kaczmarz algorithm for solving systems of linear equations in a distributed environment, i.e., the equations within the system are distributed over multiple nodes within a network.
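
The classical (centralized) Kaczmarz iteration the distributed variant builds on, for $Ax = b$:

```python
import numpy as np

def kaczmarz(A, b, sweeps=50):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):      # project onto hyperplane <a_i, x> = b_i
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x
```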

Alternating Phase Projected Gradient Descent with Generative Priors for Solving Compressive Phase Retrieval

no code implementations7 Mar 2019 Rakib Hyder, Viraj Shah, Chinmay Hegde, M. Salman Asif

We empirically show that the performance of our method with projected gradient descent is superior to the existing approach for solving phase retrieval under generative priors.

Signal Reconstruction from Modulo Observations

1 code implementation3 Dec 2018 Viraj Shah, Chinmay Hegde

We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements).

Physics-aware Deep Generative Models for Creating Synthetic Microstructures

no code implementations21 Nov 2018 Rahul Singh, Viraj Shah, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde

The first model is a WGAN model that uses a finite number of training images to synthesize new microstructures that weakly satisfy the physical invariances respected by the original data.

Stochastic Optimization

Algorithmic Aspects of Inverse Problems Using Generative Models

no code implementations8 Oct 2018 Chinmay Hegde

The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by generative adversarial networks, or GANs).

Learning ReLU Networks via Alternating Minimization

no code implementations20 Jun 2018 Gauri Jagatap, Chinmay Hegde

We propose and analyze a new family of algorithms for training neural networks with ReLU activations.

Autoencoders Learn Generative Linear Models

no code implementations2 Jun 2018 Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

For each of these models, we prove that under suitable choices of hyperparameters, architectures, and initialization, autoencoders learned by gradient descent can successfully recover the parameters of the corresponding model.

On Consensus-Optimality Trade-offs in Collaborative Deep Learning

no code implementations30 May 2018 Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar

In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality.

On Learning Sparsely Used Dictionaries from Incomplete Samples

no code implementations ICML 2018 Thanh V. Nguyen, Akshay Soni, Chinmay Hegde

Second, we propose an initialization algorithm that utilizes a small number of extra fully observed samples to produce such a coarse initial estimate.

Dictionary Learning

Solving Linear Inverse Problems Using GAN Priors: An Algorithm with Provable Guarantees

1 code implementation23 Feb 2018 Viraj Shah, Chinmay Hegde

In this work, we advocate the idea of replacing hand-crafted priors, such as sparsity, with a Generative Adversarial Network (GAN) to solve linear inverse problems such as compressive sensing.

Compressive Sensing
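
A hedged sketch of the projected-gradient recipe this abstract describes: a gradient step on the measurement loss, then a projection onto the generator's range approximated by a few inner optimization steps on the latent code (`G`, dimensions, and step counts are illustrative):

```python
import torch

def pgd_gan(G, A, y, x_dim, z_dim=100, steps=50, eta=0.5, inner=20):
    x = torch.zeros(x_dim)
    z = torch.randn(z_dim, requires_grad=True)
    for _ in range(steps):
        x = x - eta * (A.T @ (A @ x - y))    # gradient step on ||Ax - y||^2
        opt = torch.optim.Adam([z], lr=1e-2)
        for _ in range(inner):               # approximate projection onto range(G)
            opt.zero_grad()
            ((G(z) - x) ** 2).sum().backward()
            opt.step()
        x = G(z).detach()
    return x
```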

Fast Low-Rank Matrix Estimation without the Condition Number

no code implementations8 Dec 2017 Mohammadreza Soltani, Chinmay Hegde

In this paper, we provide a novel algorithmic framework that achieves the best of both worlds: asymptotically as fast as factorization methods, while having no dependence on the condition number.

Fast, Sample-Efficient Algorithms for Structured Phase Retrieval

no code implementations NeurIPS 2017 Gauri Jagatap, Chinmay Hegde

For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to $O(ks\log n)$.

A Forward-Backward Approach for Visualizing Information Flow in Deep Networks

no code implementations16 Nov 2017 Aditya Balu, Thanh V. Nguyen, Apurva Kokate, Chinmay Hegde, Soumik Sarkar

We introduce a new, systematic framework for visualizing information flow in deep networks.

Provably Accurate Double-Sparse Coding

1 code implementation9 Nov 2017 Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

To our knowledge, our work introduces the first computationally efficient algorithm for double-sparse coding that enjoys rigorous statistical guarantees.

Demixing Structured Superposition Signals from Periodic and Aperiodic Nonlinear Observations

no code implementations8 Aug 2017 Mohammadreza Soltani, Chinmay Hegde

We consider the demixing problem of two (or more) structured high-dimensional vectors from a limited number of nonlinear observations where this nonlinearity is due to either a periodic or an aperiodic function.

Fast Algorithms for Learning Latent Variables in Graphical Models

no code implementations27 Jun 2017 Mohammadreza Soltani, Chinmay Hegde

Existing methods for this problem assume that the precision matrix of the observed variables is the superposition of a sparse and a low-rank component.

Collaborative Deep Learning in Fixed Topology Networks

no code implementations NeurIPS 2017 Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar

There is significant recent interest in parallelizing deep learning algorithms in order to handle the enormous growth in data and model sizes.

Improved Algorithms for Matrix Recovery from Rank-One Projections

no code implementations21 May 2017 Mohammadreza Soltani, Chinmay Hegde

We consider the problem of estimation of a low-rank matrix from a limited number of noisy rank-one projections.

Sample-Efficient Algorithms for Recovering Structured Signals from Magnitude-Only Measurements

1 code implementation18 May 2017 Gauri Jagatap, Chinmay Hegde

For this problem, we design a recovery algorithm Block CoPRAM that further reduces the sample complexity to $O(ks\log n)$.

Iterative Thresholding for Demixing Structured Superpositions in High Dimensions

no code implementations23 Jan 2017 Mohammadreza Soltani, Chinmay Hegde

Specifically, we show that for certain types of structured superposition models, our method provably recovers the components given merely $n = \mathcal{O}(s)$ samples where $s$ denotes the number of nonzero entries in the underlying components.
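
A sketch of the iterative-thresholding scheme in the linear setting, assuming the two components are sparse in known bases `B` and `C` (the paper also handles nonlinear observation models):

```python
import numpy as np

def hard_threshold(u, s):
    out = np.zeros_like(u)
    keep = np.argsort(np.abs(u))[-s:]    # keep s largest-magnitude entries
    out[keep] = u[keep]
    return out

def demix_iht(A, B, C, y, s, steps=100, eta=0.5):
    """Recover y ~= A (B w + C v) with w, v each s-sparse."""
    w, v = np.zeros(B.shape[1]), np.zeros(C.shape[1])
    for _ in range(steps):
        r = A @ (B @ w + C @ v) - y
        w = hard_threshold(w - eta * B.T @ (A.T @ r), s)
        v = hard_threshold(v - eta * C.T @ (A.T @ r), s)
    return w, v
```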

Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps

no code implementations23 Jan 2017 Mohammadreza Soltani, Chinmay Hegde

Random sinusoidal features are a popular approach for speeding up kernel-based inference in large datasets.

Dimensionality Reduction
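
The standard random sinusoidal (Fourier) feature map that sped-up kernel inference refers to; inner products of mapped points approximate a Gaussian kernel $\exp(-\gamma\|x - y\|^2)$:

```python
import numpy as np

def sinusoidal_features(X, D=512, gamma=1.0, seed=0):
    """X: (n, d) -> (n, D) random sinusoidal features."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```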

Fast recovery from a union of subspaces

no code implementations NeurIPS 2016 Chinmay Hegde, Piotr Indyk, Ludwig Schmidt

We address the problem of recovering a high-dimensional but structured vector from linear observations in a general setting where the vector can come from an arbitrary union of subspaces.

Compressive Sensing

Fast Algorithms for Demixing Sparse Signals from Nonlinear Observations

no code implementations3 Aug 2016 Mohammadreza Soltani, Chinmay Hegde

We study the problem of demixing a pair of sparse signals from noisy, nonlinear observations of their superposition.

Efficient Upsampling of Natural Images

no code implementations28 Feb 2015 Chinmay Hegde, Oncel Tuzel, Fatih Porikli

For the edge layer, we use a nonparametric approach: we construct a dictionary of patches from a given image and synthesize edge regions in a higher-resolution version of the image.

Sparse Signal Recovery Using Markov Random Fields

no code implementations NeurIPS 2008 Volkan Cevher, Marco F. Duarte, Chinmay Hegde, Richard Baraniuk

Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals.

Compressive Sensing

Random Projections for Manifold Learning

no code implementations NeurIPS 2007 Chinmay Hegde, Michael Wakin, Richard Baraniuk

First, we show that with a small number $M$ of random projections of sample points in $\mathbb{R}^N$ belonging to an unknown $K$-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy.

Dimensionality Reduction
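
The random-projection step itself is a one-liner: a dense Gaussian matrix maps points from $\mathbb{R}^N$ to $\mathbb{R}^M$ while approximately preserving pairwise distances, so ID estimation can run in the compressed domain:

```python
import numpy as np

def random_project(X, M, seed=0):
    """X: (n, N) sample points -> (n, M) projections, M << N."""
    rng = np.random.default_rng(seed)
    Phi = rng.normal(size=(X.shape[1], M)) / np.sqrt(M)
    return X @ Phi
```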
