
no code implementations • 12 May 2022 • Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde

Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
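
A minimal sketch of the smoothing mechanism, for reference: the smoothed classifier predicts by majority vote under Gaussian noise, and the top class's vote fraction yields a certified $\ell_2$ radius via the standard Gaussian quantile estimate (in the spirit of Cohen et al., 2019). `base_classifier` is a hypothetical label oracle, not an artifact of this paper.

```python
# Sketch of randomized smoothing; `base_classifier` is a hypothetical function
# mapping an input array to an integer class label.
import numpy as np
from scipy.stats import norm

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, n_classes=10):
    """Majority-vote prediction of the Gaussian-smoothed classifier."""
    counts = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)  # isotropic Gaussian noise
        counts[base_classifier(noisy)] += 1
    top = int(counts.argmax())
    p_hat = counts[top] / n_samples      # estimated top-class probability
    radius = sigma * norm.ppf(p_hat)     # certified l2 radius (point estimate)
    return top, max(radius, 0.0)
```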

no code implementations • 4 Feb 2022 • Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde

To reduce private inference (PI) latency, we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
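
One plausible way to make the ReLU-versus-linear choice differentiable, sketched under assumptions (this is not necessarily the paper's parameterization): gate each activation with a learnable coefficient, so that a sparsity penalty on the gates drives most activations to the cheap linear branch.

```python
# A gated activation that interpolates between ReLU (gate near 1) and the
# identity (gate near 0); a sketch of selective linearization, not the
# paper's exact method.
import torch
import torch.nn.functional as F

class GatedReLU(torch.nn.Module):
    def __init__(self, init=2.0):
        super().__init__()
        self.logit = torch.nn.Parameter(torch.tensor(init))  # pre-sigmoid gate

    def forward(self, x):
        g = torch.sigmoid(self.logit)        # gate constrained to (0, 1)
        return g * F.relu(x) + (1 - g) * x   # blend of nonlinear and linear paths
```

Penalizing the summed gate values across layers then trades prediction accuracy against the number of surviving ReLUs.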

1 code implementation • 6 Dec 2021 • Zhanhong Jiang, Xian Yeow Lee, Sin Yong Tan, Kai Liang Tan, Aditya Balu, Young M. Lee, Chinmay Hegde, Soumik Sarkar

We propose a novel policy gradient method for multi-agent reinforcement learning, which leverages two different variance-reduction techniques and does not require large batches over iterations.

Tasks: Multi-agent Reinforcement Learning • Policy Gradient Methods (+2 more)

no code implementations • 8 Oct 2021 • Ameya Joshi, Gauri Jagatap, Chinmay Hegde

Vision transformers rely on a patch token based self attention mechanism, in contrast to convolutional networks.
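
For concreteness, a bare-bones sketch of the patch-token self-attention this contrast refers to: non-overlapping patches become tokens, which are mixed by single-head scaled dot-product attention (the projections `Wq`, `Wk`, `Wv` are assumed given; embeddings and normalization are omitted).

```python
# Minimal patch tokenization plus single-head self-attention, in numpy.
import numpy as np

def patch_tokens(img, p=16):
    """Flatten non-overlapping p x p patches of an (H, W, C) image into tokens."""
    H, W, C = img.shape
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)           # (num_tokens, patch_dim)

def self_attention(tokens, Wq, Wk, Wv):
    """Scaled dot-product attention over patch tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V
```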

no code implementations • NeurIPS 2021 • Minsu Cho, Aditya Balu, Ameya Joshi, Anjana Deva Prasad, Biswajit Khara, Soumik Sarkar, Baskar Ganapathysubramanian, Adarsh Krishnamurthy, Chinmay Hegde

Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.

no code implementations • 4 Oct 2021 • Biswajit Khara, Aditya Balu, Ameya Joshi, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian

We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs).

1 code implementation • NeurIPS 2021 • Jiangyuan Li, Thanh V. Nguyen, Chinmay Hegde, Raymond K. W. Wong

In this paper, we study the implicit bias of gradient descent for sparse regression.

no code implementations • 17 Jun 2021 • Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, Chinmay Hegde

The emergence of deep learning has been accompanied by privacy concerns surrounding users' data and service providers' models.

no code implementations • 13 May 2021 • Viraj Shah, Rakib Hyder, M. Salman Asif, Chinmay Hegde

The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by deep generative networks).

no code implementations • 29 Apr 2021 • Aditya Balu, Sergio Botelho, Biswajit Khara, Vinay Rao, Chinmay Hegde, Soumik Sarkar, Santi Adavani, Adarsh Krishnamurthy, Baskar Ganapathysubramanian

We specifically consider neural solvers for the generalized 3D Poisson equation over megavoxel domains.

no code implementations • 29 Apr 2021 • Anjana Deva Prasad, Aditya Balu, Harshil Shah, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy

These derivatives define an approximate Jacobian that is used to perform the "backward" evaluation when training the deep learning models.

1 code implementation • 2 Mar 2021 • Yasaman Esfandiari, Sin Yong Tan, Zhanhong Jiang, Aditya Balu, Ethan Herron, Chinmay Hegde, Soumik Sarkar

Inspired by ideas from continual learning, we propose Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm where (i) each agent aggregates cross-gradient information, i.e., derivatives of its model with respect to its neighbors' datasets, and (ii) updates its model using a projected gradient based on quadratic programming (QP).
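
A schematic of one such update for a single agent, assuming a hypothetical gradient oracle `grad_fn`; where the paper projects the aggregated cross-gradients via QP, this sketch substitutes a uniform average to keep the structure visible.

```python
# One cross-gradient aggregation step for agent i (simplified sketch).
import numpy as np

def cga_step(theta_i, neighbor_batches, grad_fn, lr=0.01):
    """grad_fn(theta, batch) is a hypothetical oracle returning the gradient of
    agent i's loss at parameters theta, evaluated on a neighbor's batch."""
    cross_grads = [grad_fn(theta_i, batch) for batch in neighbor_batches]
    direction = np.mean(cross_grads, axis=0)  # stand-in for the QP projection
    return theta_i - lr * direction
```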

no code implementations • 25 Feb 2021 • Thanh V. Nguyen, Gauri Jagatap, Chinmay Hegde

Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval and super-resolution.

no code implementations • NeurIPS Workshop LMCA 2020 • Minsu Cho, Ameya Joshi, Xian Yeow Lee, Aditya Balu, Adarsh Krishnamurthy, Baskar Ganapathysubramanian, Soumik Sarkar, Chinmay Hegde

The paradigm of differentiable programming has considerably enhanced the scope of machine learning via the judicious use of gradient-based optimization.

no code implementations • ICML Workshop AML 2021 • Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde

In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks.

no code implementations • 24 Jul 2020 • Sergio Botelho, Ameya Joshi, Biswajit Khara, Soumik Sarkar, Chinmay Hegde, Santi Adavani, Baskar Ganapathysubramanian

Here we report on a software framework for data parallel distributed deep learning that resolves the twin challenges of training these large SciML models - training in reasonable time as well as distributing the storage requirements.

no code implementations • 7 Jul 2020 • Minsu Cho, Mohammadreza Soltani, Chinmay Hegde

In this paper, we study two important problems in the automated design of neural networks -- Hyper-parameter Optimization (HPO), and Neural Architecture Search (NAS) -- through the lens of sparse recovery methods.

1 code implementation • 28 Jun 2020 • Minsu Cho, Ameya Joshi, Chinmay Hegde

Deep neural networks are often highly overparameterized, prohibiting their use in compute-limited systems.

no code implementations • ICLR Workshop DeepDiffEq 2019 • Thanh V. Nguyen, Youssef Mroueh, Samuel Hoffman, Payel Das, Pierre Dognin, Giuseppe Romano, Chinmay Hegde

We consider the problem of optimizing by sampling under multiple black-box constraints in nano-material design.

no code implementations • 27 Nov 2019 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

Starting from a randomly initialized autoencoder network, we rigorously prove the linear convergence of gradient descent in two learning regimes, namely: (i) the weakly-trained regime where only the encoder is trained, and (ii) the jointly-trained regime where both the encoder and the decoder are trained.

no code implementations • 15 Oct 2019 • Zhanhong Jiang, Aditya Balu, Sin Yong Tan, Young M. Lee, Chinmay Hegde, Soumik Sarkar

In this paper, we investigate the popular deep learning optimization routine, Adam, from the perspective of statistical moments.
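
As a reference point for the moments perspective, the textbook Adam update: exponential moving averages estimate the first and second moments of the stochastic gradient, with bias correction before the step.

```python
# One Adam update; m and v track the first and second gradient moments.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentered) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for step t >= 1
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```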

no code implementations • 25 Sep 2019 • Thanh V Nguyen, Youssef Mroueh, Samuel C. Hoffman, Payel Das, Pierre Dognin, Giuseppe Romano, Chinmay Hegde

We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied.

no code implementations • NeurIPS Workshop Deep_Invers 2019 • Gauri Jagatap, Chinmay Hegde

Untrained deep neural networks as image priors have been recently introduced for linear inverse imaging problems such as denoising, super-resolution, inpainting and compressive sensing with promising performance gains over hand-crafted image priors such as sparsity.
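
The untrained-prior recipe in miniature, assuming a small convolutional `net` whose output matches the image shape: fit the network to the corrupted observation from a fixed random code, and rely on early stopping as the regularizer.

```python
# Fitting an *untrained* network to a corrupted image (deep-image-prior style).
# `net` is any small convolutional network (an assumption of this sketch) that
# maps a 32-channel code of the image's spatial size to an image-shaped output.
import torch

def fit_image_prior(net, y_noisy, n_iters=2000, lr=1e-3):
    z = torch.randn(1, 32, y_noisy.shape[-2], y_noisy.shape[-1])  # fixed code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = ((net(z) - y_noisy) ** 2).mean()  # fit the observation
        loss.backward()
        opt.step()
    return net(z).detach()  # stopping early acts as the regularizer
```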

1 code implementation • 5 Sep 2019 • Xian Yeow Lee, Sambit Ghadai, Kai Liang Tan, Chinmay Hegde, Soumik Sarkar

In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent, with decoupled constraints serving as the attack budget.

2 code implementations • NeurIPS 2019 • Gauri Jagatap, Chinmay Hegde

Specifically, we consider the problem of solving linear inverse problems, such as compressive sensing, as well as non-linear problems, such as compressive phase retrieval.

1 code implementation • 7 Jun 2019 • Minsu Cho, Mohammadreza Soltani, Chinmay Hegde

Neural Architecture Search remains a very challenging meta-learning problem.

no code implementations • 4 Jun 2019 • Viraj Shah, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde

Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions.

no code implementations • 24 Apr 2019 • Minsu Cho, Chinmay Hegde

We propose a new algorithm for hyperparameter selection in machine learning algorithms.

1 code implementation • ICCV 2019 • Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde

We propose a novel approach to generate such `semantic' adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model.

no code implementations • 11 Apr 2019 • Chinmay Hegde, Fritz Keinert, Eric S. Weber

We introduce a modified Kaczmarz algorithm for solving systems of linear equations in a distributed environment, i.e., the equations within the system are distributed over multiple nodes within a network.
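
A minimal sketch of this setting, assuming the rows of the system are partitioned into per-node blocks; the consensus step here is plain averaging, which may differ from the paper's modified update.

```python
# Kaczmarz iterations with equations distributed across nodes (sketch).
import numpy as np

def kaczmarz_sweep(A, b, x):
    """One pass of classical Kaczmarz row projections over the local rows."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + (b[i] - a @ x) / (a @ a) * a
    return x

def distributed_kaczmarz(blocks, x0, n_rounds=50):
    """blocks is a list of (A_j, b_j) pairs, one per node."""
    x = x0.copy()
    for _ in range(n_rounds):
        local_estimates = [kaczmarz_sweep(A, b, x) for A, b in blocks]
        x = np.mean(local_estimates, axis=0)  # simple averaging as consensus
    return x
```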

no code implementations • 7 Mar 2019 • Rakib Hyder, Viraj Shah, Chinmay Hegde, M. Salman Asif

We empirically show that the performance of our method with projected gradient descent is superior to the existing approach for solving phase retrieval under generative priors.

1 code implementation • 3 Dec 2018 • Viraj Shah, Chinmay Hegde

We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements).
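
The measurement model, sketched with an arbitrary Gaussian sensing matrix: each linear observation is folded into $[0, R)$ by the modulo operation before being recorded.

```python
# Under-determined modulo observations of a signal x (forward model sketch).
import numpy as np

def modulo_measurements(x, m, R=1.0, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, x.size)) / np.sqrt(m)  # random sensing matrix
    return np.mod(A @ x, R)  # each measurement folded into [0, R)
```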

no code implementations • 21 Nov 2018 • Rahul Singh, Viraj Shah, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde

The first model is a WGAN model that uses a finite number of training images to synthesize new microstructures that weakly satisfy the physical invariances respected by the original data.

no code implementations • 8 Oct 2018 • Chinmay Hegde

The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by generative adversarial networks, or GANs).

no code implementations • 20 Jun 2018 • Gauri Jagatap, Chinmay Hegde

We propose and analyze a new family of algorithms for training neural networks with ReLU activations.

no code implementations • 2 Jun 2018 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

For each of these models, we prove that under suitable choices of hyperparameters, architectures, and initialization, autoencoders learned by gradient descent can successfully recover the parameters of the corresponding model.

no code implementations • 30 May 2018 • Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar

In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality.

no code implementations • ICML 2018 • Thanh V. Nguyen, Akshay Soni, Chinmay Hegde

Second, we propose an initialization algorithm that utilizes a small number of extra fully observed samples to produce such a coarse initial estimate.

1 code implementation • 23 Feb 2018 • Viraj Shah, Chinmay Hegde

In this work, we advocate the idea of replacing hand-crafted priors, such as sparsity, with a Generative Adversarial Network (GAN) to solve linear inverse problems such as compressive sensing.
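
A hedged sketch of this recipe: descend over latent codes so that the generator output agrees with the compressive measurements, constraining the solution to the generator's range. `G` stands in for a hypothetical pretrained generator.

```python
# Compressive-sensing recovery under a GAN prior (sketch): minimize the
# measurement misfit over latent codes z. A is an (m, n) sensing matrix and
# G(z) flattens to length n; both are assumptions of this sketch.
import torch

def recover_with_gan_prior(G, A, y, n_iters=500, lr=0.05, z_dim=100):
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = ((A @ G(z).flatten() - y) ** 2).sum()  # measurement misfit
        loss.backward()
        opt.step()
    return G(z).detach()  # reconstruction lies in the generator's range
```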

no code implementations • 8 Dec 2017 • Mohammadreza Soltani, Chinmay Hegde

In this paper, we provide a novel algorithmic framework that achieves the best of both worlds: asymptotically as fast as factorization methods, while requiring no dependency on the condition number.

no code implementations • NeurIPS 2017 • Gauri Jagatap, Chinmay Hegde

For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to $O(ks \log n)$.

no code implementations • 16 Nov 2017 • Aditya Balu, Thanh V. Nguyen, Apurva Kokate, Chinmay Hegde, Soumik Sarkar

We introduce a new, systematic framework for visualizing information flow in deep networks.

1 code implementation • 9 Nov 2017 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

To our knowledge, our work introduces the first computationally efficient algorithm for double-sparse coding that enjoys rigorous statistical guarantees.

no code implementations • 29 Sep 2017 • Viraj Shah, Mohammadreza Soltani, Chinmay Hegde

We consider the problem of reconstructing signals and images from periodic nonlinearities.

no code implementations • 8 Aug 2017 • Mohammadreza Soltani, Chinmay Hegde

We consider the demixing problem of two (or more) structured high-dimensional vectors from a limited number of nonlinear observations where this nonlinearity is due to either a periodic or an aperiodic function.

no code implementations • 27 Jun 2017 • Mohammadreza Soltani, Chinmay Hegde

Existing methods for this problem assume that the precision matrix of the observed variables is the superposition of a sparse and a low-rank component.

no code implementations • NeurIPS 2017 • Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar

There is significant recent interest in parallelizing deep learning algorithms in order to handle the enormous growth in data and model sizes.

no code implementations • 21 May 2017 • Mohammadreza Soltani, Chinmay Hegde

We consider the problem of estimation of a low-rank matrix from a limited number of noisy rank-one projections.

1 code implementation • 18 May 2017 • Gauri Jagatap, Chinmay Hegde

For this problem, we design a recovery algorithm Block CoPRAM that further reduces the sample complexity to $O(ks\log n)$.
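
To make the setting concrete, a sketch of the measurement model Block CoPRAM targets: magnitude-only Gaussian observations of a signal whose $s$ nonzeros occupy $k$ blocks of equal length; the recovery algorithm itself is in the paper.

```python
# Block-sparse phase retrieval: the forward model, not the recovery method.
import numpy as np

def magnitude_measurements(x, m, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, x.size))
    return np.abs(A @ x)  # phaseless observations y_i = |<a_i, x>|

def block_sparse_signal(n, block_size, k_blocks, seed=1):
    """Signal with s = k_blocks * block_size nonzeros arranged in blocks."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    chosen = rng.choice(n // block_size, size=k_blocks, replace=False)
    for b in chosen:
        x[b * block_size:(b + 1) * block_size] = rng.standard_normal(block_size)
    return x
```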

no code implementations • 23 Jan 2017 • Mohammadreza Soltani, Chinmay Hegde

Specifically, we show that for certain types of structured superposition models, our method provably recovers the components given merely $n = \mathcal{O}(s)$ samples where $s$ denotes the number of nonzero entries in the underlying components.

no code implementations • 23 Jan 2017 • Mohammadreza Soltani, Chinmay Hegde

Random sinusoidal features are a popular approach for speeding up kernel-based inference in large datasets.
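
For context, the standard construction (Rahimi & Recht) that such features build on: random sinusoids whose inner products approximate a Gaussian kernel $k(x, y) = \exp(-\gamma \|x - y\|^2)$.

```python
# Random Fourier (sinusoidal) features for the Gaussian kernel.
import numpy as np

def random_fourier_features(X, n_features=256, gamma=1.0, seed=0):
    """Map X (n_samples, d) so that feature inner products approximate k."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```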

no code implementations • NeurIPS 2016 • Chinmay Hegde, Piotr Indyk, Ludwig Schmidt

We address the problem of recovering a high-dimensional but structured vector from linear observations in a general setting where the vector can come from an arbitrary union of subspaces.

no code implementations • 3 Aug 2016 • Mohammadreza Soltani, Chinmay Hegde

We study the problem of demixing a pair of sparse signals from noisy, nonlinear observations of their superposition.

no code implementations • 28 Feb 2015 • Chinmay Hegde, Oncel Tuzel, Fatih Porikli

1) For the edge layer, we use a nonparametric approach by constructing a dictionary of patches from a given image, and synthesize edge regions in a higher-resolution version of the image.

no code implementations • NeurIPS 2008 • Volkan Cevher, Marco F. Duarte, Chinmay Hegde, Richard Baraniuk

Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals.
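
A compact illustration of the CS pipeline, with plain ISTA as a stand-in sparse-recovery solver (the paper itself develops structured, model-based sparsity beyond this).

```python
# Recover a sparse x from y = Phi @ x via ISTA (iterative soft thresholding).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.05, n_iters=300):
    L = np.linalg.norm(Phi, 2) ** 2  # Lipschitz constant of the data gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + Phi.T @ (y - Phi @ x) / L, lam / L)
    return x
```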

no code implementations • NeurIPS 2007 • Chinmay Hegde, Michael Wakin, Richard Baraniuk

First, we show that with a small number $M$ of *random projections* of sample points in $\mathbb{R}^N$ belonging to an unknown $K$-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy.
