no code implementations • 5 Dec 2024 • Kasra Arabi, Benjamin Feuer, R. Teal Witter, Chinmay Hegde, Niv Cohen
For detection, we (i) retrieve the relevant group of noises, and (ii) search within the given group for an initial noise that might match our image.
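A minimal sketch of the two-stage detection procedure described above, assuming the image has already been inverted back to an estimate of its initial diffusion noise (the inversion routine itself, the group layout, and the similarity threshold are all illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def detect(noise_estimate, noise_groups, threshold=0.9):
    """Stage (i): retrieve the most plausible group; stage (ii): search it.

    noise_groups: list of arrays, each of shape (num_noises, dim), holding
    the pre-generated initial noises belonging to one group.
    """
    e = noise_estimate / np.linalg.norm(noise_estimate)
    # Stage (i): pick the group whose (normalized) centroid best matches.
    centroids = [g.mean(axis=0) / np.linalg.norm(g.mean(axis=0))
                 for g in noise_groups]
    group_idx = int(np.argmax([c @ e for c in centroids]))
    # Stage (ii): search within that group for a matching initial noise.
    group = noise_groups[group_idx]
    sims = (group / np.linalg.norm(group, axis=1, keepdims=True)) @ e
    best = int(np.argmax(sims))
    return (group_idx, best) if sims[best] >= threshold else None
```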
no code implementations • 2 Dec 2024 • Patrick Yubeaton, Jianqiao Cambridge Mo, Karthik Garimella, Nandan Kumar Jha, Brandon Reagen, Chinmay Hegde, Siddharth Garg
Private inference (PI) serves an important role in guaranteeing the privacy of user data when interfacing with proprietary machine learning models such as LLMs.
1 code implementation • 7 Oct 2024 • Benjamin Feuer, Jiawei Xu, Niv Cohen, Patrick Yubeaton, Govind Mittal, Chinmay Hegde
In this work, we take steps towards a formal evaluation of data curation strategies and introduce SELECT, the first large-scale benchmark of curation strategies for image classification.
no code implementations • 26 Sep 2024 • Ronak Tali, Ali Rabeh, Cheng-Hau Yang, Mehdi Shadkhah, Samundra Karki, Abhisek Upadhyaya, Suriya Dhakshinamoorthy, Marjan Saadati, Soumik Sarkar, Adarsh Krishnamurthy, Chinmay Hegde, Aditya Balu, Baskar Ganapathysubramanian
FlowBench contains over 10K data samples, each the outcome of a fully resolved direct numerical simulation using a well-validated simulator framework designed for modeling transport phenomena in complex geometries.
1 code implementation • 3 Sep 2024 • Sanjita Prajapati, Tanu Singh, Chinmay Hegde, Pranamesh Chakraborty
In this study, we explore state-of-the-art VLMs for vision-based transportation engineering tasks such as image classification and object detection.
no code implementations • 29 Jul 2024 • Muhammad Arbab Arshad, Talukder Zaki Jubery, Tirtho Roy, Rim Nassiri, Asheesh K. Singh, Arti Singh, Chinmay Hegde, Baskar Ganapathysubramanian, Aditya Balu, Adarsh Krishnamurthy, Soumik Sarkar
Plant stress phenotyping traditionally relies on expert assessments and specialized models, limiting scalability in agriculture.
no code implementations • 4 Jul 2024 • Anushrut Jignasu, Kelly O. Marshall, Ankush Kumar Mishra, Lucas Nerone Rillo, Baskar Ganapathysubramanian, Aditya Balu, Chinmay Hegde, Adarsh Krishnamurthy
G-code (Geometric code) or RS-274 is the most widely used computer numerical control (CNC) and 3D printing programming language.
1 code implementation • 27 Jun 2024 • Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Ben Feuer, Siddhartha Jain, Ravid Shwartz-Ziv, Neel Jain, Khalid Saifullah, Siddartha Naidu, Chinmay Hegde, Yann LeCun, Tom Goldstein, Willie Neiswanger, Micah Goldblum
In this work, we introduce a new benchmark for LLMs designed to be immune to both test set contamination and the pitfalls of LLM judging and human crowdsourcing.
1 code implementation • 25 Jun 2024 • Chih-Hsuan Yang, Benjamin Feuer, Zaki Jubery, Zi K. Deng, Andre Nakkab, Md Zahid Hasan, Shivani Chiranjeevi, Kelly Marshall, Nirmal Baishnab, Asheesh K Singh, Arti Singh, Soumik Sarkar, Nirav Merchant, Chinmay Hegde, Baskar Ganapathysubramanian
We introduce Arboretum, the largest publicly accessible dataset designed to advance AI for biodiversity applications.
no code implementations • 15 May 2024 • Aarshvi Gajjar, Wai Ming Tai, Xingyu Xu, Chinmay Hegde, Yi Li, Christopher Musco
We provide two main results on agnostic active learning of single index models.
1 code implementation • CVPR 2024 • Nastaran Saadati, Minh Pham, Nasla Saleem, Joshua R. Waite, Aditya Balu, Zhanhong Jiang, Chinmay Hegde, Soumik Sarkar
This DIMAT paradigm presents a new opportunity for future decentralized learning, enhancing its adaptability to real-world settings with sparse and lightweight communication and computation.
no code implementations • 4 Apr 2024 • Minh Pham, Kelly O. Marshall, Chinmay Hegde, Niv Cohen
Finally, we show that Diverse Inversion enables us to apply a TV edit only to a subset of the model weights, enhancing the erasure capabilities while better maintaining the core functionality of the model.
no code implementations • 12 Mar 2024 • Sudipta Banerjee, Sai Pranaswi Mullangi, Shruti Wagle, Chinmay Hegde, Nasir Memon
To mitigate this issue, we propose two novel techniques for local and global attribute editing.
2 code implementations • 17 Feb 2024 • Benjamin Feuer, Robin Tibor Schirrmeister, Valeriia Cherepanova, Chinmay Hegde, Frank Hutter, Micah Goldblum, Niv Cohen, Colin White
Notably, TabPFN achieves very strong performance on small tabular datasets but is not designed to make predictions for datasets with more than 1000 samples.
no code implementations • 17 Nov 2023 • Benjamin Feuer, Chinmay Hegde, Niv Cohen
Tabular classification has traditionally relied on supervised algorithms, which estimate the parameters of a prediction model using its training data.
no code implementations • 7 Nov 2023 • Benjamin Feuer, Chinmay Hegde
Modern computer vision foundation models are trained on massive amounts of data, incurring large economic and environmental costs.
1 code implementation • 27 Oct 2023 • Benjamin Feuer, Yurong Liu, Chinmay Hegde, Juliana Freire
We introduce ArcheType, a simple, practical method for context sampling, prompt serialization, model querying, and label remapping, which enables large language models to solve CTA problems in a fully zero-shot manner.
Ranked #1 on Column Type Annotation on WDC SOTAB (Weighted F1 metric)
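A hedged sketch of the zero-shot column-type-annotation loop the abstract outlines: sample cell values for context, serialize a prompt, query an LLM, and remap its free-text answer onto the allowed label set. The label set and `query_llm` are placeholders, not ArcheType's actual interface:

```python
import random

LABELS = ["name", "address", "phone", "date", "price"]  # illustrative label set

def annotate_column(values, k=5):
    context = random.sample(values, min(k, len(values)))   # context sampling
    prompt = (
        "Column values: " + "; ".join(map(str, context)) +
        "\nAnswer with one of: " + ", ".join(LABELS)       # prompt serialization
    )
    answer = query_llm(prompt).strip().lower()             # model querying
    # Label remapping: snap a free-text answer onto the allowed label set.
    return answer if answer in LABELS else next(
        (l for l in LABELS if l in answer), LABELS[0])

def query_llm(prompt):
    raise NotImplementedError("stand-in for an actual LLM call")
```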
1 code implementation • 6 Oct 2023 • Naren Dhyani, Jianqiao Mo, Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde
The Vision Transformer (ViT) architecture has emerged as the backbone of choice for state-of-the-art deep models for computer vision applications.
no code implementations • 11 Sep 2023 • Feyza Duman Keles, Chinmay Hegde
This paper aims to provide a fine-grained view of the landscape of computational hardness for this problem.
1 code implementation • 4 Sep 2023 • Anushrut Jignasu, Kelly Marshall, Baskar Ganapathysubramanian, Aditya Balu, Chinmay Hegde, Adarsh Krishnamurthy
3D printing or additive manufacturing is a revolutionary technology that enables the creation of physical objects from digital models.
1 code implementation • 7 Aug 2023 • Benjamin Feuer, Ameya Joshi, Minh Pham, Chinmay Hegde
To our knowledge, this is the first result showing (near) state-of-the-art distributional robustness on limited data budgets.
1 code implementation • 3 Aug 2023 • Minh Pham, Kelly O. Marshall, Niv Cohen, Govind Mittal, Chinmay Hegde
Text-to-image generative models can produce photo-realistic images for an extremely broad range of concepts, and their usage has proliferated widely among the general public.
1 code implementation • 17 Jul 2023 • Sudipta Banerjee, Govind Mittal, Ameya Joshi, Chinmay Hegde, Nasir Memon
The performance of automated face recognition systems is inevitably impacted by the facial aging process.
1 code implementation • 16 Jun 2023 • Md Zahid Hasan, Jiajing Chen, Jiyang Wang, Mohammed Shaiqur Rahman, Ameya Joshi, Senem Velipasalar, Chinmay Hegde, Anuj Sharma, Soumik Sarkar
Our results show that this framework offers state-of-the-art performance on zero-shot transfer and on video-based CLIP prediction of the driver's state on two public datasets.
1 code implementation • 14 Jun 2023 • Kelly O. Marshall, Minh Pham, Ameya Joshi, Anushrut Jignasu, Aditya Balu, Adarsh Krishnamurthy, Chinmay Hegde
Current state-of-the-art methods for text-to-shape generation either require supervised training using a labeled dataset of pre-defined 3D shapes, or perform expensive inference-time optimization of implicit neural representations.
2 code implementations • NeurIPS 2023 • Duncan McElfresh, Sujay Khandagale, Jonathan Valverde, Vishak Prasad C, Benjamin Feuer, Chinmay Hegde, Ganesh Ramakrishnan, Micah Goldblum, Colin White
To this end, we conduct the largest tabular data analysis to date, comparing 19 algorithms across 176 datasets, and we find that the 'NN vs. GBDT' debate is overemphasized: for a surprisingly high number of datasets, either the performance difference between GBDTs and NNs is negligible, or light hyperparameter tuning on a GBDT is more important than choosing between NNs and GBDTs.
1 code implementation • 12 Feb 2023 • Andre Nakkab, Benjamin Feuer, Chinmay Hegde
Recent advances in training vision-language models have demonstrated unprecedented robustness and transfer learning effectiveness; however, standard computer vision datasets are image-only, and therefore not well adapted to such training methods.
1 code implementation • 29 Jan 2023 • Jiangyuan Li, Thanh V. Nguyen, Chinmay Hegde, Raymond K. W. Wong
We study the implicit regularization of gradient descent towards structured sparsity via a novel neural reparameterization, which we call a diagonally grouped linear neural network.
no code implementations • 17 Jan 2023 • Sam Earle, Ozlem Yildiz, Julian Togelius, Chinmay Hegde
As a step toward developing such networks, we hand-code and learn models for Breadth-First Search (BFS), i.e., shortest path finding, using the unified architectural framework of Neural Cellular Automata, which are iterative neural networks with equal-size inputs and outputs.
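A minimal NumPy sketch of the "hand-coded" flavor of this idea: BFS distance propagation written as an iterative, equal-size-in/equal-size-out grid update, the same local computation a Neural Cellular Automaton would be trained to reproduce (the 4-neighborhood update rule here is an illustrative choice):

```python
import numpy as np

def bfs_distances(free, source, max_iters=None):
    """free: boolean grid of traversable cells; source: (row, col) index."""
    INF = np.inf
    dist = np.full(free.shape, INF)
    dist[source] = 0
    max_iters = max_iters or free.size
    for _ in range(max_iters):
        # Each cell looks at its 4-neighborhood: dist = min(neighbors) + 1.
        padded = np.pad(dist, 1, constant_values=INF)
        neighbor_min = np.minimum.reduce([
            padded[:-2, 1:-1], padded[2:, 1:-1],
            padded[1:-1, :-2], padded[1:-1, 2:]])
        new = np.where(free, np.minimum(dist, neighbor_min + 1), INF)
        if np.array_equal(new, dist):  # fixed point reached: BFS is done
            break
        dist = new
    return dist
```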
no code implementations • 7 Nov 2022 • Biswajit Khara, Ethan Herron, Zhanhong Jiang, Aditya Balu, Chih-Hsuan Yang, Kumar Saurabh, Anushrut Jignasu, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian
Neural network-based approaches for solving partial differential equations (PDEs) have recently received special attention.
no code implementations • 24 Oct 2022 • Aarshvi Gajjar, Chinmay Hegde, Christopher Musco
Namely, we can collect samples via statistical \emph{leverage score sampling}, which has been shown to be near-optimal in other active learning scenarios.
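A short sketch of statistical leverage score sampling, the query strategy named above: rows with high leverage (influence on the least-squares fit) are selected for labeling with proportionally higher probability. The sampling-without-replacement choice is an illustrative simplification:

```python
import numpy as np

def leverage_scores(X):
    # Leverage of row i is ||U_i||^2, where X = U S V^T is the thin SVD.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U ** 2, axis=1)

def sample_rows(X, m, rng=np.random.default_rng(0)):
    p = leverage_scores(X)
    p = p / p.sum()
    return rng.choice(len(X), size=m, replace=False, p=p)
```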
1 code implementation • 13 Oct 2022 • Benjamin Feuer, Ameya Joshi, Chinmay Hegde
Vision language (VL) models like CLIP are robust to natural distribution shifts, in part because CLIP learns on unstructured data using a technique called caption supervision; the model interprets image-linked texts as ground-truth labels.
1 code implementation • 12 Oct 2022 • Govind Mittal, Chinmay Hegde, Nasir Memon
With the rise of AI-enabled Real-Time Deepfakes (RTDFs), the integrity of online video interactions has become a growing concern.
no code implementations • 21 Sep 2022 • Zhanhong Jiang, Aditya Balu, Xian Yeow Lee, Young M. Lee, Chinmay Hegde, Soumik Sarkar
To address these two issues, we propose a novel composite regret with a new network regret-based metric to evaluate distributed online optimization algorithms.
no code implementations • 11 Sep 2022 • Feyza Duman Keles, Pruthuvi Mahesakya Wijewardena, Chinmay Hegde
Transformer architectures have led to remarkable progress in many state-of-the-art applications.
no code implementations • 17 Jun 2022 • Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde
We first show that even with a highly accurate teacher, self-distillation allows a student to surpass the teacher in all cases.
no code implementations • 15 Jun 2022 • Benjamin Feuer, Ameya Joshi, Chinmay Hegde
State-of-the-art image classifiers trained on massive datasets (such as ImageNet) have been shown to be vulnerable to a range of both intentional and incidental distribution shifts.
no code implementations • 12 May 2022 • Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde
Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
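A compact sketch of the standard randomized smoothing prediction rule this line of work builds on: classify many Gaussian-noised copies of the input and return the majority vote, the quantity whose margin yields certified robustness radii. `base_classifier` is assumed to map a batch of inputs to class labels:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, num_classes=10,
                     rng=np.random.default_rng(0)):
    # Perturb the input with n independent Gaussian noise draws.
    noisy = x[None, :] + sigma * rng.standard_normal((n,) + x.shape)
    # Majority vote over the base classifier's predictions on the noisy copies.
    votes = np.bincount(base_classifier(noisy), minlength=num_classes)
    return int(votes.argmax())
```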
1 code implementation • 4 Feb 2022 • Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde
To reduce PI latency we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
1 code implementation • 6 Dec 2021 • Zhanhong Jiang, Xian Yeow Lee, Sin Yong Tan, Kai Liang Tan, Aditya Balu, Young M. Lee, Chinmay Hegde, Soumik Sarkar
We propose a novel policy gradient method for multi-agent reinforcement learning, which leverages two different variance-reduction techniques and does not require large batches over iterations.
no code implementations • 8 Oct 2021 • Ameya Joshi, Gauri Jagatap, Chinmay Hegde
Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks.
no code implementations • NeurIPS 2021 • Minsu Cho, Aditya Balu, Ameya Joshi, Anjana Deva Prasad, Biswajit Khara, Soumik Sarkar, Baskar Ganapathysubramanian, Adarsh Krishnamurthy, Chinmay Hegde
Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.
no code implementations • 4 Oct 2021 • Biswajit Khara, Aditya Balu, Ameya Joshi, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian
We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs).
1 code implementation • NeurIPS 2021 • Jiangyuan Li, Thanh V. Nguyen, Chinmay Hegde, Raymond K. W. Wong
In this paper, we study the implicit bias of gradient descent for sparse regression.
no code implementations • 17 Jun 2021 • Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, Chinmay Hegde
The emergence of deep learning has been accompanied by privacy concerns surrounding users' data and service providers' models.
no code implementations • 13 May 2021 • Viraj Shah, Rakib Hyder, M. Salman Asif, Chinmay Hegde
The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by deep generative networks).
no code implementations • 29 Apr 2021 • Anjana Deva Prasad, Aditya Balu, Harshil Shah, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy
These derivatives define an approximate Jacobian for performing the "backward" evaluation to train the deep learning models.
no code implementations • 29 Apr 2021 • Aditya Balu, Sergio Botelho, Biswajit Khara, Vinay Rao, Chinmay Hegde, Soumik Sarkar, Santi Adavani, Adarsh Krishnamurthy, Baskar Ganapathysubramanian
We specifically consider neural solvers for the generalized 3D Poisson equation over megavoxel domains.
1 code implementation • 2 Mar 2021 • Yasaman Esfandiari, Sin Yong Tan, Zhanhong Jiang, Aditya Balu, Ethan Herron, Chinmay Hegde, Soumik Sarkar
Inspired by ideas from continual learning, we propose Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm where (i) each agent aggregates cross-gradient information, i.e., derivatives of its model with respect to its neighbors' datasets, and (ii) updates its model using a projected gradient based on quadratic programming (QP).
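A deliberately simplified sketch of the cross-gradient data flow: each agent collects derivatives of its own model evaluated on every neighbor's data and combines them before stepping. Plain averaging stands in for the paper's QP-based projection, so this illustrates the information flow only, not the CGA algorithm itself:

```python
import numpy as np

def cga_like_step(weights, neighbor_datasets, grad_fn, lr=0.1):
    """grad_fn(w, data) -> gradient of this agent's loss on `data` at `w`."""
    # Cross-gradients: same model, differentiated against each neighbor's data.
    cross_grads = [grad_fn(weights, data) for data in neighbor_datasets]
    agg = np.mean(cross_grads, axis=0)  # the paper uses a QP projection here
    return weights - lr * agg
```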
no code implementations • 25 Feb 2021 • Thanh V. Nguyen, Gauri Jagatap, Chinmay Hegde
Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval and super-resolution.
no code implementations • NeurIPS Workshop LMCA 2020 • Minsu Cho, Ameya Joshi, Xian Yeow Lee, Aditya Balu, Adarsh Krishnamurthy, Baskar Ganapathysubramanian, Soumik Sarkar, Chinmay Hegde
The paradigm of differentiable programming has considerably enhanced the scope of machine learning via the judicious use of gradient-based optimization.
no code implementations • ICML Workshop AML 2021 • Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde
In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks.
no code implementations • 24 Jul 2020 • Sergio Botelho, Ameya Joshi, Biswajit Khara, Soumik Sarkar, Chinmay Hegde, Santi Adavani, Baskar Ganapathysubramanian
Here we report on a software framework for data-parallel distributed deep learning that resolves the twin challenges of training these large SciML models: training in reasonable time and distributing the storage requirements.
no code implementations • 7 Jul 2020 • Minsu Cho, Mohammadreza Soltani, Chinmay Hegde
In this paper, we study two important problems in the automated design of neural networks -- Hyper-parameter Optimization (HPO), and Neural Architecture Search (NAS) -- through the lens of sparse recovery methods.
1 code implementation • 28 Jun 2020 • Minsu Cho, Ameya Joshi, Chinmay Hegde
Deep neural networks are often highly overparameterized, prohibiting their use in compute-limited systems.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Thanh V. Nguyen, Youssef Mroueh, Samuel Hoffman, Payel Das, Pierre Dognin, Giuseppe Romano, Chinmay Hegde
We consider the problem of optimizing by sampling under multiple black-box constraints in nano-material design.
no code implementations • 27 Nov 2019 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde
Starting from a randomly initialized autoencoder network, we rigorously prove the linear convergence of gradient descent in two learning regimes, namely: (i) the weakly-trained regime where only the encoder is trained, and (ii) the jointly-trained regime where both the encoder and the decoder are trained.
no code implementations • 15 Oct 2019 • Zhanhong Jiang, Aditya Balu, Sin Yong Tan, Young M. Lee, Chinmay Hegde, Soumik Sarkar
In this paper, we investigate the popular deep learning optimization routine, Adam, from the perspective of statistical moments.
no code implementations • 25 Sep 2019 • Thanh V Nguyen, Youssef Mroueh, Samuel C. Hoffman, Payel Das, Pierre Dognin, Giuseppe Romano, Chinmay Hegde
We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied.
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Gauri Jagatap, Chinmay Hegde
Untrained deep neural networks as image priors have been recently introduced for linear inverse imaging problems such as denoising, super-resolution, inpainting and compressive sensing with promising performance gains over hand-crafted image priors such as sparsity.
1 code implementation • 5 Sep 2019 • Xian Yeow Lee, Sambit Ghadai, Kai Liang Tan, Chinmay Hegde, Soumik Sarkar
In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent with decoupled constraints as the budget of attack.
2 code implementations • NeurIPS 2019 • Gauri Jagatap, Chinmay Hegde
Specifically, we consider the problem of solving linear inverse problems, such as compressive sensing, as well as non-linear problems, such as compressive phase retrieval.
1 code implementation • 7 Jun 2019 • Minsu Cho, Mohammadreza Soltani, Chinmay Hegde
Neural Architecture Search remains a very challenging meta-learning problem.
no code implementations • 4 Jun 2019 • Viraj Shah, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde
Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions.
no code implementations • 24 Apr 2019 • Minsu Cho, Chinmay Hegde
We propose a new algorithm for hyperparameter selection in machine learning algorithms.
1 code implementation • ICCV 2019 • Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde
We propose a novel approach to generate such `semantic' adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model.
no code implementations • 11 Apr 2019 • Chinmay Hegde, Fritz Keinert, Eric S. Weber
We introduce a modified Kaczmarz algorithm for solving systems of linear equations in a distributed environment, i.e., the equations within the system are distributed over multiple nodes within a network.
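A minimal sketch of the classical Kaczmarz update underlying the distributed variant described above: each step projects the current iterate onto the hyperplane of one equation $a_i^T x = b_i$. Rows are swept cyclically here; in the distributed setting the rows live on different nodes:

```python
import numpy as np

def kaczmarz(A, b, iters=1000):
    x = np.zeros(A.shape[1])
    for k in range(iters):
        i = k % A.shape[0]          # cycle through the equations
        a = A[i]
        # Project x onto the hyperplane {x : a^T x = b_i}.
        x = x + (b[i] - a @ x) / (a @ a) * a
    return x
```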
no code implementations • 7 Mar 2019 • Rakib Hyder, Viraj Shah, Chinmay Hegde, M. Salman Asif
We empirically show that the performance of our method with projected gradient descent is superior to the existing approach for solving phase retrieval under generative priors.
1 code implementation • 3 Dec 2018 • Viraj Shah, Chinmay Hegde
We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements).
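A small sketch of the measurement model in question: each linear observation is folded by a modulo nonlinearity, so recovery must undo both the fold and the under-determined linear map. The dimensions, modulus, and Gaussian sensing matrix below are illustrative assumptions:

```python
import numpy as np

def modulo_measurements(x, A, R=1.0):
    """y_i = mod(<a_i, x>, R): folded, under-determined observations."""
    return np.mod(A @ x, R)

rng = np.random.default_rng(0)
n, m, s = 100, 40, 5                      # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = modulo_measurements(x, rng.standard_normal((m, n)) / np.sqrt(m))
```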
no code implementations • 21 Nov 2018 • Rahul Singh, Viraj Shah, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde
The first model is a WGAN model that uses a finite number of training images to synthesize new microstructures that weakly satisfy the physical invariances respected by the original data.
no code implementations • 8 Oct 2018 • Chinmay Hegde
The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by generative adversarial networks, or GANs).
no code implementations • 20 Jun 2018 • Gauri Jagatap, Chinmay Hegde
We propose and analyze a new family of algorithms for training neural networks with ReLU activations.
no code implementations • 2 Jun 2018 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde
For each of these models, we prove that under suitable choices of hyperparameters, architectures, and initialization, autoencoders learned by gradient descent can successfully recover the parameters of the corresponding model.
no code implementations • 30 May 2018 • Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar
In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality.
no code implementations • ICML 2018 • Thanh V. Nguyen, Akshay Soni, Chinmay Hegde
Second, we propose an initialization algorithm that utilizes a small number of extra fully observed samples to produce such a coarse initial estimate.
1 code implementation • 23 Feb 2018 • Viraj Shah, Chinmay Hegde
In this work, we advocate the idea of replacing hand-crafted priors, such as sparsity, with a Generative Adversarial Network (GAN) to solve linear inverse problems such as compressive sensing.
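A hedged sketch of the "GAN as prior" recipe this abstract describes for compressive sensing: freeze a pretrained generator G and search its latent space so that A G(z) matches the measurements y. The generator, latent dimension, and optimizer settings are stand-ins; any pretrained GAN generator with a latent-vector input fits this template:

```python
import torch

def recover_with_gan_prior(G, A, y, latent_dim=64, steps=500, lr=1e-2):
    """Minimize ||A G(z) - y||^2 over the latent code z, with G frozen."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((A @ G(z).flatten() - y) ** 2).sum()
        loss.backward()
        opt.step()
    return G(z).detach()  # the recovered signal lives in the range of G
```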
no code implementations • 8 Dec 2017 • Mohammadreza Soltani, Chinmay Hegde
In this paper, we provide a novel algorithmic framework that achieves the best of both worlds: asymptotically as fast as factorization methods, while requiring no dependency on the condition number.
no code implementations • NeurIPS 2017 • Gauri Jagatap, Chinmay Hegde
For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to O(ks log n).
no code implementations • 16 Nov 2017 • Aditya Balu, Thanh V. Nguyen, Apurva Kokate, Chinmay Hegde, Soumik Sarkar
We introduce a new, systematic framework for visualizing information flow in deep networks.
1 code implementation • 9 Nov 2017 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde
To our knowledge, our work introduces the first computationally efficient algorithm for double-sparse coding that enjoys rigorous statistical guarantees.
no code implementations • 29 Sep 2017 • Viraj Shah, Mohammadreza Soltani, Chinmay Hegde
We consider the problem of reconstructing signals and images from periodic nonlinearities.
no code implementations • 8 Aug 2017 • Mohammadreza Soltani, Chinmay Hegde
We consider the problem of demixing two (or more) structured high-dimensional vectors from a limited number of nonlinear observations, where the nonlinearity is due to either a periodic or an aperiodic function.
no code implementations • 27 Jun 2017 • Mohammadreza Soltani, Chinmay Hegde
Existing methods for this problem assume that the precision matrix of the observed variables is the superposition of a sparse and a low-rank component.
no code implementations • NeurIPS 2017 • Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar
There is significant recent interest to parallelize deep learning algorithms in order to handle the enormous growth in data and model sizes.
no code implementations • 21 May 2017 • Mohammadreza Soltani, Chinmay Hegde
We consider the problem of estimation of a low-rank matrix from a limited number of noisy rank-one projections.
1 code implementation • 18 May 2017 • Gauri Jagatap, Chinmay Hegde
For this problem, we design a recovery algorithm Block CoPRAM that further reduces the sample complexity to $O(ks\log n)$.
no code implementations • 23 Jan 2017 • Mohammadreza Soltani, Chinmay Hegde
Specifically, we show that for certain types of structured superposition models, our method provably recovers the components given merely $n = \mathcal{O}(s)$ samples where $s$ denotes the number of nonzero entries in the underlying components.
no code implementations • 23 Jan 2017 • Mohammadreza Soltani, Chinmay Hegde
Random sinusoidal features are a popular approach for speeding up kernel-based inference in large datasets.
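The standard random sinusoidal (Fourier) feature map this paper studies, sketched below: the feature map z(x) approximates the Gaussian kernel, since the expected inner product of z(x) and z(y) converges to exp(-||x - y||^2 / (2 sigma^2)) when W is drawn from the matching Gaussian:

```python
import numpy as np

def random_fourier_features(X, D=256, sigma=1.0, rng=np.random.default_rng(0)):
    d = X.shape[1]
    W = rng.standard_normal((d, D)) / sigma     # random projection directions
    b = rng.uniform(0, 2 * np.pi, size=D)       # random phase offsets
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```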
no code implementations • NeurIPS 2016 • Chinmay Hegde, Piotr Indyk, Ludwig Schmidt
We address the problem of recovering a high-dimensional but structured vector from linear observations in a general setting where the vector can come from an arbitrary union of subspaces.
no code implementations • 3 Aug 2016 • Mohammadreza Soltani, Chinmay Hegde
We study the problem of demixing a pair of sparse signals from noisy, nonlinear observations of their superposition.
no code implementations • 28 Feb 2015 • Chinmay Hegde, Oncel Tuzel, Fatih Porikli
1) For the edge layer, we use a nonparametric approach by constructing a dictionary of patches from a given image, and synthesize edge regions in a higher-resolution version of the image.
no code implementations • NeurIPS 2008 • Volkan Cevher, Marco F. Duarte, Chinmay Hegde, Richard Baraniuk
Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals.
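A minimal sparse-recovery sketch for the measurement model the abstract names: observe y = Phi x for sparse x with m << n, then recover x by iterative soft-thresholding (ISTA) on the LASSO objective. ISTA is one standard recovery method, not the model-based algorithm this particular paper develops:

```python
import numpy as np

def ista(Phi, y, lam=0.05, iters=500):
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x - Phi.T @ (Phi @ x - y) / L    # gradient step on ||Phi x - y||^2
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft-threshold
    return x
```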
no code implementations • NeurIPS 2007 • Chinmay Hegde, Michael Wakin, Richard Baraniuk
First, we show that with a small number $M$ of {\em random projections} of sample points in $\mathbb{R}^N$ belonging to an unknown $K$-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy.