Search Results for author: Chinmay Hegde

Found 93 papers, 32 papers with code

Hidden in the Noise: Two-Stage Robust Watermarking for Images

no code implementations 5 Dec 2024 Kasra Arabi, Benjamin Feuer, R. Teal Witter, Chinmay Hegde, Niv Cohen

For detection, we (i) retrieve the relevant group of noises, and (ii) search within the given group for an initial noise that might match our image.
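
A minimal sketch of this two-stage lookup, with hypothetical registered noise groups and a cosine-similarity test standing in for the paper's actual detection statistic:

```python
# Hypothetical sketch: the registry layout, threshold, and similarity
# test are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Setup: G groups, each holding K registered initial noises of dim D.
G, K, D = 4, 8, 256
groups = [rng.standard_normal((K, D)) for _ in range(G)]

def detect(recovered_noise, groups, threshold=0.5):
    """(i) retrieve the most plausible group, (ii) search it for a match."""
    def best_match(noises):
        sims = noises @ recovered_noise / (
            np.linalg.norm(noises, axis=1) * np.linalg.norm(recovered_noise))
        return sims.max(), sims.argmax()

    # (i) score each group by its best similarity to the recovered noise
    scores = [best_match(g) for g in groups]
    g_idx = int(np.argmax([s for s, _ in scores]))

    # (ii) within the retrieved group, accept the best candidate only if
    # it clears a similarity threshold (a stand-in for the paper's test)
    sim, n_idx = scores[g_idx]
    return (g_idx, int(n_idx)) if sim >= threshold else None

# A watermarked image should yield a noise close to a registered one.
query = groups[2][5] + 0.1 * rng.standard_normal(D)
print(detect(query, groups))   # expected: (2, 5)
```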

TruncFormer: Private LLM Inference Using Only Truncations

no code implementations 2 Dec 2024 Patrick Yubeaton, Jianqiao Cambridge Mo, Karthik Garimella, Nandan Kumar Jha, Brandon Reagen, Chinmay Hegde, Siddharth Garg

Private inference (PI) serves an important role in guaranteeing the privacy of user data when interfacing with proprietary machine learning models such as LLMs.

SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification

1 code implementation 7 Oct 2024 Benjamin Feuer, Jiawei Xu, Niv Cohen, Patrick Yubeaton, Govind Mittal, Chinmay Hegde

In this work, we take steps towards a formal evaluation of data curation strategies and introduce SELECT, the first large-scale benchmark of curation strategies for image classification.

Image Classification · Synthetic Data Generation

FlowBench: A Large Scale Benchmark for Flow Simulation over Complex Geometries

no code implementations 26 Sep 2024 Ronak Tali, Ali Rabeh, Cheng-Hau Yang, Mehdi Shadkhah, Samundra Karki, Abhisek Upadhyaya, Suriya Dhakshinamoorthy, Marjan Saadati, Soumik Sarkar, Adarsh Krishnamurthy, Chinmay Hegde, Aditya Balu, Baskar Ganapathysubramanian

FlowBench contains over 10K data samples, with each sample being the outcome of a fully resolved, direct numerical simulation using a well-validated simulator framework designed for modeling transport phenomena in complex geometries.

Evaluation and Comparison of Visual Language Models for Transportation Engineering Problems

1 code implementation 3 Sep 2024 Sanjita Prajapati, Tanu Singh, Chinmay Hegde, Pranamesh Chakraborty

In this study, we have explored state-of-the-art VLM models for vision-based transportation engineering tasks such as image classification and object detection.

Image Classification · object-detection · +2

LiveBench: A Challenging, Contamination-Free LLM Benchmark

1 code implementation 27 Jun 2024 Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Ben Feuer, Siddhartha Jain, Ravid Shwartz-Ziv, Neel Jain, Khalid Saifullah, Siddartha Naidu, Chinmay Hegde, Yann LeCun, Tom Goldstein, Willie Neiswanger, Micah Goldblum

In this work, we introduce a new benchmark for LLMs designed to be immune to both test set contamination and the pitfalls of LLM judging and human crowdsourcing.

Instruction Following · Math

DIMAT: Decentralized Iterative Merging-And-Training for Deep Learning Models

1 code implementation CVPR 2024 Nastaran Saadati, Minh Pham, Nasla Saleem, Joshua R. Waite, Aditya Balu, Zhanhong Jiang, Chinmay Hegde, Soumik Sarkar

This DIMAT paradigm presents a new opportunity for future decentralized learning, enhancing its adaptability to real-world settings with sparse and lightweight communication and computation.

Deep Learning

Robust Concept Erasure Using Task Vectors

no code implementations 4 Apr 2024 Minh Pham, Kelly O. Marshall, Chinmay Hegde, Niv Cohen

Finally, we show that Diverse Inversion enables us to apply a TV edit only to a subset of the model weights, enhancing the erasure capabilities while better maintaining the core functionality of the model.

Word Embeddings

TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks

2 code implementations 17 Feb 2024 Benjamin Feuer, Robin Tibor Schirrmeister, Valeriia Cherepanova, Chinmay Hegde, Frank Hutter, Micah Goldblum, Niv Cohen, Colin White

Notably, TabPFN achieves very strong performance on small tabular datasets but is not designed to make predictions for datasets with more than 1000 samples.

Fairness · In-Context Learning · +2

Scaling TabPFN: Sketching and Feature Selection for Tabular Prior-Data Fitted Networks

no code implementations 17 Nov 2023 Benjamin Feuer, Chinmay Hegde, Niv Cohen

Tabular classification has traditionally relied on supervised algorithms, which estimate the parameters of a prediction model using its training data.

feature selection · tabular-classification

Exploring Dataset-Scale Indicators of Data Quality

no code implementations 7 Nov 2023 Benjamin Feuer, Chinmay Hegde

Modern computer vision foundation models are trained on massive amounts of data, incurring large economic and environmental costs.

ArcheType: A Novel Framework for Open-Source Column Type Annotation using Large Language Models

1 code implementation 27 Oct 2023 Benjamin Feuer, Yurong Liu, Chinmay Hegde, Juliana Freire

We introduce ArcheType, a simple, practical method for context sampling, prompt serialization, model querying, and label remapping, which enables large language models to solve CTA problems in a fully zero-shot manner.

 Ranked #1 on Column Type Annotation on WDC SOTAB (Weighted F1 metric)

Column Type Annotation · Zero-Shot Learning
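
The abstract names four stages: context sampling, prompt serialization, model querying, and label remapping. A hedged sketch of that pipeline, where the prompt wording, the toy label map, and the `query_llm` stub are illustrative assumptions rather than ArcheType's own prompts or API:

```python
# Illustrative sketch of the four stages; names are assumptions.
import random

LABELS = {"person name": "Name", "telephone": "Phone"}  # hypothetical label map

def sample_context(column_values, k=5, seed=0):
    """Context sampling: pick k representative cell values."""
    random.seed(seed)
    return random.sample(column_values, min(k, len(column_values)))

def serialize_prompt(samples, label_set):
    """Prompt serialization: pack samples and allowed labels into text."""
    return (f"Column values: {', '.join(samples)}\n"
            f"Choose one type from: {', '.join(label_set)}\nAnswer:")

def query_llm(prompt):
    """Model querying (stub): a real system would call an LLM here."""
    return "person name"

def remap_label(raw_answer, label_map):
    """Label remapping: snap free-form model output onto the label space."""
    answer = raw_answer.strip().lower()
    return label_map.get(answer, next(
        (v for k, v in label_map.items() if k in answer or answer in k), None))

column = ["Alice Smith", "Bob Lee", "Carol Jones", "Dan Wu", "Eve Chan", "Frank"]
prompt = serialize_prompt(sample_context(column), LABELS)
print(remap_label(query_llm(prompt), LABELS))   # -> "Name"
```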

PriViT: Vision Transformers for Fast Private Inference

1 code implementation 6 Oct 2023 Naren Dhyani, Jianqiao Mo, Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde

The Vision Transformer (ViT) architecture has emerged as the backbone of choice for state-of-the-art deep models for computer vision applications.

Image Classification

On the Fine-Grained Hardness of Inverting Generative Models

no code implementations 11 Sep 2023 Feyza Duman Keles, Chinmay Hegde

This paper aims to provide a fine-grained view of the landscape of computational hardness for this problem.

Distributionally Robust Classification on a Data Budget

1 code implementation 7 Aug 2023 Benjamin Feuer, Ameya Joshi, Minh Pham, Chinmay Hegde

To our knowledge, this is the first result showing (near) state-of-the-art distributional robustness on limited data budgets.

Classification · Image Classification · +1

Circumventing Concept Erasure Methods For Text-to-Image Generative Models

1 code implementation 3 Aug 2023 Minh Pham, Kelly O. Marshall, Niv Cohen, Govind Mittal, Chinmay Hegde

Text-to-image generative models can produce photo-realistic images for an extremely broad range of concepts, and their usage has proliferated widely among the general public.

Face Swapping · Word Embeddings

Identity-Preserving Aging of Face Images via Latent Diffusion Models

1 code implementation 17 Jul 2023 Sudipta Banerjee, Govind Mittal, Ameya Joshi, Chinmay Hegde, Nasir Memon

The performance of automated face recognition systems is inevitably impacted by the facial aging process.

Face Recognition

Vision-Language Models can Identify Distracted Driver Behavior from Naturalistic Videos

1 code implementation 16 Jun 2023 Md Zahid Hasan, Jiajing Chen, Jiyang Wang, Mohammed Shaiqur Rahman, Ameya Joshi, Senem Velipasalar, Chinmay Hegde, Anuj Sharma, Soumik Sarkar

Our results show that this framework offers state-of-the-art performance on zero-shot transfer and video-based CLIP for predicting the driver's state on two public datasets.

Activity Recognition

ZeroForge: Feedforward Text-to-Shape Without 3D Supervision

1 code implementation 14 Jun 2023 Kelly O. Marshall, Minh Pham, Ameya Joshi, Anushrut Jignasu, Aditya Balu, Adarsh Krishnamurthy, Chinmay Hegde

Current state-of-the-art methods for text-to-shape generation either require supervised training using a labeled dataset of pre-defined 3D shapes, or perform expensive inference-time optimization of implicit neural representations.

Text-to-Shape Generation

When Do Neural Nets Outperform Boosted Trees on Tabular Data?

2 code implementations NeurIPS 2023 Duncan McElfresh, Sujay Khandagale, Jonathan Valverde, Vishak Prasad C, Benjamin Feuer, Chinmay Hegde, Ganesh Ramakrishnan, Micah Goldblum, Colin White

To this end, we conduct the largest tabular data analysis to date, comparing 19 algorithms across 176 datasets, and we find that the 'NN vs. GBDT' debate is overemphasized: for a surprisingly high number of datasets, either the performance difference between GBDTs and NNs is negligible, or light hyperparameter tuning on a GBDT is more important than choosing between NNs and GBDTs.
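
As a concrete reading of "light hyperparameter tuning on a GBDT", here is a minimal scikit-learn sketch; the tiny grid below is an assumption for illustration, not the search space used in the paper:

```python
# Light tuning: a handful of values on two influential GBDT knobs.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

grid = {"learning_rate": [0.03, 0.1, 0.3], "max_depth": [2, 3, 4]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0), grid, cv=3)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_te, y_te))
```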

LiT Tuned Models for Efficient Species Detection

1 code implementation 12 Feb 2023 Andre Nakkab, Benjamin Feuer, Chinmay Hegde

Recent advances in training vision-language models have demonstrated unprecedented robustness and transfer learning effectiveness; however, standard computer vision datasets are image-only, and therefore not well adapted to such training methods.

Fine-Grained Image Classification · Transfer Learning · +1

Implicit Regularization for Group Sparsity

1 code implementation 29 Jan 2023 Jiangyuan Li, Thanh V. Nguyen, Chinmay Hegde, Raymond K. W. Wong

We study the implicit regularization of gradient descent towards structured sparsity via a novel neural reparameterization, which we call a diagonally grouped linear neural network.

regression

Pathfinding Neural Cellular Automata

no code implementations 17 Jan 2023 Sam Earle, Ozlem Yildiz, Julian Togelius, Chinmay Hegde

As a step toward developing such networks, we hand-code and learn models for Breadth-First Search (BFS), i.e., shortest-path finding, using the unified architectural framework of Neural Cellular Automata, which are iterative neural networks with equal-size inputs and outputs.
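
For intuition, a hand-coded (not learned) BFS distance computation with the NCA flavor: every cell repeatedly applies the same local rule to an equal-size grid. This is a sketch of the idea, not the paper's architecture:

```python
# BFS distances via an iterative, purely local grid update.
import numpy as np

INF = 10**6
walls = np.zeros((5, 7), dtype=bool)
walls[1:4, 3] = True                      # a vertical obstacle

dist = np.full(walls.shape, INF)
dist[0, 0] = 0                            # source cell

for _ in range(walls.size):               # enough steps for any shortest path
    # each cell looks at its 4 neighbors (edges padded with INF) ...
    padded = np.pad(dist, 1, constant_values=INF)
    neighbor_min = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
        padded[1:-1, :-2], padded[1:-1, 2:],   # left, right
    ])
    # ... and applies the same local rule everywhere: min(self, neighbors+1)
    dist = np.where(walls, INF, np.minimum(dist, neighbor_min + 1))

print(dist[4, 6])    # shortest-path length from (0,0) to (4,6): 10
```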

Neural PDE Solvers for Irregular Domains

no code implementations 7 Nov 2022 Biswajit Khara, Ethan Herron, Zhanhong Jiang, Aditya Balu, Chih-Hsuan Yang, Kumar Saurabh, Anushrut Jignasu, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian

Neural network-based approaches for solving partial differential equations (PDEs) have recently received special attention.

Active Learning for Single Neuron Models with Lipschitz Non-Linearities

no code implementations 24 Oct 2022 Aarshvi Gajjar, Chinmay Hegde, Christopher Musco

Namely, we can collect samples via statistical leverage score sampling, which has been shown to be near-optimal in other active learning scenarios.

Active Learning
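
A minimal sketch of leverage score sampling for a linear model, computing exact scores from a thin SVD and reweighting the sampled rows; the single-neuron, Lipschitz nonlinearity setting of the paper is not reproduced here:

```python
# Rows are sampled with probability proportional to their leverage.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 5, 40                       # population, dims, label budget

X = rng.standard_normal((n, d))
U, _, _ = np.linalg.svd(X, full_matrices=False)
leverage = (U**2).sum(axis=1)              # row leverage scores, sum to d
probs = leverage / leverage.sum()

# query labels only at the m sampled rows (with replacement)
idx = rng.choice(n, size=m, p=probs)
weights = 1.0 / (m * probs[idx])           # inverse-propensity reweighting

# weighted least squares on the sampled rows approximates the full fit
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)
sw = np.sqrt(weights)
w_hat, *_ = np.linalg.lstsq(sw[:, None] * X[idx], sw * y[idx], rcond=None)
print(np.linalg.norm(w_hat - w_true))      # small
```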

Caption supervision enables robust learners

1 code implementation 13 Oct 2022 Benjamin Feuer, Ameya Joshi, Chinmay Hegde

Vision language (VL) models like CLIP are robust to natural distribution shifts, in part because CLIP learns on unstructured data using a technique called caption supervision; the model interprets image-linked texts as ground-truth labels.

GOTCHA: Real-Time Video Deepfake Detection via Challenge-Response

1 code implementation 12 Oct 2022 Govind Mittal, Chinmay Hegde, Nasir Memon

With the rise of AI-enabled Real-Time Deepfakes (RTDFs), the integrity of online video interactions has become a growing concern.

DeepFake Detection · Face Swapping

Distributed Online Non-convex Optimization with Composite Regret

no code implementations 21 Sep 2022 Zhanhong Jiang, Aditya Balu, Xian Yeow Lee, Young M. Lee, Chinmay Hegde, Soumik Sarkar

To address these two issues, we propose a novel composite regret with a new network regret-based metric to evaluate distributed online optimization algorithms.

On The Computational Complexity of Self-Attention

no code implementations 11 Sep 2022 Feyza Duman Keles, Pruthuvi Mahesakya Wijewardena, Chinmay Hegde

Transformer architectures have led to remarkable progress in many state-of-the-art applications.

Revisiting Self-Distillation

no code implementations 17 Jun 2022 Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde

We first show that even with a highly accurate teacher, self-distillation allows a student to surpass the teacher in all cases.

Knowledge Distillation · Model Compression
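
A generic self-distillation update in PyTorch, mixing a hard-label loss with a temperature-scaled KL term against the previous generation's logits; the toy models and hyperparameters are placeholders, not the paper's setup:

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x, y, optimizer, T=4.0, alpha=0.5):
    """One update mixing hard-label loss with KL to the frozen teacher."""
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(s_logits, y)
    loss = alpha * kd + (1 - alpha) * ce
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# teacher = previous generation; student = same architecture, fresh weights
teacher = torch.nn.Linear(10, 3)
student = torch.nn.Linear(10, 3)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
print(distill_step(student, teacher, x, y, opt))
```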

A Meta-Analysis of Distributionally-Robust Models

no code implementations 15 Jun 2022 Benjamin Feuer, Ameya Joshi, Chinmay Hegde

State-of-the-art image classifiers trained on massive datasets (such as ImageNet) have been shown to be vulnerable to a range of both intentional and incidental distribution shifts.

Smooth-Reduce: Leveraging Patches for Improved Certified Robustness

no code implementations 12 May 2022 Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde

Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
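
For background, a sketch of vanilla randomized smoothing certification (the procedure Smooth-Reduce builds on); the confidence bound below is a crude normal approximation, standing in for the exact Clopper-Pearson interval:

```python
import numpy as np
from scipy.stats import norm

def certify(predict, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
    """Estimate the smoothed classifier's top class and certified L2 radius."""
    rng = np.random.default_rng(0)
    noisy = x[None, :] + sigma * rng.standard_normal((n, x.size))
    counts = np.bincount([predict(z) for z in noisy], minlength=num_classes)
    top = int(counts.argmax())
    p_hat = counts[top] / n
    p_low = p_hat - norm.ppf(1 - alpha) * np.sqrt(p_hat * (1 - p_hat) / n)
    p_low = min(p_low, 1 - 1e-6)           # guard the degenerate p_hat = 1 case
    if p_low <= 0.5:
        return None                        # abstain: cannot certify
    return top, sigma * norm.ppf(p_low)    # class and certified radius

# toy base classifier: thresholds the first coordinate
predict = lambda z: int(z[0] > 0)
print(certify(predict, 0.3 * np.ones(5), num_classes=2))
```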

Selective Network Linearization for Efficient Private Inference

1 code implementation 4 Feb 2022 Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde

To reduce PI latency we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
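
One illustrative way to set up such a selective linearization: give each ReLU a trainable gate that interpolates toward the identity and add an L1 penalty pushing gates to zero. This is a sketch of the idea, not the paper's exact algorithm:

```python
import torch
import torch.nn as nn

class GatedReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.tensor(1.0))   # 1 = keep ReLU, 0 = linear

    def forward(self, x):
        c = self.gate.clamp(0.0, 1.0)
        return c * torch.relu(x) + (1 - c) * x

net = nn.Sequential(nn.Linear(8, 16), GatedReLU(), nn.Linear(16, 2))

def loss_fn(x, y, lam=1e-2):
    task = nn.functional.cross_entropy(net(x), y)
    sparsity = sum(m.gate.abs() for m in net.modules() if isinstance(m, GatedReLU))
    return task + lam * sparsity           # lam trades accuracy for fewer ReLUs

x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
print(loss_fn(x, y))
```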

MDPGT: Momentum-based Decentralized Policy Gradient Tracking

1 code implementation 6 Dec 2021 Zhanhong Jiang, Xian Yeow Lee, Sin Yong Tan, Kai Liang Tan, Aditya Balu, Young M. Lee, Chinmay Hegde, Soumik Sarkar

We propose a novel policy gradient method for multi-agent reinforcement learning, which leverages two different variance-reduction techniques and does not require large batches over iterations.

Multi-agent Reinforcement Learning · Policy Gradient Methods · +4

Adversarial Token Attacks on Vision Transformers

no code implementations 8 Oct 2021 Ameya Joshi, Gauri Jagatap, Chinmay Hegde

Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks.

Differentiable Spline Approximations

no code implementations NeurIPS 2021 Minsu Cho, Aditya Balu, Ameya Joshi, Anjana Deva Prasad, Biswajit Khara, Soumik Sarkar, Baskar Ganapathysubramanian, Adarsh Krishnamurthy, Chinmay Hegde

Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.

3D Point Cloud Reconstruction · BIG-bench Machine Learning · +3

NeuFENet: Neural Finite Element Solutions with Theoretical Bounds for Parametric PDEs

no code implementations 4 Oct 2021 Biswajit Khara, Aditya Balu, Ameya Joshi, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian

We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs).

Sphynx: ReLU-Efficient Network Design for Private Inference

no code implementations 17 Jun 2021 Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, Chinmay Hegde

The emergence of deep learning has been accompanied by privacy concerns surrounding users' data and service providers' models.

Provably Convergent Algorithms for Solving Inverse Problems Using Generative Models

no code implementations 13 May 2021 Viraj Shah, Rakib Hyder, M. Salman Asif, Chinmay Hegde

The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by deep generative networks).

NURBS-Diff: A Differentiable Programming Module for NURBS

no code implementations 29 Apr 2021 Anjana Deva Prasad, Aditya Balu, Harshil Shah, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy

These derivatives are used to define an approximate Jacobian for performing the "backward" evaluation that trains the deep learning models.

BIG-bench Machine Learning · Deep Learning · +1

Cross-Gradient Aggregation for Decentralized Learning from Non-IID data

1 code implementation 2 Mar 2021 Yasaman Esfandiari, Sin Yong Tan, Zhanhong Jiang, Aditya Balu, Ethan Herron, Chinmay Hegde, Soumik Sarkar

Inspired by ideas from continual learning, we propose Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm where (i) each agent aggregates cross-gradient information, i.e., derivatives of its model with respect to its neighbors' datasets, and (ii) updates its model using a projected gradient based on quadratic programming (QP).

Continual Learning
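
A heavily simplified sketch of the cross-gradient idea on a ring of agents, where each agent averages its model's gradients evaluated on its own and its neighbors' datasets; the paper instead combines them through a quadratic-programming projection:

```python
import numpy as np

rng = np.random.default_rng(0)
A = 4                                        # agents on a ring
w_true = rng.standard_normal(3)

def agent_data():                            # each agent sees its own samples
    X = rng.standard_normal((50, 3))
    return X, X @ w_true + 0.1 * rng.standard_normal(50)

data = [agent_data() for _ in range(A)]
w = [np.zeros(3) for _ in range(A)]

def grad(w_i, X, y):                         # least-squares gradient
    return X.T @ (X @ w_i - y) / len(y)

for step in range(300):
    # cross-gradients: agent i's weights, differentiated on the datasets
    # of itself and its two ring neighbors, then simply averaged
    w = [w[i] - 0.1 * np.mean(
             [grad(w[i], *data[j]) for j in (i, (i - 1) % A, (i + 1) % A)],
             axis=0)
         for i in range(A)]

print(max(np.linalg.norm(wi - w_true) for wi in w))   # all agents near w_true
```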

Provable Compressed Sensing with Generative Priors via Langevin Dynamics

no code implementations 25 Feb 2021 Thanh V. Nguyen, Gauri Jagatap, Chinmay Hegde

Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval and super-resolution.

Retrieval · Super-Resolution

Deep Generative Models that Solve PDEs: Distributed Computing for Training Large Data-Free Models

no code implementations 24 Jul 2020 Sergio Botelho, Ameya Joshi, Biswajit Khara, Soumik Sarkar, Chinmay Hegde, Santi Adavani, Baskar Ganapathysubramanian

Here we report on a software framework for data parallel distributed deep learning that resolves the twin challenges of training these large SciML models: training in reasonable time as well as distributing the storage requirements.

Decoder · Distributed Computing

Hyperparameter Optimization in Neural Networks via Structured Sparse Recovery

no code implementations 7 Jul 2020 Minsu Cho, Mohammadreza Soltani, Chinmay Hegde

In this paper, we study two important problems in the automated design of neural networks -- Hyper-parameter Optimization (HPO), and Neural Architecture Search (NAS) -- through the lens of sparse recovery methods.

Hyperparameter Optimization · Neural Architecture Search

ESPN: Extremely Sparse Pruned Networks

1 code implementation 28 Jun 2020 Minsu Cho, Ameya Joshi, Chinmay Hegde

Deep neural networks are often highly overparameterized, prohibiting their use in compute-limited systems.

Network Pruning

Benefits of Jointly Training Autoencoders: An Improved Neural Tangent Kernel Analysis

no code implementations 27 Nov 2019 Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

Starting from a randomly initialized autoencoder network, we rigorously prove the linear convergence of gradient descent in two learning regimes, namely: (i) the weakly-trained regime where only the encoder is trained, and (ii) the jointly-trained regime where both the encoder and the decoder are trained.
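
The two regimes from the abstract, set up in PyTorch: the weakly-trained regime updates only the encoder, while the jointly-trained regime updates both. This sketches the setup only; the paper's contribution is the convergence analysis:

```python
import torch
import torch.nn as nn

def make_autoencoder(d=20, h=5):
    return nn.Linear(d, h), nn.Linear(h, d)        # encoder, decoder

def train(encoder, decoder, X, weakly_trained, steps=200, lr=1e-2):
    params = list(encoder.parameters())
    if not weakly_trained:
        params += list(decoder.parameters())       # joint regime
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(steps):
        loss = ((decoder(torch.relu(encoder(X))) - X) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

X = torch.randn(256, 20)
for weakly in (True, False):
    enc, dec = make_autoencoder()
    print("weak" if weakly else "joint", train(enc, dec, X, weakly))
```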

On Higher-order Moments in Adam

no code implementations 15 Oct 2019 Zhanhong Jiang, Aditya Balu, Sin Yong Tan, Young M. Lee, Chinmay Hegde, Soumik Sarkar

In this paper, we investigate the popular deep learning optimization routine, Adam, from the perspective of statistical moments.

Surrogate-Based Constrained Langevin Sampling With Applications to Optimal Material Configuration Design

no code implementations 25 Sep 2019 Thanh V Nguyen, Youssef Mroueh, Samuel C. Hoffman, Payel Das, Pierre Dognin, Giuseppe Romano, Chinmay Hegde

We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied.

Phase Retrieval using Untrained Neural Network Priors

no code implementations NeurIPS Workshop Deep_Invers 2019 Gauri Jagatap, Chinmay Hegde

Untrained deep neural networks as image priors have been recently introduced for linear inverse imaging problems such as denoising, super-resolution, inpainting and compressive sensing with promising performance gains over hand-crafted image priors such as sparsity.

Compressive Sensing · Denoising · +2

Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents

1 code implementation 5 Sep 2019 Xian Yeow Lee, Sambit Ghadai, Kai Liang Tan, Chinmay Hegde, Soumik Sarkar

In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent with decoupled constraints as the budget of attack.

Deep Reinforcement Learning · reinforcement-learning · +1

Algorithmic Guarantees for Inverse Imaging with Untrained Network Priors

2 code implementations NeurIPS 2019 Gauri Jagatap, Chinmay Hegde

Specifically, we consider the problem of solving linear inverse problems, such as compressive sensing, as well as non-linear problems, such as compressive phase retrieval.

Compressive Sensing · Denoising · +2

Encoding Invariances in Deep Generative Models

no code implementations 4 Jun 2019 Viraj Shah, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde

Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions.

Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers

1 code implementation ICCV 2019 Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde

We propose a novel approach to generate such 'semantic' adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model.

A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations

no code implementations 11 Apr 2019 Chinmay Hegde, Fritz Keinert, Eric S. Weber

We introduce a modified Kaczmarz algorithm for solving systems of linear equations in a distributed environment, i.e., the equations within the system are distributed over multiple nodes within a network.
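
For reference, the classical Kaczmarz update that the distributed, tree-based variant builds on: cyclically project the iterate onto one equation's hyperplane at a time. The distribution over network nodes is not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                              # consistent linear system

x = np.zeros(10)
for sweep in range(50):
    for i in range(len(b)):                 # one projection per equation
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a   # project onto <a_i, x> = b_i

print(np.linalg.norm(x - x_true))           # converges to the solution
```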

Alternating Phase Projected Gradient Descent with Generative Priors for Solving Compressive Phase Retrieval

no code implementations 7 Mar 2019 Rakib Hyder, Viraj Shah, Chinmay Hegde, M. Salman Asif

We empirically show that the performance of our method with projected gradient descent is superior to the existing approach for solving phase retrieval under generative priors.

Retrieval

Signal Reconstruction from Modulo Observations

1 code implementation 3 Dec 2018 Viraj Shah, Chinmay Hegde

We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements).

Retrieval
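
The measurement model in a few lines of NumPy: modulo observations fold the linear measurements back into [0, R), so recovery must implicitly undo the folding. The dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, R = 20, 100, 4.0                      # signal dim, measurements, modulus
x = rng.standard_normal(n)
A = rng.standard_normal((m, n)) / np.sqrt(m)

y = np.mod(A @ x, R)                        # sign/magnitude overflow is lost
# recovery must undo the folding: A @ x = y + R * k for unknown integers k
k_true = np.floor(A @ x / R)
assert np.allclose(A @ x, y + R * k_true)
```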

Physics-aware Deep Generative Models for Creating Synthetic Microstructures

no code implementations 21 Nov 2018 Rahul Singh, Viraj Shah, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde

The first model is a WGAN model that uses a finite number of training images to synthesize new microstructures that weakly satisfy the physical invariances respected by the original data.

Stochastic Optimization

Algorithmic Aspects of Inverse Problems Using Generative Models

no code implementations 8 Oct 2018 Chinmay Hegde

The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by generative adversarial networks, or GANs).

Learning ReLU Networks via Alternating Minimization

no code implementations 20 Jun 2018 Gauri Jagatap, Chinmay Hegde

We propose and analyze a new family of algorithms for training neural networks with ReLU activations.

Autoencoders Learn Generative Linear Models

no code implementations 2 Jun 2018 Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

For each of these models, we prove that under suitable choices of hyperparameters, architectures, and initialization, autoencoders learned by gradient descent can successfully recover the parameters of the corresponding model.

On Consensus-Optimality Trade-offs in Collaborative Deep Learning

no code implementations 30 May 2018 Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar

In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality.

Deep Learning · Navigate

On Learning Sparsely Used Dictionaries from Incomplete Samples

no code implementations ICML 2018 Thanh V. Nguyen, Akshay Soni, Chinmay Hegde

Second, we propose an initialization algorithm that utilizes a small number of extra fully observed samples to produce such a coarse initial estimate.

Dictionary Learning

Solving Linear Inverse Problems Using GAN Priors: An Algorithm with Provable Guarantees

1 code implementation 23 Feb 2018 Viraj Shah, Chinmay Hegde

In this work, we advocate the idea of replacing hand-crafted priors, such as sparsity, with a Generative Adversarial Network (GAN) to solve linear inverse problems such as compressive sensing.

Compressive Sensing · Generative Adversarial Network
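
A minimal sketch of recovery with a generative prior: search the generator's latent space so that A @ G(z) matches the measurements. The generator here is an untrained stand-in, and plain gradient descent replaces the paper's projected-gradient algorithm with guarantees:

```python
import torch

torch.manual_seed(0)
n, m, k = 64, 32, 8                          # signal dim, measurements, latent dim
G = torch.nn.Sequential(torch.nn.Linear(k, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, n))
A = torch.randn(m, n) / m ** 0.5

z_star = torch.randn(k)
with torch.no_grad():
    y = A @ G(z_star)                        # measurements of a signal in range(G)

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    loss = ((A @ G(z) - y) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())                           # small if the latent search succeeds
```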

Fast Low-Rank Matrix Estimation without the Condition Number

no code implementations 8 Dec 2017 Mohammadreza Soltani, Chinmay Hegde

In this paper, we provide a novel algorithmic framework that achieves the best of both worlds: asymptotically as fast as factorization methods, while requiring no dependency on the condition number.

Fast, Sample-Efficient Algorithms for Structured Phase Retrieval

no code implementations NeurIPS 2017 Gauri Jagatap, Chinmay Hegde

For this problem, we design a recovery algorithm, Block CoPRAM, that further reduces the sample complexity to $O(ks\log n)$.

Retrieval

A Forward-Backward Approach for Visualizing Information Flow in Deep Networks

no code implementations 16 Nov 2017 Aditya Balu, Thanh V. Nguyen, Apurva Kokate, Chinmay Hegde, Soumik Sarkar

We introduce a new, systematic framework for visualizing information flow in deep networks.

Provably Accurate Double-Sparse Coding

1 code implementation 9 Nov 2017 Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde

To our knowledge, our work introduces the first computationally efficient algorithm for double-sparse coding that enjoys rigorous statistical guarantees.

Demixing Structured Superposition Signals from Periodic and Aperiodic Nonlinear Observations

no code implementations 8 Aug 2017 Mohammadreza Soltani, Chinmay Hegde

We consider the demixing problem of two (or more) structured high-dimensional vectors from a limited number of nonlinear observations where this nonlinearity is due to either a periodic or an aperiodic function.

Fast Algorithms for Learning Latent Variables in Graphical Models

no code implementations 27 Jun 2017 Mohammadreza Soltani, Chinmay Hegde

Existing methods for this problem assume that the precision matrix of the observed variables is the superposition of a sparse and a low-rank component.

Collaborative Deep Learning in Fixed Topology Networks

no code implementations NeurIPS 2017 Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar

There is significant recent interest to parallelize deep learning algorithms in order to handle the enormous growth in data and model sizes.

Deep Learning

Improved Algorithms for Matrix Recovery from Rank-One Projections

no code implementations 21 May 2017 Mohammadreza Soltani, Chinmay Hegde

We consider the problem of estimation of a low-rank matrix from a limited number of noisy rank-one projections.

Sample-Efficient Algorithms for Recovering Structured Signals from Magnitude-Only Measurements

1 code implementation 18 May 2017 Gauri Jagatap, Chinmay Hegde

For this problem, we design a recovery algorithm Block CoPRAM that further reduces the sample complexity to $O(ks\log n)$.

Retrieval
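
The structural core of the block-sparse model is a hard-thresholding operator that keeps the s most energetic blocks; a sketch of that operator alone, without Block CoPRAM's alternating-minimization loop:

```python
import numpy as np

def block_hard_threshold(x, block_size, s):
    """Keep the s blocks of x with the largest L2 energy; zero the rest."""
    blocks = x.reshape(-1, block_size)
    energy = (blocks ** 2).sum(axis=1)
    keep = np.argsort(energy)[-s:]                 # indices of top-s blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

x = np.array([0.1, 0.2, 3.0, 4.0, 0.0, 0.1, 2.0, 2.0])
print(block_hard_threshold(x, block_size=2, s=2))
# -> [0. 0. 3. 4. 0. 0. 2. 2.]  (the two highest-energy blocks survive)
```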

Iterative Thresholding for Demixing Structured Superpositions in High Dimensions

no code implementations 23 Jan 2017 Mohammadreza Soltani, Chinmay Hegde

Specifically, we show that for certain types of structured superposition models, our method provably recovers the components given merely $n = \mathcal{O}(s)$ samples where $s$ denotes the number of nonzero entries in the underlying components.

Vocal Bursts Intensity Prediction

Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps

no code implementations 23 Jan 2017 Mohammadreza Soltani, Chinmay Hegde

Random sinusoidal features are a popular approach for speeding up kernel-based inference in large datasets.

Dimensionality Reduction
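
For reference, the classic random sinusoidal (Fourier) feature map for the Gaussian kernel, where feature inner products approximate the kernel value; the parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000                               # input dim, number of features

W = rng.standard_normal((D, d))              # frequencies from the kernel's spectrum
b = rng.uniform(0, 2 * np.pi, D)             # random phases

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
approx = phi(x) @ phi(y)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)  # Gaussian kernel k(x, y)
print(approx, exact)                         # close for large D
```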

Fast recovery from a union of subspaces

no code implementations NeurIPS 2016 Chinmay Hegde, Piotr Indyk, Ludwig Schmidt

We address the problem of recovering a high-dimensional but structured vector from linear observations in a general setting where the vector can come from an arbitrary union of subspaces.

Compressive Sensing

Fast Algorithms for Demixing Sparse Signals from Nonlinear Observations

no code implementations 3 Aug 2016 Mohammadreza Soltani, Chinmay Hegde

We study the problem of demixing a pair of sparse signals from noisy, nonlinear observations of their superposition.

Astronomy

Efficient Upsampling of Natural Images

no code implementations 28 Feb 2015 Chinmay Hegde, Oncel Tuzel, Fatih Porikli

1) For the edge layer, we use a nonparametric approach by constructing a dictionary of patches from a given image, and synthesize edge regions in a higher-resolution version of the image.

Sparse Signal Recovery Using Markov Random Fields

no code implementations NeurIPS 2008 Volkan Cevher, Marco F. Duarte, Chinmay Hegde, Richard Baraniuk

Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals.

Compressive Sensing

Random Projections for Manifold Learning

no code implementations NeurIPS 2007 Chinmay Hegde, Michael Wakin, Richard Baraniuk

First, we show that with a small number $M$ of random projections of sample points in $\mathbb{R}^N$ belonging to an unknown $K$-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy.

Dimensionality Reduction
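
The mechanism behind the first claim: a small number M of random projections approximately preserves pairwise distances of points on a low-dimensional manifold, so its geometry (and hence intrinsic dimension) survives. A toy check on a circle embedded in high dimension:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, n_pts = 1000, 50, 200                  # ambient dim, projections, samples

t = rng.uniform(0, 2 * np.pi, n_pts)         # a 1-D manifold (circle) in R^N
basis = rng.standard_normal((2, N)) / np.sqrt(N)
points = np.column_stack([np.cos(t), np.sin(t)]) @ basis

P = rng.standard_normal((M, N)) / np.sqrt(M) # random projection to R^M
proj = points @ P.T

i, j = 3, 77
print(np.linalg.norm(points[i] - points[j]),
      np.linalg.norm(proj[i] - proj[j]))     # nearly equal, up to JL distortion
```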
