Search Results for author: Pratik Chaudhari

Found 29 papers, 13 papers with code

Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity

no code implementations25 Feb 2022 Shiyun Xu, Zhiqi Bu, Pratik Chaudhari, Ian J. Barnett

To empower NAM with feature selection and improve generalization, we propose the sparse neural additive model (SNAM), which employs group sparsity regularization (e.g., Group LASSO): each feature is learned by a sub-network whose trainable parameters are clustered as a group.

Additive models Interpretable Machine Learning
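
The group-sparsity idea can be sketched in a few lines: each feature's sub-network weights form one group, and the penalty is the sum of the groups' L2 norms, which drives entire groups (and hence entire features) to zero. This is a minimal numpy illustration of a Group LASSO penalty, not the paper's actual training code; the weight values below are hypothetical.

```python
import numpy as np

def group_lasso_penalty(groups, lam=0.1):
    """Group LASSO: lam times the sum of L2 norms of each parameter group.

    Because the L2 norm is non-differentiable at zero, optimization tends
    to zero out whole groups -- here, all weights of one feature's
    sub-network -- which performs feature selection.
    """
    return lam * sum(np.linalg.norm(g) for g in groups)

# Hypothetical example: three features, each with its own weight group.
groups = [np.array([0.5, -0.3]),   # feature 1 sub-network weights
          np.array([0.0, 0.0]),    # feature 2 pruned away entirely
          np.array([1.2, 0.7])]    # feature 3 sub-network weights
penalty = group_lasso_penalty(groups)
```

A group that is exactly zero contributes nothing to the penalty, so the pruned feature is "free" while active features pay for their norm.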

Deep Reference Priors: What is the best way to pretrain a model?

no code implementations AABI Symposium 2022 Yansong Gao, Rahul Ramesh, Pratik Chaudhari

Second, by using labeled data from the source task to compute the reference prior, we develop a new pretraining method for transfer learning that allows data from the target task to maximally affect the Bayesian posterior.

Image Classification Transfer Learning

Does the Data Induce Capacity Control in Deep Learning?

no code implementations27 Oct 2021 Rubing Yang, Jialin Mao, Pratik Chaudhari

This structure is mirrored in a network trained on this data: we show that the Hessian and the Fisher Information Matrix (FIM) have eigenvalues that are spread uniformly over exponentially large ranges.

Generalization Bounds

Model Zoo: A Growing Brain That Learns Continually

no code implementations ICLR 2022 Rahul Ramesh, Pratik Chaudhari

This paper argues that continual learning methods can benefit by splitting the capacity of the learner across multiple models.

Continual Learning Learning Theory

Harmonization with Flow-based Causal Inference

1 code implementation12 Jun 2021 Rongguang Wang, Pratik Chaudhari, Christos Davatzikos

Heterogeneity in medical data, e.g., from data collected at different sites and with different protocols in a clinical study, is a fundamental hurdle for accurate prediction using machine learning models, as such models often fail to generalize well.

Counterfactual Inference Domain Generalization

Model Zoo: A Growing "Brain" That Learns Continually

1 code implementation6 Jun 2021 Rahul Ramesh, Pratik Chaudhari

This paper argues that continual learning methods can benefit by splitting the capacity of the learner across multiple models.

Continual Learning Learning Theory

Deformable Linear Object Prediction Using Locally Linear Latent Dynamics

1 code implementation26 Mar 2021 Wenbo Zhang, Karl Schmeckpeper, Pratik Chaudhari, Kostas Daniilidis

We empirically demonstrate that our approach can predict the rope state accurately up to ten steps into the future and that our algorithm can find the optimal action given an initial state and a goal state.

Embracing the Disharmony in Medical Imaging: A Simple and Effective Framework for Domain Adaptation

no code implementations23 Mar 2021 Rongguang Wang, Pratik Chaudhari, Christos Davatzikos

We can also tackle situations where we do not have access to ground-truth labels on target data; we show how one can use auxiliary tasks for adaptation; these tasks employ covariates such as age, gender, and race, which are easy to obtain but nevertheless correlated with the main task.

Auxiliary Learning Domain Generalization +1

Continuous Doubly Constrained Batch Reinforcement Learning

1 code implementation NeurIPS 2021 Rasool Fakoor, Jonas Mueller, Kavosh Asadi, Pratik Chaudhari, Alexander J. Smola

Reliant on too many experiments to learn good actions, current Reinforcement Learning (RL) algorithms have limited applicability in real-world settings, which can be too expensive to allow exploration.


Scalable Reinforcement Learning Policies for Multi-Agent Control

1 code implementation16 Nov 2020 Christopher D. Hsu, Heejin Jeong, George J. Pappas, Pratik Chaudhari

Our method can handle an arbitrary number of pursuers and targets; we show results for tasks consisting of up to 1000 pursuers tracking 1000 targets.

Multi-agent Reinforcement Learning reinforcement-learning

An Information-Geometric Distance on the Space of Tasks

1 code implementation NeurIPS Workshop DL-IG 2020 Yansong Gao, Pratik Chaudhari

Using tools in information geometry, the distance is defined to be the length of the shortest weight trajectory on a Riemannian manifold as a classifier is fitted on an interpolated task.

Image Classification Transfer Learning
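
The length computation underlying such a distance can be illustrated concretely: the length of a discretized weight trajectory under a Riemannian metric is the sum of sqrt(dw^T G dw) over consecutive steps. The sketch below only measures the length of a given path under a fixed, hypothetical metric; the paper's distance additionally minimizes this length over trajectories as a classifier is fitted on interpolated tasks.

```python
import numpy as np

def trajectory_length(weights, metric):
    """Length of a piecewise-linear weight trajectory under metric G:
    sum over steps of sqrt(dw^T G dw)."""
    length = 0.0
    for a, b in zip(weights[:-1], weights[1:]):
        dw = b - a
        length += np.sqrt(dw @ metric @ dw)
    return length

# Toy straight-line path in 2-D weight space under the identity metric,
# where the Riemannian length reduces to ordinary Euclidean length.
path = [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([1.0, 0.0])]
G = np.eye(2)
length = trajectory_length(path, G)
```

In the paper the metric comes from information geometry (e.g., the Fisher Information Matrix) rather than the identity used in this toy example.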

Proximal Deterministic Policy Gradient

no code implementations3 Aug 2020 Marco Maggipinto, Gian Antonio Susto, Pratik Chaudhari

This paper introduces two simple techniques to improve off-policy Reinforcement Learning (RL) algorithms.

Continuous Control reinforcement-learning

DDPG++: Striving for Simplicity in Continuous-control Off-Policy Reinforcement Learning

no code implementations26 Jun 2020 Rasool Fakoor, Pratik Chaudhari, Alexander J. Smola

This paper prescribes a suite of techniques for off-policy Reinforcement Learning (RL) that simplify the training process and reduce the sample complexity.

Continuous Control reinforcement-learning

Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation

1 code implementation NeurIPS 2020 Rasool Fakoor, Jonas Mueller, Nick Erickson, Pratik Chaudhari, Alexander J. Smola

Automated machine learning (AutoML) can produce complex model ensembles by stacking, bagging, and boosting many individual models like trees, deep networks, and nearest neighbor estimators.

AutoML Data Augmentation
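
The distillation step that compresses such an ensemble into a single fast model is commonly driven by a KL-divergence loss between temperature-softened teacher and student predictions. This is a generic knowledge-distillation sketch under that assumption, not the paper's augmented-distillation procedure; the logits below are hypothetical.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    the standard knowledge-distillation objective."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [2.0, 0.5, -1.0]   # ensemble's averaged logits (hypothetical)
student = [1.5, 0.7, -0.8]   # simple model's logits (hypothetical)
loss = distillation_loss(student, teacher)
```

The loss is zero exactly when the student reproduces the teacher's softened distribution, and positive otherwise.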

BayesRace: Learning to race autonomously using prior experience

1 code implementation10 May 2020 Achin Jain, Matthew O'Kelly, Pratik Chaudhari, Manfred Morari

Autonomous race cars require perception, estimation, planning, and control modules which work together asynchronously while driving at the limit of a vehicle's handling capability.

TraDE: Transformers for Density Estimation

no code implementations6 Apr 2020 Rasool Fakoor, Pratik Chaudhari, Jonas Mueller, Alexander J. Smola

We present TraDE, a self-attention-based architecture for auto-regressive density estimation with continuous and discrete valued data.

Density Estimation Out-of-Distribution Detection

A Free-Energy Principle for Representation Learning

no code implementations ICML 2020 Yansong Gao, Pratik Chaudhari

This paper employs a formal connection of machine learning with thermodynamics to characterize the quality of learnt representations for transfer learning.

Classification General Classification +3

Rethinking the Hyperparameters for Fine-tuning

1 code implementation ICLR 2020 Hao Li, Pratik Chaudhari, Hao Yang, Michael Lam, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

Our findings challenge common practices of fine-tuning and encourage deep learning practitioners to rethink the hyperparameters for fine-tuning.

Transfer Learning

Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications

no code implementations8 Oct 2019 Matteo Terzi, Gian Antonio Susto, Pratik Chaudhari

Adversarial Training is a training procedure aiming at providing models that are robust to worst-case perturbations around predefined points.

Classification General Classification
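
A standard way to generate such worst-case perturbations is the Fast Gradient Sign Method (FGSM): one ascent step on the loss, constrained to an L-infinity ball of radius eps around the input. The sketch below uses a toy linear model, not the paper's directional variant, to show the mechanics.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM: move each input coordinate by eps in the direction that
    increases the loss (the sign of the input gradient)."""
    return x + eps * np.sign(grad)

# Toy linear classifier with loss = -y * (w . x), so the gradient of the
# loss with respect to x is -y * w.
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y = 1.0
grad_x = -y * w
x_adv = fgsm_perturb(x, grad_x, eps=0.1)
```

Adversarial training would then minimize the loss at `x_adv` instead of (or alongside) `x`, making the model robust to such perturbations.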


1 code implementation ICLR 2020 Rasool Fakoor, Pratik Chaudhari, Stefano Soatto, Alexander J. Smola

This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL).

Continuous Control Meta Reinforcement Learning +2

A Baseline for Few-Shot Image Classification

no code implementations ICLR 2020 Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto

When fine-tuned transductively, this outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS and FC-100 with the same hyper-parameters.

Classification Few-Shot Image Classification +1

P3O: Policy-on Policy-off Policy Optimization

1 code implementation5 May 2019 Rasool Fakoor, Pratik Chaudhari, Alexander J. Smola

Extensive experiments on the Atari-2600 and MuJoCo benchmark suites show that this simple technique is effective in reducing the sample complexity of state-of-the-art algorithms.


Parle: parallelizing stochastic gradient descent

no code implementations3 Jul 2017 Pratik Chaudhari, Carlo Baldassi, Riccardo Zecchina, Stefano Soatto, Ameet Talwalkar, Adam Oberman

We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters.

Deep Relaxation: partial differential equations for optimizing deep neural networks

no code implementations17 Apr 2017 Pratik Chaudhari, Adam Oberman, Stanley Osher, Stefano Soatto, Guillaume Carlier

In this paper we establish a connection between non-convex optimization methods for training deep neural networks and nonlinear partial differential equations (PDEs).

Entropy-SGD: Biasing Gradient Descent Into Wide Valleys

2 code implementations6 Nov 2016 Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, Riccardo Zecchina

This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape.
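
The "wide valley" bias can be caricatured in a few lines: instead of following the gradient at the current weights, follow the gradient averaged over a Gaussian neighborhood, which prefers regions where the loss stays low nearby. The paper computes this local-entropy gradient with an SGLD inner loop; the Monte-Carlo averaging below is a simplified stand-in, and the quadratic loss is a hypothetical placeholder for a network's loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w):
    # Toy quadratic loss f(w) = w^2; stands in for a network's gradient.
    return 2.0 * w

def local_entropy_grad(w, sigma=0.1, n_samples=20):
    """Crude sketch of an Entropy-SGD-style update direction: average the
    loss gradient over Gaussian-perturbed copies of the weights. Sharp
    minima, where nearby gradients disagree, are smoothed away, biasing
    descent toward wide, flat valleys."""
    samples = w + sigma * rng.standard_normal((n_samples,) + w.shape)
    return np.mean([loss_grad(s) for s in samples], axis=0)
```

For this quadratic toy loss the smoothed gradient agrees with the plain gradient in expectation; the bias toward flat regions only shows up on losses with minima of different widths.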

On the energy landscape of deep networks

no code implementations20 Nov 2015 Pratik Chaudhari, Stefano Soatto

Specifically, we show that a regularization term akin to a magnetic field can be modulated with a single scalar parameter to transition the loss function from a complex, non-convex landscape with exponentially many local minima, to a phase with a polynomial number of minima, all the way down to a trivial landscape with a unique minimum.
