1 code implementation • 18 Dec 2024 • Malay Pandey, Vaishali Jain, Nimit Godhani, Sachchida Nand Tripathi, Piyush Rai
One such example is forecasting the concentration of fine particulate matter (PM2.5) in the atmosphere, which is influenced by many complex factors, the most important being diffusion due to meteorological conditions and transport across vast distances over time.
no code implementations • 27 Nov 2024 • Shivam Pal, Aishwarya Gupta, Saqib Sarwar, Piyush Rai
Moreover, Bayesian FL also naturally enables personalization in FL to handle data heterogeneity across the different clients by having each client learn its own distinct personalized model.
no code implementations • 30 Aug 2024 • Avideep Mukherjee, Soumya Banerjee, Piyush Rai, Vinay P. Namboodiri
To this end, we design a retrieval-augmented generation (RAG) approach and leverage the corresponding blocks of the images retrieved by the RAG module to condition the training and generation stages of a block-wise denoising diffusion model.
no code implementations • 13 Aug 2024 • Aishwarya Gupta, Indranil Saha, Piyush Rai
We present a novel black-box coverage criterion called Co-Domain Coverage (CDC), which is defined as a function of the model's output and thus takes into account its end-to-end behavior.
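To make the idea concrete, here is a minimal sketch of how an output-space coverage criterion of this kind could be computed; the binning scheme, function name, and choice of top-class confidence as the output statistic are illustrative assumptions, not the paper's exact definition of CDC:

```python
import numpy as np

def codomain_coverage(model_outputs: np.ndarray, num_bins: int = 10) -> float:
    """Illustrative coverage criterion over the model's co-domain:
    partition the range of the top-class confidence into bins and
    measure the fraction of bins exercised by the test suite."""
    scores = model_outputs.max(axis=1)  # top-class confidence per test input
    bins = np.floor(scores * num_bins).clip(0, num_bins - 1).astype(int)
    return len(np.unique(bins)) / num_bins

# Usage: model_outputs is a (num_test_inputs, num_classes) array of
# softmax probabilities collected from end-to-end runs of the model.
```

Because it is defined purely on the model's outputs, such a criterion needs no access to internal neurons, which is what makes it black-box.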
no code implementations • 15 Sep 2023 • Soumya Banerjee, Vinay K. Verma, Avideep Mukherjee, Deepak Gupta, Vinay P. Namboodiri, Piyush Rai
Streaming lifelong learning is a challenging variant of lifelong learning, in which the goal is continual learning in a dynamic, non-stationary environment without forgetting.
1 code implementation • CVPR 2023 • Dhanajit Brahma, Piyush Rai
Test-time adaptation (TTA) is the problem of updating a pre-trained source model at inference time given test input(s) from a different target domain.
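For context, a common generic instantiation of TTA updates a small set of parameters by minimizing prediction entropy on unlabeled test batches; the sketch below follows that recipe (in the spirit of entropy-minimization methods such as Tent) and is not the specific method of this paper:

```python
import torch
import torch.nn.functional as F

def tta_step(model, x_test, optimizer):
    """One generic test-time adaptation step: minimize the entropy of the
    model's predictions on an unlabeled batch from the target domain."""
    logits = model(x_test)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()
```

In practice, the optimizer is usually given only lightweight parameters (e.g., normalization-layer affine terms) so that adaptation stays stable.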
no code implementations • 21 Jul 2022 • K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian
Inspired by this, we identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting, which tasks a machine learning model to incrementally discover novel categories of instances from unlabeled data, while maintaining its performance on the previously seen categories.
no code implementations • 15 Jun 2022 • Shrey Bhatt, Aishwarya Gupta, Piyush Rai
In many situations, however, especially in limited-data settings, it is beneficial to account for the uncertainty in each client's model parameters: doing so yields more accurate predictions, and reliable uncertainty estimates are useful for tasks such as out-of-distribution (OOD) detection and for sequential decision-making tasks such as active learning.
1 code implementation • 22 Apr 2022 • K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian
Novel Class Discovery (NCD) is a learning paradigm, where a machine learning model is tasked to semantically group instances from unlabeled data, by utilizing labeled instances from a disjoint set of classes.
no code implementations • 18 Apr 2022 • Ankur Singh, Piyush Rai
The proposed semi-supervised technique can be used as a plug-and-play module with any supervised GAN-based Super-Resolution method to enhance its performance.
3 code implementations • 2 Jan 2022 • Kushagra Pandey, Avideep Mukherjee, Piyush Rai, Abhishek Kumar
Diffusion probabilistic models have been shown to generate state-of-the-art results on several competitive image synthesis benchmarks but lack a low-dimensional, interpretable latent space, and are slow at generation.
Ranked #20 on Image Generation on CelebA 64x64
no code implementations • 4 Dec 2021 • Ansh Khurana, Sujoy Paul, Piyush Rai, Soma Biswas, Gaurav Aggarwal
In Test-time Adaptation (TTA), given a source model, the goal is to adapt it to make better predictions for test instances from a different distribution than the source.
no code implementations • 7 Nov 2021 • Avinandan Bose, Aniket Das, Yatin Dandi, Piyush Rai
In this work, we propose a novel generative model that learns a flexible non-parametric prior over interpolation trajectories, conditioned on a pair of source and target images.
no code implementations • 5 Oct 2021 • Dhanajit Brahma, Vinay Kumar Verma, Piyush Rai
Further, we present $\textit{Semi-Split CIFAR-10}$, a new benchmark for continual semi-supervised learning, obtained by modifying the $\textit{Split CIFAR-10}$ dataset, in which the tasks with labelled and unlabelled data arrive sequentially.
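A rough sketch of how such a benchmark could be assembled from CIFAR-10; the two-classes-per-task pairing and the labelled fraction below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def make_semi_split_tasks(labels, num_tasks=5, labelled_frac=0.1, seed=0):
    """Split CIFAR-10 into sequential 2-class tasks, keeping only a small
    labelled subset per task; the remaining indices are unlabelled."""
    rng = np.random.default_rng(seed)
    tasks = []
    for t in range(num_tasks):
        classes = (2 * t, 2 * t + 1)
        idx = np.where(np.isin(labels, classes))[0]
        rng.shuffle(idx)
        n_lab = int(labelled_frac * len(idx))
        tasks.append({"labelled": idx[:n_lab], "unlabelled": idx[n_lab:]})
    return tasks
```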
no code implementations • NeurIPS Workshop DLDE 2021 • Avinandan Bose, Aniket Das, Yatin Dandi, Piyush Rai
A range of applications require learning image generation models whose latent space effectively captures the high-level factors of variation in the data distribution, which can be judged by its ability to interpolate between images smoothly.
1 code implementation • 26 Jul 2021 • Gargi Singh, Dhanajit Brahma, Piyush Rai, Ashutosh Modi
In this paper, we propose a new framework for fine-grained emotion prediction in text through emotion definition modeling.
no code implementations • 12 Jun 2021 • Mohammed Asad Karim, Vinay Kumar Verma, Pravendra Singh, Vinay Namboodiri, Piyush Rai
In our approach, we learn robust representations that generalize across tasks and can accommodate future classes with limited samples, without suffering from catastrophic forgetting or overfitting.
no code implementations • CVPR 2021 • Pravendra Singh, Pratik Mazumder, Piyush Rai, Vinay P. Namboodiri
Our proposed method uses weight rectifications and affine transformations in order to adapt the model to different tasks that arrive sequentially.
no code implementations • 29 Mar 2021 • Rahul Sharma, Soumya Banerjee, Dootika Vats, Piyush Rai
We present a variational inference (VI) framework that unifies and leverages sequential Monte Carlo (particle filtering) with \emph{approximate} rejection sampling to construct a flexible family of variational distributions.
1 code implementation • CVPR 2021 • Vinay Kumar Verma, Kevin J Liang, Nikhil Mehta, Piyush Rai, Lawrence Carin
However, for many such methods, the growth in the number of additional parameters becomes computationally expensive at larger scales, at times prohibitively so.
no code implementations • NeurIPS 2021 • Sakshi Varshney, Vinay Kumar Verma, Srijith P K, Lawrence Carin, Piyush Rai
Our approach is based on learning a set of global and task-specific parameters.
no code implementations • 1 Mar 2021 • Pratik Mazumder, Pravendra Singh, Piyush Rai
Our method selects very few parameters from the model for training every new set of classes instead of training the full model.
no code implementations • 1 Jan 2021 • Saiteja Utpala, Piyush Rai
We provide a detailed formal analysis of the \emph{side-effects} of Isotonic Regression when used for regression calibration.
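For background, recalibrating a regression model with isotonic regression (in the style of Kuleshov et al., 2018) fits a monotone map from predicted CDF values to their empirical frequencies; the sketch below illustrates that standard procedure, not the paper's analysis of its side-effects:

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(mu, sigma, y):
    """Fit an isotonic map R: predicted CDF value -> empirical frequency,
    from held-out Gaussian predictions (mu, sigma) and true targets y."""
    p = norm.cdf(y, loc=mu, scale=sigma)            # predicted CDF at each target
    emp = np.array([(p <= pi).mean() for pi in p])  # empirical frequency of each level
    return IsotonicRegression(y_min=0.0, y_max=1.0,
                              out_of_bounds="clip").fit(p, emp)
```

At prediction time, the fitted map is applied to the model's CDF values to produce calibrated quantiles.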
no code implementations • 1 Jan 2021 • Rahul Sharma, Soumya Banerjee, Dootika Vats, Piyush Rai
Effective variational inference crucially depends on a flexible variational family of distributions.
no code implementations • 1 Jan 2021 • Abhishek Kumar, Sunabha Chatterjee, Piyush Rai
Two notable directions among the recent advances in continual learning with neural networks are (1) variational Bayes based regularization by learning priors from previous tasks, and, (2) learning the structure of deep networks to adapt to new tasks.
1 code implementation • NeurIPS 2020 • Pravendra Singh, Vinay Kumar Verma, Pratik Mazumder, Lawrence Carin, Piyush Rai
Further, unlike many replay-based methods, our approach does not require storing data samples from the old tasks.
no code implementations • 14 Nov 2020 • Vinay Kumar Verma, Ashish Mishra, Anubha Pandey, Hema A. Murthy, Piyush Rai
We present a meta-learning based generative model for zero-shot learning (ZSL) in a challenging setting where the number of training examples from each \emph{seen} class is very few.
no code implementations • NeurIPS Workshop ICBINB 2020 • Saiteja Utpala, Piyush Rai
Deep learning models are often poorly calibrated, i.e., they may produce overconfident predictions that are wrong, implying that their uncertainty estimates are unreliable.
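Miscalibration of this kind is commonly quantified with the expected calibration error (ECE); a small sketch of the standard binned estimator, included here as background:

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=15):
    """Binned ECE: the average |accuracy - confidence| over bins,
    weighted by the fraction of samples falling in each bin.
    `correct` is a 0/1 array marking which predictions were right."""
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean()
                                     - confidences[mask].mean())
    return ece
```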
no code implementations • 15 Jun 2020 • Yatin Dandi, Homanga Bharadhwaj, Abhishek Kumar, Piyush Rai
Recent approaches, such as ALI and BiGAN frameworks, develop methods of inference of latent variables in GANs by adversarially training an image generator along with an encoder to match two joint distributions of image and latent vector pairs.
1 code implementation • 18 May 2020 • Vivek Gupta, Ankit Saw, Pegah Nokhiz, Praneeth Netrapalli, Piyush Rai, Partha Talukdar
One of the key reasons is that a longer document is likely to contain words from many different topics; hence, creating a single vector while ignoring all the topical structure is unlikely to yield an effective document representation.
1 code implementation • 3 Apr 2020 • Arindam Sarkar, Nikhil Mehta, Piyush Rai
In addition to leveraging the representational power of multiple layers of stochastic variables via the ladder VAE architecture, our framework offers the following benefits: (1) Unlike existing ladder VAE architectures based on real-valued latent variables, the gamma-distributed latent variables naturally result in non-negativity and sparsity of the learned embeddings, and facilitate their direct interpretation as membership of nodes into (possibly multiple) communities/topics; (2) A novel recognition model for our gamma ladder VAE architecture allows fast inference of node embeddings; and (3) The framework also extends naturally to incorporate node side information (features and/or labels).
no code implementations • 28 Feb 2020 • Saiteja Utpala, Piyush Rai
It is therefore desirable to have models that produce predictive uncertainty estimates that are reliable.
no code implementations • 15 Jan 2020 • Vinay Kumar Verma, Pravendra Singh, Vinay P. Namboodiri, Piyush Rai
The pruner is essentially a multitask deep neural network with binary outputs that help identify the filters from each layer of the original network that do not have any significant contribution to the model and can therefore be pruned.
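A hedged sketch of the idea: an auxiliary network emits a relaxed binary keep/prune decision per filter of the original model. The class name, descriptor input, and straight-through relaxation below are illustrative, not the paper's exact construction:

```python
import torch
import torch.nn as nn

class FilterPruner(nn.Module):
    """Illustrative pruner: maps a layer descriptor to per-filter keep
    probabilities, binarized with a straight-through estimator so the
    binary decisions remain trainable end to end."""
    def __init__(self, in_dim, num_filters):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_filters))

    def forward(self, layer_descriptor):
        probs = torch.sigmoid(self.net(layer_descriptor))
        hard = (probs > 0.5).float()
        return hard + probs - probs.detach()  # forward: binary; backward: via probs

# The mask multiplies the layer's filter outputs, so filters with mask 0
# contribute nothing and can be removed after training.
```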
no code implementations • AAAI 2020 • Pawan Kumar, Dhanajit Brahma, Harish Karnick, Piyush Rai
We apply our framework on two tasks: Sentence Ordering and Order Discrimination.
no code implementations • 17 Dec 2019 • Yatin Dandi, Aniket Das, Soumye Singhal, Vinay P. Namboodiri, Piyush Rai
The proposed model allows minor variations in content across frames while maintaining the temporal dependence through latent vectors encoding the pose or motion features.
no code implementations • 12 Dec 2019 • Karthikeyan K, Shubham Kumar Bharti, Piyush Rai
Despite the effectiveness of multitask deep neural networks (MTDNNs), there is limited theoretical understanding of how information is shared across the different tasks in an MTDNN.
1 code implementation • 8 Dec 2019 • Abhishek Kumar, Sunabha Chatterjee, Piyush Rai
Two notable directions among the recent advances in continual learning with neural networks are ($i$) variational Bayes based regularization by learning priors from previous tasks, and, ($ii$) learning the structure of deep networks to adapt to new tasks.
no code implementations • ICLR 2020 • Wenlin Wang, Hongteng Xu, Ruiyi Zhang, Wenqi Wang, Piyush Rai, Lawrence Carin
To address this, we propose a learning framework that improves collaborative filtering with a synthetic feedback loop (CF-SFL) to simulate the user feedback.
no code implementations • 17 Sep 2019 • Rahul Sharma, Abhishek Kumar, Piyush Rai
Our inference method is based on the crucial observation that $D_\infty(p||q)$ equals $\log M(\theta)$, where $M(\theta)$ is the optimal value of the RS (rejection sampling) constant for a given proposal $q_\theta(x)$.
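To spell out the observation: in rejection sampling with proposal $q_\theta$, the smallest valid acceptance constant is the supremum of the density ratio, which is exactly the exponentiated Rényi divergence of order infinity (a restatement of the definition, with notation matching the snippet):

```latex
M(\theta) \;=\; \sup_{x}\, \frac{p(x)}{q_\theta(x)}
\quad\Longrightarrow\quad
\log M(\theta) \;=\; D_\infty\!\left(p \,\|\, q_\theta\right),
```

so minimizing $D_\infty(p\|q_\theta)$ over $\theta$ is equivalent to tightening the rejection sampling constant.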
1 code implementation • 10 Sep 2019 • Vinay Kumar Verma, Dhanajit Brahma, Piyush Rai
Our proposed model yields significant improvements on standard ZSL as well as more challenging GZSL setting.
1 code implementation • 7 Jun 2019 • Varun Khare, Divyat Mahajan, Homanga Bharadhwaj, Vinay Verma, Piyush Rai
Our approach is based on end-to-end learning of the class distributions of seen classes and unseen classes.
Ranked #1 on Zero-Shot Learning on CUB-200 - 0-Shot Learning (using extra training data)
no code implementations • ICML 2019 • Nikhil Mehta, Lawrence Carin, Piyush Rai
Although we develop this framework for a particular type of SBM, namely the \emph{overlapping} stochastic blockmodel, the proposed framework can be adapted readily for other types of SBMs.
1 code implementation • 11 May 2019 • Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri
Our framework, called Play and Prune (PP), jointly prunes and fine-tunes CNN model parameters, with an adaptive pruning rate, while maintaining the model's predictive performance.
1 code implementation • 2 May 2019 • He Zhao, Piyush Rai, Lan Du, Wray Buntine, Mingyuan Zhou
Many applications, such as text modelling, high-throughput sequencing, and recommender systems, require analysing sparse, high-dimensional, and overdispersed discrete (count-valued or binary) data.
no code implementations • 18 Apr 2019 • Vinay Kumar Verma, Aakansha Mishra, Ashish Mishra, Piyush Rai
We present a probabilistic model for Sketch-Based Image Retrieval (SBIR) in which, at retrieval time, we are given sketches from novel classes that were not present at training time.
no code implementations • 13 Apr 2019 • Rajat Panda, Ankit Pensia, Nikhil Mehta, Mingyuan Zhou, Piyush Rai
We present a probabilistic framework for multi-label learning based on a deep generative model for the binary label vector associated with each observation.
1 code implementation • CVPR 2019 • Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri
We present a novel deep learning architecture in which the convolution operation leverages heterogeneous kernels.
no code implementations • 26 Nov 2018 • Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri
We present a filter correlation based model compression approach for deep convolutional neural networks.
1 code implementation • ACL 2019 • Shikhar Vashishth, Manik Bhandari, Prateek Yadav, Piyush Rai, Chiranjib Bhattacharyya, Partha Talukdar
Word embeddings have been widely adopted across several NLP applications.
no code implementations • 10 Jul 2018 • Gundeep Arora, Anupreet Porwal, Kanupriya Agarwal, Avani Samdariya, Piyush Rai
The latent feature relational model (LFRM) is a generative model for graph-structured data to learn a binary vector representation for each node in the graph.
no code implementations • 27 Jan 2018 • Ashish Mishra, Vinay Kumar Verma, M Shiva Krishna Reddy, Arulkumar S, Piyush Rai, Anurag Mittal
In particular, we assume that the distribution parameters for any action class in the visual space can be expressed as a linear combination of a set of basis vectors where the combination weights are given by the attributes of the action class.
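In symbols, the stated assumption reads (with notation chosen here for illustration):

```latex
\theta_c \;=\; \sum_{k=1}^{K} a_{c,k}\, b_k \;=\; B\, a_c,
```

where $\theta_c$ denotes the distribution parameters of action class $c$, $a_c$ its attribute vector, and $B = [b_1, \dots, b_K]$ the learned basis; an unseen class is handled by plugging its attributes into the same map.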
no code implementations • CVPR 2018 • Vinay Kumar Verma, Gundeep Arora, Ashish Mishra, Piyush Rai
Our model's ability to generate and leverage examples from unseen classes to train the classification model naturally helps to mitigate the bias towards predicting seen classes in generalized zero-shot learning settings.
no code implementations • 15 Nov 2017 • Wenlin Wang, Yunchen Pu, Vinay Kumar Verma, Kai Fan, Yizhe Zhang, Changyou Chen, Piyush Rai, Lawrence Carin
We present a deep generative model for learning to predict classes not seen at training time.
no code implementations • 18 Sep 2017 • Rahul Wadbude, Vivek Gupta, Piyush Rai, Nagarajan Natarajan, Harish Karnick, Prateek Jain
Our approach is novel in that it highlights interesting connections between label embedding methods used for multi-label learning and paragraph/document embedding methods commonly used for learning representations of text data.
1 code implementation • 15 Sep 2017 • Ankush Gupta, Arvind Agarwal, Prawaan Singh, Piyush Rai
In this paper, we address the problem of generating paraphrases automatically.
no code implementations • ICML 2017 • Vikas Jain, Nirbhay Modhe, Piyush Rai
We present a scalable, generative framework for multi-label learning with missing labels.
2 code implementations • 25 Jul 2017 • Vinay Kumar Verma, Piyush Rai
We model each class-conditional distribution as an exponential family distribution and the parameters of the distribution of each seen/unseen class are defined as functions of the respective observed class attributes.
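A minimal sketch of this idea with Gaussian class-conditionals whose parameters are a learned function of the class attributes; the regressor architecture and the Gaussian choice here are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttributeToGaussian(nn.Module):
    """Maps a class-attribute vector to the mean and log-variance of a
    Gaussian class-conditional distribution over image features."""
    def __init__(self, attr_dim, feat_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(attr_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, feat_dim)
        self.log_var = nn.Linear(hidden, feat_dim)

    def forward(self, attrs):
        h = self.body(attrs)
        return self.mu(h), self.log_var(h)

# At test time, an input feature is assigned to the (seen or unseen) class
# whose predicted Gaussian gives it the highest log-likelihood.
```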
no code implementations • ICML 2017 • Changwei Hu, Piyush Rai, Lawrence Carin
Moreover, the inference cost scales with the number of edges, which is attractive for massive but sparse networks.
no code implementations • 14 Nov 2016 • Wenlin Wang, Changyou Chen, Wenqi Wang, Piyush Rai, Lawrence Carin
Unlike most existing methods for early classification of time series data, which assume the availability of a good set of pre-defined (often hand-crafted) features, our framework can jointly perform feature learning (by learning a deep hierarchy of \emph{shapelets} capturing the salient characteristics of each time series) along with a dynamic truncation model that helps the deep feature-learning architecture focus on the early parts of each time series.
no code implementations • NeurIPS 2015 • Piyush Rai, Changwei Hu, Ricardo Henao, Lawrence Carin
We present a scalable Bayesian multi-label learning model based on learning low-dimensional label embeddings.
no code implementations • 18 Aug 2015 • Changwei Hu, Piyush Rai, Changyou Chen, Matthew Harding, Lawrence Carin
We present a Bayesian non-negative tensor factorization model for count-valued tensor data, and develop scalable inference algorithms (both batch and online) for dealing with massive tensors.
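As a sketch of the model family (the rank-$R$ CP form and gamma priors below are the standard construction for Bayesian non-negative count factorization, stated here as an illustration rather than the paper's exact model):

```latex
y_{ijk} \;\sim\; \mathrm{Poisson}\!\left(\sum_{r=1}^{R} u_{ir}\, v_{jr}\, w_{kr}\right),
\qquad u_{ir},\; v_{jr},\; w_{kr} \;\sim\; \mathrm{Gamma}(a,\, b),
```

where the gamma priors keep the factors non-negative and the Poisson likelihood is matched to count-valued entries.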
no code implementations • 18 Aug 2015 • Changwei Hu, Piyush Rai, Lawrence Carin
We present a scalable Bayesian model for low-rank factorization of massive tensors with binary observations.
no code implementations • NeurIPS 2012 • Piyush Rai, Abhishek Kumar, Hal Daume
In this paper, we present a multiple-output regression model that leverages the covariance structure of the functions (i.e., how the multiple functions are related to each other) as well as the conditional covariance structure of the outputs.
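One standard way to encode both structures, sketched here as an illustration rather than the paper's exact construction, is a matrix-variate Gaussian model:

```latex
Y \;=\; X W + E, \qquad
W \;\sim\; \mathcal{MN}\!\left(0,\; I,\; \Omega_f\right), \qquad
E \;\sim\; \mathcal{MN}\!\left(0,\; I,\; \Omega_o\right),
```

where the column covariance $\Omega_f$ couples the regression functions and $\Omega_o$ captures the conditional covariance of the outputs.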
no code implementations • NeurIPS 2011 • Abhishek Kumar, Piyush Rai, Hal Daume
In many clustering problems, we have access to multiple views of the data each of which could be individually used for clustering.
no code implementations • NeurIPS 2011 • Jiarong Jiang, Piyush Rai, Hal Daume
We consider a general inference setting for discrete probabilistic graphical models where we seek maximum a posteriori (MAP) estimates for a subset of the random variables (max nodes), marginalizing over the rest (sum nodes).
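Formally, this is the marginal MAP problem: writing $x_M$ for the max nodes and $x_S$ for the sum nodes,

```latex
\hat{x}_M \;=\; \operatorname*{arg\,max}_{x_M} \; \sum_{x_S} p\,(x_M, x_S),
```

which is generally harder than pure MAP or pure marginalization, since the max and sum operators do not commute.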
no code implementations • NeurIPS 2009 • Piyush Rai, Hal Daume
Canonical Correlation Analysis (CCA) is a useful technique for modeling dependencies between two (or more) sets of variables.
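For reference, CCA finds projection directions that maximize the correlation between the two views:

```latex
(u^\star, v^\star) \;=\; \operatorname*{arg\,max}_{u,\, v}\;
\frac{u^{\top} \Sigma_{xy}\, v}
     {\sqrt{u^{\top} \Sigma_{xx}\, u}\; \sqrt{v^{\top} \Sigma_{yy}\, v}},
```

with $\Sigma_{xx}, \Sigma_{yy}$ the within-view covariance matrices and $\Sigma_{xy}$ the cross-view covariance.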