no code implementations • ICML 2020 • Abhishek Kumar, Ben Poole
While the impact of variational inference (VI) on posterior inference in a fixed generative model is well-characterized, its role in regularizing a learned generative model when used in variational autoencoders (VAEs) is poorly understood.
no code implementations • 14 Jun 2022 • Jiaheng Wei, Zhaowei Zhu, Tianyi Luo, Ehsan Amid, Abhishek Kumar, Yang Liu
The raw training data often comes with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing).
no code implementations • CVPR 2022 • Abhishek Kumar, Oladayo S. Ajani, Swagatam Das, Rammohan Mallipeddi
To address this issue, we propose GridShift, a mode-seeking algorithm principally based on mean shift (MS) that offers a significant speedup. To accelerate, GridShift employs a grid-based approach for neighbor search, which is linear in the number of data points.
1 code implementation • 2 Jan 2022 • Kushagra Pandey, Avideep Mukherjee, Piyush Rai, Abhishek Kumar
Furthermore, we show that the proposed model can generate high-resolution samples and exhibits synthesis quality comparable to state-of-the-art models on standard benchmarks.
Ranked #6 on Image Generation on CelebA 64x64
no code implementations • 16 Dec 2021 • Giannis Daras, Wen-Sheng Chu, Abhishek Kumar, Dmitry Lagun, Alexandros G. Dimakis
We introduce a novel framework for solving inverse problems using NeRF-style generative models.
no code implementations • 26 Nov 2021 • Lik-Hang Lee, Zijun Lin, Rui Hu, Zhengya Gong, Abhishek Kumar, Tangyao Li, Sijia Li, Pan Hui
The metaverse, an enormous virtual-physical cyberspace, has brought unprecedented opportunities for artists to blend every corner of our physical surroundings with digital creativity.
no code implementations • 9 Nov 2021 • Abhishek Kumar, Ehsan Amid
However, their performance is largely dependent on the quality of the training data and often degrades in the presence of noise.
no code implementations • 25 Oct 2021 • Young D. Kwon, Jagmohan Chauhan, Abhishek Kumar, Pan Hui, Cecilia Mascolo
Our findings suggest that replay with exemplar-based schemes such as iCaRL has the best performance trade-offs, even in complex scenarios, at the expense of some storage space (a few MBs) for training examples (1% to 5%).
no code implementations • ACM Transactions on Management Information Systems 2021 • Ankit Kumar, Abhishek Kumar, Ali Kashif Bashir, Mamoon Rashid, V. D. Ambeth Kumar, Rupak Kharel
Detection of outliers or anomalies is one of the vital issues in pattern-driven data mining.
2 code implementations • 23 Jul 2021 • Abhishek Kumar, Harikrishna Narasimhan, Andrew Cotter
We consider a popular family of constrained optimization problems arising in machine learning that involve optimizing a non-decomposable evaluation metric with a certain thresholded form, while constraining another metric of interest.
no code implementations • 23 Jul 2021 • Dhruv Jawali, Abhishek Kumar, Chandra Sekhar Seelamantula
Wavelets have proven to be highly successful in several signal and image processing applications.
1 code implementation • ICCV 2021 • Min Jin Chong, Wen-Sheng Chu, Abhishek Kumar, David Forsyth
We present Retrieve in Style (RIS), an unsupervised framework for facial feature transfer and retrieval on real images.
no code implementations • Expert Systems with Applications 2021 • Abhishek Kumar, Syahrir Ridha, Narahari Marneni, Suhaib Umer Ilyas
The uncertainty in the fluid consistency index is responsible for higher variance in the calculated flow rate, while the least variation is observed due to fluid behavior index uncertainty.
1 code implementation • 17 Apr 2021 • Kevin Murphy, Abhishek Kumar, Stylianos Serghiou
Although this data is already being collected (in an aggregated, privacy-preserving way) by several health authorities, in this paper we limit ourselves to simulated data, so that we can systematically study the different factors that affect the feasibility of the approach.
no code implementations • 1 Jan 2021 • Abhishek Kumar, Sunabha Chatterjee, Piyush Rai
Two notable directions among the recent advances in continual learning with neural networks are (1) variational Bayes based regularization by learning priors from previous tasks, and, (2) learning the structure of deep networks to adapt to new tasks.
no code implementations • 17 Dec 2020 • Abhishek Kumar, Gerardo Ortiz, Philip Richerme, Babak Seradjeh
The dynamically generated magnetization current depends on the phases of complex coupling terms, with the XY interaction as the real and DMI as the imaginary part.
Quantum Gases · Mesoscale and Nanoscale Physics · Superconductivity
no code implementations • 3 Dec 2020 • Abhishek Kumar, Colin Benjamin
In this paper, we probe the topological phase transition of an FTI via the efficiency and work output of quantum Otto and quantum Stirling heat engines.
Mesoscale and Nanoscale Physics · Applied Physics · Quantum Physics
8 code implementations • ICLR 2021 • Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole
Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high-fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
Ranked #5 on Image Generation on CIFAR-10
1 code implementation • 18 Nov 2020 • Abhishek Kumar, Gunjan Verma, Chirag Rao, Ananthram Swami, Santiago Segarra
We study the problem of adaptive contention window (CW) design for random-access wireless networks.
1 code implementation • 22 Oct 2020 • Esther Robb, Wen-Sheng Chu, Abhishek Kumar, Jia-Bin Huang
We validate our method in a challenging few-shot setting of 5-100 images in the target domain.
no code implementations • 15 Jun 2020 • Yatin Dandi, Homanga Bharadhwaj, Abhishek Kumar, Piyush Rai
Recent approaches, such as the ALI and BiGAN frameworks, develop methods for inferring latent variables in GANs by adversarially training an image generator together with an encoder to match two joint distributions of image and latent-vector pairs.
no code implementations • 25 Apr 2020 • Abhishek Kumar, Trisha Mittal, Dinesh Manocha
We present MCQA, a learning-based algorithm for multimodal question answering.
no code implementations • 5 Apr 2020 • Ambrish Kumar Srivastava, Abhishek Kumar, Neeraj Misra
This study aims to assess the Indian herbal plants in the pursuit of potential COVID-19 inhibitors using in silico approaches.
no code implementations • 3 Mar 2020 • Abhishek Kumar, Benjamin Finley, Tristan Braud, Sasu Tarkoma, Pan Hui
Artificial intelligence shows promise for solving many practical societal problems in areas such as healthcare and transportation.
no code implementations • 20 Feb 2020 • Abhishek Kumar, Ben Poole, Kevin Murphy
Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference.
no code implementations • 31 Jan 2020 • Abhishek Kumar, Ben Poole
While the impact of variational inference (VI) on posterior inference in a fixed generative model is well-characterized, its role in regularizing a learned generative model when used in variational autoencoders (VAEs) is poorly understood.
1 code implementation • 8 Dec 2019 • Abhishek Kumar, Sunabha Chatterjee, Piyush Rai
Two notable directions among the recent advances in continual learning with neural networks are (i) variational Bayes based regularization by learning priors from previous tasks, and (ii) learning the structure of deep networks to adapt to new tasks.
no code implementations • 28 Nov 2019 • Abhishek Kumar, Asif Ekbal, Daisuke Kawahara, Sadao Kurohashi
Our network also boosts the performance of emotion analysis by 5 F-score points on Stance Sentiment Emotion Corpus.
no code implementations • 11 Nov 2019 • Vishal Anand, Ravi Shukla, Ashwani Gupta, Abhishek Kumar
But with the huge surge in content being posted online, it becomes increasingly difficult to filter out related videos on which they can run their ads without compromising their brand name.
1 code implementation • ICLR 2020 • Rui Shu, Yining Chen, Abhishek Kumar, Stefano Ermon, Ben Poole
Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning.
no code implementations • 17 Sep 2019 • Rahul Sharma, Abhishek Kumar, Piyush Rai
Our inference method is based on a crucial observation that $D_\infty(p||q)$ equals $\log M(\theta)$ where $M(\theta)$ is the optimal value of the RS constant for a given proposal $q_\theta(x)$.
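The stated identity can be checked numerically in the discrete case: the optimal (smallest valid) rejection-sampling constant is $M = \max_x p(x)/q(x)$, so its logarithm coincides with $D_\infty(p||q)$. A minimal sketch, assuming simple list-valued distributions; the function name is hypothetical and not from the paper's code:

```python
import math

def renyi_infinity(p, q):
    """D_inf(p||q) = log max_x p(x)/q(x) for discrete distributions
    p and q given as lists of probabilities over the same support."""
    return math.log(max(pi / qi for pi, qi in zip(p, q) if pi > 0))

p = [0.5, 0.3, 0.2]
q = [0.25, 0.25, 0.5]

# Optimal rejection-sampling constant: smallest M with p(x) <= M * q(x).
M = max(pi / qi for pi, qi in zip(p, q))

# The identity from the observation above: D_inf(p||q) = log M.
assert abs(renyi_infinity(p, q) - math.log(M)) < 1e-12
```

Here $M = 2.0$ (driven by the first outcome, where $p/q = 0.5/0.25$), so $D_\infty(p||q) = \log 2$.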
no code implementations • JEPTALNRECITAL 2019 • Patricia Chiril, Farah Benamara Zitoune, Véronique Moriceau, Marlène Coulomb-Gully, Abhishek Kumar
Social media networks have become a space where users are free to share their opinions and sentiments, which can lead to a wide spread of hateful or abusive messages that must be moderated.
no code implementations • SEMEVAL 2019 • Patricia Chiril, Farah Benamara Zitoune, Véronique Moriceau, Abhishek Kumar
The massive growth of user-generated web content through blogs, online forums and, most notably, social media networks has led to a wide spread of hateful or abusive messages that must be moderated.
no code implementations • WS 2019 • Md. Shad Akhtar, Abhishek Kumar, Asif Ekbal, Chris Biemann, Pushpak Bhattacharyya
In this paper, we propose a language-agnostic deep neural network architecture for aspect-based sentiment analysis.
no code implementations • 6 Feb 2019 • Akshay Rangamani, Nam H. Nguyen, Abhishek Kumar, Dzung Phan, Sang H. Chin, Trac D. Tran
It has been empirically observed that the flatness of minima obtained from training deep networks seems to correlate with better generalization.
no code implementations • 30 Nov 2018 • Vidya Muthukumar, Tejaswini Pedapati, Nalini Ratha, Prasanna Sattigeri, Chai-Wah Wu, Brian Kingsbury, Abhishek Kumar, Samuel Thomas, Aleksandra Mojsilovic, Kush R. Varshney
Recent work shows unequal performance of commercial face classification services in the gender classification task across intersectional groups defined by skin type and gender.
3 code implementations • CVPR 2019 • Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, Rogerio Feris
Transfer learning, which allows a source task to affect the inductive bias of the target task, is widely used in computer vision.
1 code implementation • 14 Nov 2018 • Soumya Sanyal, Janakiraman Balachandran, Naganand Yadati, Abhishek Kumar, Padmini Rajagopalan, Suchismita Sanyal, Partha Talukdar
Some of the major challenges involved in developing such models are (i) limited availability of materials data compared to other fields, and (ii) the lack of a universal descriptor of materials for predicting their various properties.
Ranked #2 on Band Gap on Materials Project
no code implementations • NeurIPS 2018 • Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, William T. Freeman, Gregory Wornell
Deep neural networks, trained with large amounts of labeled data, can fail to generalize well when tested on examples from a target domain whose distribution differs from that of the training data, referred to as the source domain.
1 code implementation • NeurIPS 2018 • Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Rogerio Feris, Abhishek Kumar, Raja Giryes, Alex M. Bronstein
Our approach is based on a modified auto-encoder, denoted Delta-encoder, that learns to synthesize new samples for an unseen category just by seeing a few examples from it.
no code implementations • NAACL 2018 • Abhishek Kumar, Daisuke Kawahara, Sadao Kurohashi
We propose a novel two-layered attention network based on Bidirectional Long Short-Term Memory for sentiment analysis.
no code implementations • ICLR 2018 • Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng
Many practical reinforcement learning problems contain catastrophic states that the optimal policy visits infrequently or never.
no code implementations • 26 Nov 2017 • Igor Melnyk, Cicero Nogueira dos Santos, Kahini Wadhawan, Inkit Padhi, Abhishek Kumar
Text attribute transfer using non-parallel data requires methods that can perform disentanglement of content and linguistic attributes.
1 code implementation • CVPR 2018 • Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications.
no code implementations • 21 Nov 2017 • Hang Shao, Abhishek Kumar, P. Thomas Fletcher
Deep generative models learn a mapping from a low dimensional latent space to a high-dimensional data space.
1 code implementation • ICLR 2018 • Abhishek Kumar, Prasanna Sattigeri, Avinash Balakrishnan
Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc.
no code implementations • EMNLP 2017 • Md. Shad Akhtar, Abhishek Kumar, Deepanway Ghosal, Asif Ekbal, Pushpak Bhattacharyya
In this paper, we propose a novel method for combining deep learning and classical feature based models using a Multi-Layer Perceptron (MLP) network for financial sentiment analysis.
no code implementations • SEMEVAL 2017 • Abhishek Kumar, Abhishek Sethi, Md. Shad Akhtar, Asif Ekbal, Chris Biemann, Pushpak Bhattacharyya
The other system was based on Support Vector Regression using word embeddings, lexicon features, and PMI scores as features.
no code implementations • 1 Aug 2017 • Ramesh Nallapati, Igor Melnyk, Abhishek Kumar, Bo-Wen Zhou
We present a new topic model that generates documents by sampling a topic for one whole sentence at a time, and generating the words in the sentence using an RNN decoder that is conditioned on the topic of the sentence.
no code implementations • NeurIPS 2017 • Abhishek Kumar, Prasanna Sattigeri, P. Thomas Fletcher
Semi-supervised learning methods using Generative Adversarial Networks (GANs) have shown promising empirical success recently.
no code implementations • 6 Dec 2016 • Anant Raj, Abhishek Kumar, Youssef Mroueh, P. Thomas Fletcher, Bernhard Schölkopf
We consider transformations that form a group and propose an approach based on kernel methods to derive local group invariant representations.
1 code implementation • CVPR 2017 • Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi, Rogerio Feris
Multi-task learning aims to improve generalization performance of multiple prediction tasks by appropriately sharing relevant information across them.
4 code implementations • CVPR 2017 • Shuangfei Zhai, Hui Wu, Abhishek Kumar, Yu Cheng, Yongxi Lu, Zhongfei Zhang, Rogerio Feris
We view the pooling operation in CNNs as a two-step procedure: first, a pooling window (e.g., $2\times 2$) slides over the feature map with stride one, which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner.
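The two-step view of pooling described above can be sketched in NumPy. This is a toy illustration of the decomposition (dense stride-1 windows, then deterministic top-left selection), not the paper's learned pooling layer; the function name is hypothetical, and the input size is assumed divisible by the window size:

```python
import numpy as np

def two_step_max_pool(fmap, k=2):
    """Two-step view of k x k max pooling on a 2-D feature map.

    Step 1: slide a k x k max window with stride one, keeping the
            spatial resolution (output is (h-k+1) x (w-k+1)).
    Step 2: downsample by keeping one response per non-overlapping
            k x k region, here the top-left one (deterministic).
    """
    h, w = fmap.shape
    dense = np.empty((h - k + 1, w - k + 1))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            dense[i, j] = fmap[i:i + k, j:j + k].max()
    # Top-left selection: every k-th response in each direction.
    return dense[::k, ::k]

x = np.arange(16, dtype=float).reshape(4, 4)
out = two_step_max_pool(x)  # matches standard 2x2 max pooling of x
```

With max as the window operation and top-left selection, the two steps recover standard non-overlapping max pooling, since the selected dense responses are exactly the maxima of the non-overlapping windows; the decomposition matters when the selection step is made stochastic or learned.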
no code implementations • 3 Nov 2016 • Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng
We introduce intrinsic fear (IF), a learned reward shaping that guards DRL agents against periodic catastrophes.
no code implementations • 19 Jun 2015 • Abhishek Kumar, Suresh Chandra Gupta
Cluster analysis is one of the primary data analysis techniques in data mining, and K-means is one of the most commonly used partitioning clustering algorithms.
no code implementations • 27 Oct 2014 • Nicolas Gillis, Abhishek Kumar
Second, we propose an exact algorithm (that is, an algorithm that finds an optimal solution), also based on the SVD, for a certain class of matrices (including nonnegative irreducible matrices) from which we derive an initialization for matrices not belonging to that class.
no code implementations • 27 Dec 2013 • Abhishek Kumar, Vikas Sindhwani
Recently, a family of tractable NMF algorithms has been proposed under the assumption that the data matrix satisfies a separability condition (Donoho & Stodden, 2003; Arora et al., 2012).
no code implementations • NeurIPS 2012 • Piyush Rai, Abhishek Kumar, Hal Daume
In this paper, we present a multiple-output regression model that leverages the covariance structure of the functions (i.e., how the multiple functions are related to each other) as well as the conditional covariance structure of the outputs.
no code implementations • 27 Jun 2012 • Abhishek Kumar, Hal Daume III
In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks.
no code implementations • NeurIPS 2011 • Abhishek Kumar, Piyush Rai, Hal Daume
In many clustering problems, we have access to multiple views of the data each of which could be individually used for clustering.
no code implementations • NeurIPS 2010 • Abhishek Kumar, Avishek Saha, Hal Daume
This paper presents a co-regularization based approach to semi-supervised domain adaptation.