Search Results for author: Himanshu Asnani

Found 12 papers, 7 papers with code

ClusterGAN : Latent Space Clustering in Generative Adversarial Networks

7 code implementations • 10 Sep 2018 • Sudipto Mukherjee, Himanshu Asnani, Eugene Lin, Sreeram Kannan

While one can potentially exploit the latent-space back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space.
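ClusterGAN's remedy is to build cluster structure into the latent space directly, by sampling the generator input from a mixture of discrete (one-hot) and continuous variables rather than a single Gaussian. A minimal sketch of such a latent sampler (the dimension and noise-scale choices here are illustrative, not the paper's settings):

```python
import numpy as np

def sample_clustered_latent(batch, n_clusters=10, cont_dim=30, sigma=0.1, rng=None):
    """Sample latents as a one-hot cluster code concatenated with Gaussian noise."""
    rng = np.random.default_rng(rng)
    ks = rng.integers(0, n_clusters, size=batch)             # cluster label per sample
    one_hot = np.eye(n_clusters)[ks]                         # discrete part: encodes the cluster
    noise = sigma * rng.standard_normal((batch, cont_dim))   # continuous part: within-cluster variation
    return np.concatenate([one_hot, noise], axis=1), ks

z, ks = sample_clustered_latent(8)
print(z.shape)  # (8, 40)
```

Because every generated sample carries an explicit cluster code, back-projecting data into this latent space recovers cluster assignments, unlike a plain Gaussian prior.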

Clustering

Turbo Autoencoder: Deep learning based channel codes for point-to-point communication channels

1 code implementation • NeurIPS 2019 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Designing codes that combat the noise in a communication medium has remained a significant area of research in information theory as well as wireless communications.

DeepTurbo: Deep Turbo Decoder

1 code implementation • 6 Mar 2019 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

We focus on Turbo codes and propose DeepTurbo, a novel deep learning based architecture for Turbo decoding.

CCMI : Classifier based Conditional Mutual Information Estimation

1 code implementation • 5 Jun 2019 • Sudipto Mukherjee, Himanshu Asnani, Sreeram Kannan

Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z.
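For reference, CMI admits the standard equivalent forms (textbook definitions, not specific to this paper):

```latex
I(X;Y \mid Z) \;=\; H(X \mid Z) - H(X \mid Y, Z)
\;=\; \mathbb{E}_{p(x,y,z)}\!\left[\log \frac{p(x,y \mid z)}{p(x \mid z)\, p(y \mid z)}\right],
```

with $I(X;Y \mid Z) = 0$ if and only if $X$ and $Y$ are conditionally independent given $Z$, which is what makes CMI a natural measure of conditional dependence.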

Feature Selection • Mutual Information Estimation • +2

Mimic and Classify : A meta-algorithm for Conditional Independence Testing

1 code implementation • 25 Jun 2018 • Rajat Sen, Karthikeyan Shanmugam, Himanshu Asnani, Arman Rahimzamani, Sreeram Kannan

Given independent samples generated from the joint distribution $p(\mathbf{x},\mathbf{y},\mathbf{z})$, we study the problem of Conditional Independence Testing (CI-Testing), i.e., whether the joint equals the CI distribution $p^{CI}(\mathbf{x},\mathbf{y},\mathbf{z})= p(\mathbf{z}) p(\mathbf{y}|\mathbf{z})p(\mathbf{x}|\mathbf{z})$ or not.
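The "mimic" step requires samples that follow $p^{CI}$ rather than the joint. The paper trains generative models for this; as a crude illustrative stand-in (not the paper's method), one can approximate such samples with a nearest-neighbour swap of $\mathbf{x}$ conditioned on $\mathbf{z}$, which breaks the direct $\mathbf{x}$-$\mathbf{y}$ link while preserving the $\mathbf{x}$-$\mathbf{z}$ link:

```python
import numpy as np

def mimic_ci_samples(x, y, z):
    """Approximate samples from p(z) p(y|z) p(x|z): replace each x with the x
    of the sample whose z is closest (1-nearest-neighbour swap in z-space)."""
    n = len(x)
    zf = np.asarray(z, dtype=float).reshape(n, -1)
    d = np.linalg.norm(zf[:, None, :] - zf[None, :, :], axis=-1)  # pairwise z-distances
    np.fill_diagonal(d, np.inf)      # never match a sample with itself
    nn = d.argmin(axis=1)            # nearest neighbour of each sample in z
    return np.asarray(x)[nn], y, z
```

A classifier that cannot distinguish the original triples from the mimicked ones provides evidence for conditional independence, which is the "classify" half of the meta-algorithm.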

ScRAE: Deterministic Regularized Autoencoders with Flexible Priors for Clustering Single-cell Gene Expression Data

1 code implementation • 16 Jul 2021 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP

The basic idea in RAEs is to learn a non-linear mapping from the high-dimensional data space to a low-dimensional latent space and vice versa, while simultaneously imposing a distributional prior on the latent space, which has a regularizing effect.
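The two-term structure of the RAE objective can be sketched with a toy linear encoder/decoder pair (linear maps and an L2 latent penalty stand in for the paper's neural networks and flexible prior; all names and constants here are illustrative):

```python
import numpy as np

def rae_loss(x, enc_w, dec_w, lam=0.1):
    """Toy linear RAE objective: reconstruction error plus a deterministic
    regularizer pulling latent codes toward a standard-normal-like prior."""
    z = x @ enc_w                       # encode: data space -> latent space
    x_hat = z @ dec_w                   # decode: latent space -> data space
    recon = np.mean((x - x_hat) ** 2)   # reconstruction term
    prior_reg = np.mean(z ** 2)         # latent-norm penalty (Gaussian-prior surrogate)
    return recon + lam * prior_reg
```

The weight `lam` trades reconstruction fidelity against how strongly the codes are pushed toward the prior; the paper's contribution is in making that prior flexible rather than fixed.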

Clustering

LEARN Codes: Inventing Low-latency Codes via Recurrent Neural Networks

1 code implementation • 30 Nov 2018 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Designing channel codes under low-latency constraints is one of the most demanding requirements in 5G standards.

MaskAAE: Latent space optimization for Adversarial Auto-Encoders

no code implementations • 10 Dec 2019 • Arnab Kumar Mondal, Sankalan Pal Chowdhury, Aravind Jayendran, Parag Singla, Himanshu Asnani, Prathosh AP

The field of neural generative models is dominated by the highly successful Generative Adversarial Networks (GANs) despite their challenges, such as training instability and mode collapse.

C-MI-GAN : Estimation of Conditional Mutual Information using MinMax formulation

no code implementations • 17 May 2020 • Arnab Kumar Mondal, Arnab Bhattacharya, Sudipto Mukherjee, Prathosh AP, Sreeram Kannan, Himanshu Asnani

Estimation of information theoretic quantities such as mutual information and its conditional variant has drawn interest in recent times owing to their multifaceted applications.

To Regularize or Not To Regularize? The Bias Variance Trade-off in Regularized AEs

no code implementations • 10 Jun 2020 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP

Specifically, we consider the class of RAEs with deterministic Encoder-Decoder pairs, Wasserstein Auto-Encoders (WAE), and show that having a fixed prior distribution, a priori, oblivious to the dimensionality of the 'true' latent space, will lead to the infeasibility of the optimization problem considered.

Submodular Combinatorial Information Measures with Applications in Machine Learning

no code implementations • 27 Jun 2020 • Rishabh Iyer, Ninad Khargonkar, Jeff Bilmes, Himanshu Asnani

In this paper, we study combinatorial information measures that generalize independence, (conditional) entropy, (conditional) mutual information, and total correlation defined over sets of (not necessarily random) variables.
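The generalization replaces the Shannon entropy with a submodular set function $f$ over a ground set of variables; the derived quantities then take the familiar forms (stated here as the standard constructions in this line of work):

```latex
H_f(A \mid B) \;=\; f(A \cup B) - f(B),
\qquad
I_f(A; B) \;=\; f(A) + f(B) - f(A \cup B),
```

which reduce to the usual conditional entropy and mutual information when $f$ is the joint entropy of a set of random variables, but remain well-defined for arbitrary submodular $f$ and hence for non-random ground sets.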

BIG-bench Machine Learning • Clustering • +1
