1 code implementation • 25 Jun 2018 • Rajat Sen, Karthikeyan Shanmugam, Himanshu Asnani, Arman Rahimzamani, Sreeram Kannan
Given independent samples generated from the joint distribution $p(\mathbf{x},\mathbf{y},\mathbf{z})$, we study the problem of Conditional Independence Testing (CI-Testing), i.e., whether the joint equals the CI distribution $p^{CI}(\mathbf{x},\mathbf{y},\mathbf{z})= p(\mathbf{z}) p(\mathbf{y}|\mathbf{z})p(\mathbf{x}|\mathbf{z})$ or not.
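The CI factorization above can be illustrated numerically: from any joint table $p(\mathbf{x},\mathbf{y},\mathbf{z})$ one can construct $p^{CI}$ and check it is itself a valid joint with the same $\mathbf{z}$-marginal. A minimal numpy sketch; the 2x2x2 table is synthetic, not from the paper:

```python
import numpy as np

# Synthetic 2x2x2 joint table p indexed as [x, y, z]; purely illustrative.
rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()

p_z = p.sum(axis=(0, 1))   # p(z)
p_xz = p.sum(axis=1)       # p(x, z)
p_yz = p.sum(axis=0)       # p(y, z)

# p^CI(x, y, z) = p(z) p(x|z) p(y|z) = p(x, z) p(y, z) / p(z)
p_ci = np.einsum('xz,yz,z->xyz', p_xz, p_yz, 1.0 / p_z)
```

`p_ci` sums to one and keeps the marginals $p(\mathbf{x},\mathbf{z})$ and $p(\mathbf{y},\mathbf{z})$ intact; only the dependence between $\mathbf{x}$ and $\mathbf{y}$ given $\mathbf{z}$ is destroyed, which is exactly what a CI test must detect.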
7 code implementations • 10 Sep 2018 • Sudipto Mukherjee, Himanshu Asnani, Eugene Lin, Sreeram Kannan
While one can potentially exploit the latent-space back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space.
1 code implementation • 30 Nov 2018 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath
Designing channel codes under low-latency constraints is one of the most demanding requirements in 5G standards.
1 code implementation • 6 Mar 2019 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath
We focus on Turbo codes and propose DeepTurbo, a novel deep learning based architecture for Turbo decoding.
1 code implementation • 5 Jun 2019 • Sudipto Mukherjee, Himanshu Asnani, Sreeram Kannan
Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z.
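For discrete variables with a known joint table, CMI has the closed form $I(X;Y|Z)=\sum p(x,y,z)\log\frac{p(x,y,z)\,p(z)}{p(x,z)\,p(y,z)}$; the paper's contribution is estimating it from samples, but a plug-in sketch for the exact quantity (function name illustrative) is:

```python
import numpy as np

def conditional_mutual_information(p):
    """Plug-in I(X;Y|Z) in nats for a discrete joint table p indexed as [x, y, z]."""
    p_z = p.sum(axis=(0, 1), keepdims=True)
    p_xz = p.sum(axis=1, keepdims=True)
    p_yz = p.sum(axis=0, keepdims=True)
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = p * p_z / (p_xz * p_yz)
    mask = p > 0  # zero-probability cells contribute nothing
    return float(np.sum(p[mask] * np.log(ratio[mask])))

# Example: X = Y exactly, Z independent of both -> I(X;Y|Z) = H(X) = log 2
p_dep = np.zeros((2, 2, 2))
p_dep[0, 0, :] = 0.25
p_dep[1, 1, :] = 0.25
```

For a table that factorizes as $p(z)p(x|z)p(y|z)$ the estimate is exactly zero, matching the CI null.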
1 code implementation • NeurIPS 2019 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath
Designing codes that combat the noise in a communication medium has remained a significant area of research in information theory as well as wireless communications.
no code implementations • 10 Dec 2019 • Arnab Kumar Mondal, Sankalan Pal Chowdhury, Aravind Jayendran, Parag Singla, Himanshu Asnani, Prathosh AP
The field of neural generative models is dominated by the highly successful Generative Adversarial Networks (GANs) despite their challenges, such as training instability and mode collapse.
no code implementations • 17 May 2020 • Arnab Kumar Mondal, Arnab Bhattacharya, Sudipto Mukherjee, Prathosh AP, Sreeram Kannan, Himanshu Asnani
Estimation of information theoretic quantities such as mutual information and its conditional variant has drawn interest in recent times owing to their multifaceted applications.
no code implementations • 10 Jun 2020 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP
Specifically, we consider the class of RAEs with deterministic Encoder-Decoder pairs, Wasserstein Auto-Encoders (WAE), and show that having a fixed prior distribution, a priori, oblivious to the dimensionality of the 'true' latent space, will lead to the infeasibility of the optimization problem considered.
no code implementations • 27 Jun 2020 • Rishabh Iyer, Ninad Khargonkar, Jeff Bilmes, Himanshu Asnani
In this paper, we study combinatorial information measures that generalize independence, (conditional) entropy, (conditional) mutual information, and total correlation defined over sets of (not necessarily random) variables.
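One concrete flavor of such a set-based measure pairs a submodular function $f$ with the mutual-information analogue $I_f(A;B) = f(A) + f(B) - f(A \cup B)$. A hedged sketch using a coverage function (the set family and function names are illustrative, not the paper's constructions):

```python
def coverage(sets, A):
    """f(A): number of ground elements covered by the subsets indexed by A (submodular)."""
    covered = set()
    for i in A:
        covered |= sets[i]
    return len(covered)

def set_mutual_information(sets, A, B):
    # I_f(A; B) = f(A) + f(B) - f(A ∪ B): large when A and B cover overlapping elements
    return coverage(sets, A) + coverage(sets, B) - coverage(sets, A | B)

# Illustrative ground set {1,...,4} covered by three subsets
family = {0: {1, 2}, 1: {2, 3}, 2: {4}}
```

Here `set_mutual_information(family, {0}, {1})` is positive because subsets 0 and 1 both cover element 2, while `set_mutual_information(family, {0}, {2})` is zero since their coverage is disjoint, mirroring how mutual information vanishes under independence.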
no code implementations • 12 Oct 2020 • Vishal Kaushal, Suraj Kothawade, Ganesh Ramakrishnan, Jeff Bilmes, Himanshu Asnani, Rishabh Iyer
We study submodular information measures as a rich framework for generic, query-focused, privacy sensitive, and update summarization tasks.
1 code implementation • 16 Jul 2021 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP
The basic idea in RAEs is to learn a non-linear mapping from the high-dimensional data space to a low-dimensional latent space and vice-versa, simultaneously imposing a distributional prior on the latent space, which brings in a regularization effect.
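The mapping-plus-prior idea can be sketched with a toy linear autoencoder in numpy; the random weights, the moment-matching penalty, and the weight 0.1 are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # toy high-dimensional data
W_enc = rng.normal(size=(5, 2)) * 0.1   # encoder: data space -> latent space
W_dec = rng.normal(size=(2, 5)) * 0.1   # decoder: latent space -> data space

Z = X @ W_enc                           # latent codes
X_hat = Z @ W_dec                       # reconstruction ("vice-versa" mapping)

recon = float(np.mean((X - X_hat) ** 2))
# Regularizer nudging latent moments toward a standard-normal prior
mu, sigma = Z.mean(axis=0), Z.std(axis=0)
prior_penalty = float(mu @ mu + np.sum((sigma - 1.0) ** 2))
loss = recon + 0.1 * prior_penalty      # reconstruction + distributional prior
```

Training would minimize `loss` over the encoder/decoder weights; the prior term is what distinguishes an RAE from a plain autoencoder.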