Search Results for author: Sahil Sidheekh

Found 8 papers, 3 papers with code

Building Expressive and Tractable Probabilistic Generative Models: A Review

no code implementations • 1 Feb 2024 • Sahil Sidheekh, Sriraam Natarajan

We present a comprehensive survey of the advancements and techniques in the field of tractable probabilistic generative modeling, primarily focusing on Probabilistic Circuits (PCs).
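Probabilistic circuits trade off expressiveness for tractable inference: sums (mixtures) over products of simple leaf distributions support exact density evaluation and exact marginalization. A minimal, hypothetical sketch (not from the survey; function names and the two-component mixture are illustrative):

```python
import math

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density, used as a PC leaf distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pc_density(x1, x2, components):
    """Sum node over product nodes: p(x1, x2) = sum_k w_k N(x1) N(x2)."""
    return sum(w * gauss_pdf(x1, m1, s1) * gauss_pdf(x2, m2, s2)
               for (w, (m1, s1), (m2, s2)) in components)

def pc_marginal_x1(x1, components):
    """Exact marginalization of x2: each Gaussian leaf integrates to 1."""
    return sum(w * gauss_pdf(x1, m1, s1)
               for (w, (m1, s1), _) in components)

# illustrative two-component circuit: (weight, (mu1, sigma1), (mu2, sigma2))
mixture = [(0.3, (0.0, 1.0), (1.0, 0.5)),
           (0.7, (2.0, 0.8), (-1.0, 1.0))]
print(pc_density(0.5, 0.0, mixture))
print(pc_marginal_x1(0.5, mixture))
```

The marginal is computed in one pass over the circuit, with no numerical integration — the structural property that makes PCs tractable.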

VQ-Flows: Vector Quantized Local Normalizing Flows

no code implementations • 22 Mar 2022 • Sahil Sidheekh, Chris B. Dock, Tushar Jain, Radu Balan, Maneesh K. Singh

Normalizing flows provide an elegant approach to generative modeling that allows for efficient sampling and exact density evaluation of unknown data distributions.
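The "efficient sampling and exact density evaluation" property comes from the change-of-variables formula. A hypothetical one-dimensional affine flow (parameters `A`, `B` are illustrative, not from the paper) makes this concrete:

```python
import math
import random

# Flow x = f(z) = A*z + B with a standard-normal base distribution.
# Sampling is one forward pass; the exact density follows from
#   log p_X(x) = log p_Z(f^{-1}(x)) - log|det df/dz| = log p_Z(z) - log|A|.
A, B = 2.0, -1.0

def log_base(z):
    """Standard-normal log-density of the base variable z."""
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def sample():
    """Efficient sampling: push a base sample through the flow."""
    return A * random.gauss(0.0, 1.0) + B

def log_density(x):
    """Exact log-density: invert the flow, correct by the log-Jacobian."""
    z = (x - B) / A
    return log_base(z) - math.log(abs(A))
```

Deep flows compose many such invertible layers; the log-Jacobian terms simply add up, so exactness is preserved at every depth.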

Task Attended Meta-Learning for Few-Shot Learning

no code implementations • 20 Jun 2021 • Aroof Aimen, Sahil Sidheekh, Narayanan C. Krishnan

Popular approaches to meta-learning (ML) either learn a generalizable initial model or a generic parametric optimizer through episodic training.

Few-Shot Learning

On Characterizing GAN Convergence Through Proximal Duality Gap

1 code implementation • 11 May 2021 • Sahil Sidheekh, Aroof Aimen, Narayanan C. Krishnan

Finally, we validate experimentally the usefulness of proximal duality gap for monitoring and influencing GAN training.

Stress Testing of Meta-learning Approaches for Few-shot Learning

no code implementations • 21 Jan 2021 • Aroof Aimen, Sahil Sidheekh, Vineet Madan, Narayanan C. Krishnan

Our results show a quick degradation in the performance of initialization strategies for ML (MAML, TAML, and MetaSGD), while surprisingly, approaches that use an optimization strategy (MetaLSTM) perform significantly better.

Few-Shot Learning

Learning Neural Networks on SVD Boosted Latent Spaces for Semantic Classification

1 code implementation • 3 Jan 2021 • Sahil Sidheekh

The availability of large amounts of data and abundant computational power have made deep learning models popular for text classification and sentiment analysis.

General Classification • Sentiment Analysis +2
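The title's idea of an "SVD boosted latent space" can be sketched as projecting a term-document matrix onto a truncated-SVD subspace before classification. A minimal, hypothetical illustration (matrix sizes, `k`, and function names are assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 500))  # 100 documents, 500 term features (illustrative)

def svd_latent(X, k):
    """Return rank-k latent document features U_k * S_k and the basis V_k^T."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * S[:k], Vt[:k]

Z, basis = svd_latent(X, k=50)  # 100 x 50 latent representations
# Z would then be the input to a (much smaller) neural classifier.
print(Z.shape)
```

The dimensionality reduction shrinks the classifier's input layer while retaining the directions of largest variance in the corpus.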

On Duality Gap as a Measure for Monitoring GAN Training

1 code implementation • 12 Dec 2020 • Sahil Sidheekh, Aroof Aimen, Vineet Madan, Narayanan C. Krishnan

Further, we show that our estimate, with its ability to identify model convergence/divergence, is a potential performance measure that can be used to tune the hyperparameters of a GAN.

Generative Adversarial Network
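The duality gap invoked above is the standard quantity from the minimax view of GAN training (stated here as the usual definition, not quoted from the paper):

```latex
% Duality gap of a GAN objective V(G, D) at a point (G, D):
\mathrm{DG}(G, D) \;=\; \max_{D'} V(G, D') \;-\; \min_{G'} V(G', D)
% DG(G, D) >= 0 always, and DG(G, D) = 0 iff (G, D) is a saddle
% point of V, i.e. a (pure) Nash equilibrium of the game.
```

In practice the inner max and min are approximated by a few optimizer steps from the current discriminator and generator, which is what makes the estimate usable as a training-time monitor.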
