Search Results for author: Avishek Joey Bose

Found 18 papers, 12 papers with code

Iterated Denoising Energy Matching for Sampling from Boltzmann Densities

1 code implementation • 9 Feb 2024 • Tara Akhound-Sadegh, Jarrid Rector-Brooks, Avishek Joey Bose, Sarthak Mittal, Pablo Lemos, Cheng-Hao Liu, Marcin Sendera, Siamak Ravanbakhsh, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Alexander Tong

Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science.

Denoising • Efficient Exploration
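The entry above concerns sampling from unnormalized densities such as Boltzmann distributions. As context (not the paper's method, which is a denoising-based sampler), the classical baseline for this setting is Markov chain Monte Carlo; a minimal random-walk Metropolis-Hastings sketch, which needs the target log-density only up to an additive constant:

```python
import math
import random

def metropolis_hastings(log_density, x0, n_steps, step_size=0.5, seed=0):
    """Sample from an unnormalized 1-D density via random-walk Metropolis-Hastings.

    `log_density` need only be known up to an additive constant, which is
    exactly the setting of unnormalized (e.g. Boltzmann) densities.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Accept with probability min(1, p(proposal) / p(x)), in log space.
        if math.log(rng.random() + 1e-300) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, known only up to its normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
```

Note that MCMC produces *correlated* samples; the statistically *independent* samples the abstract asks for are precisely what makes the problem hard and motivates amortized samplers.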

On the Stability of Iterative Retraining of Generative Models on their own Data

1 code implementation • 30 Sep 2023 • Quentin Bertrand, Avishek Joey Bose, Alexandre Duplessis, Marco Jiralerspong, Gauthier Gidel

In this paper, we develop a framework to rigorously study the impact of training generative models on mixed datasets -- from classical training on real data to self-consuming generative models trained on purely synthetic data.

Feature Likelihood Divergence: Evaluating the Generalization of Generative Models Using Samples

1 code implementation • NeurIPS 2023 • Marco Jiralerspong, Avishek Joey Bose, Ian Gemp, Chongli Qin, Yoram Bachrach, Gauthier Gidel

The past few years have seen impressive progress in the development of deep generative models capable of producing high-dimensional, complex, and photo-realistic data.

Density Estimation

Riemannian Diffusion Models

no code implementations • 16 Aug 2022 • Chin-wei Huang, Milad Aghajohari, Avishek Joey Bose, Prakash Panangaden, Aaron Courville

In this work, we generalize continuous-time diffusion models to arbitrary Riemannian manifolds and derive a variational framework for likelihood estimation.

Image Generation

Equivariant Finite Normalizing Flows

no code implementations • 16 Oct 2021 • Avishek Joey Bose, Marcus Brubaker, Ivan Kobyzev

Generative modeling seeks to uncover the underlying factors that give rise to observed data; these factors can often be modeled as natural symmetries that manifest themselves through invariances and equivariances under certain transformation laws.

Neural Path Hunter: Reducing Hallucination in Dialogue Systems via Path Grounding

1 code implementation • EMNLP 2021 • Nouha Dziri, Andrea Madotto, Osmar Zaiane, Avishek Joey Bose

Dialogue systems powered by large pre-trained language models (LM) exhibit an innate ability to deliver fluent and natural-looking responses.

Hallucination

Online Adversarial Attacks

1 code implementation • ICLR 2022 • Andjela Mladenovic, Avishek Joey Bose, Hugo Berard, William L. Hamilton, Simon Lacoste-Julien, Pascal Vincent, Gauthier Gidel

Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream.

Adversarial Attack

Structure Aware Negative Sampling in Knowledge Graphs

no code implementations • EMNLP 2020 • Kian Ahrabian, Aarash Feizi, Yasmin Salehi, William L. Hamilton, Avishek Joey Bose

Learning low-dimensional representations for entities and relations in knowledge graphs using contrastive estimation represents a scalable and effective method for inferring connectivity patterns.

Contrastive Learning • Knowledge Graphs
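The snippet above refers to contrastive estimation of knowledge-graph embeddings, which scores a true triple against corrupted (negative) triples. A minimal sketch using a TransE-style score and a *uniform* negative sampler for illustration; the paper's contribution is replacing this uniform sampler with structure-aware negatives drawn from a node's graph neighbourhood:

```python
import random

def transe_margin_loss(emb, triple, entities, margin=1.0, seed=0):
    """Margin-based contrastive loss for one knowledge-graph triple.

    Uses the TransE score ||h + r - t||_1 (smaller = more plausible) and a
    generic uniform tail-corruption negative sampler; this is illustrative
    only, not the structure-aware sampler proposed in the paper above.
    """
    def score(h, r, t):
        return sum(abs(a + b - c) for a, b, c in zip(emb[h], emb[r], emb[t]))

    rng = random.Random(seed)
    h, r, t = triple
    t_neg = rng.choice([e for e in entities if e != t])  # corrupt the tail
    # Hinge loss: push the positive score below the negative by `margin`.
    return max(0.0, margin + score(h, r, t) - score(h, r, t_neg))

# Toy 1-D embeddings where the positive triple holds exactly (h + r == t).
emb = {"a": [0.0], "r": [1.0], "b": [1.0]}
loss = transe_margin_loss(emb, ("a", "r", "b"), entities=["a", "b"])
```

The quality of the negatives drives the gradient signal here, which is why the choice of negative sampler matters.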

Adversarial Example Games

1 code implementation • NeurIPS 2020 • Avishek Joey Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton

We introduce Adversarial Example Games (AEG), a framework that models the crafting of adversarial examples as a min-max game between a generator of attacks and a classifier.

Latent Variable Modelling with Hyperbolic Normalizing Flows

1 code implementation • ICML 2020 • Avishek Joey Bose, Ariella Smofsky, Renjie Liao, Prakash Panangaden, William L. Hamilton

One effective solution is the use of normalizing flows to construct flexible posterior distributions.

Density Estimation • Variational Inference
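The snippet above mentions normalizing flows as flexible posteriors. The mechanism underneath every flow is the change-of-variables formula; a minimal sketch using the simplest possible flow, a single affine map on Euclidean space (the paper itself works with hyperbolic flows, which this does not cover):

```python
import math

def affine_flow_log_prob(y, scale, shift):
    """Log-density of y under the affine flow y = scale * x + shift, x ~ N(0, 1).

    Change-of-variables formula for normalizing flows:
        log p(y) = log p_base(f^{-1}(y)) - log |det df/dx|,
    which for a 1-D affine map reduces to subtracting log|scale|.
    """
    x = (y - shift) / scale                              # invert the flow
    log_base = -0.5 * (x * x + math.log(2 * math.pi))    # standard normal log-pdf
    return log_base - math.log(abs(scale))

# This flow's density matches a closed-form Gaussian N(shift, scale^2).
lp = affine_flow_log_prob(2.0, scale=3.0, shift=2.0)
```

Expressive flows stack many such invertible maps, keeping the log-determinant tractable at every layer.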

Improving Exploration in Soft-Actor-Critic with Normalizing Flows Policies

1 code implementation • 6 Jun 2019 • Patrick Nadeem Ward, Ariella Smofsky, Avishek Joey Bose

Deep Reinforcement Learning (DRL) algorithms for continuous action spaces are known to be brittle with respect to hyperparameters as well as sample inefficient.

Reinforcement Learning (RL)

Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling

no code implementations • 26 May 2019 • Avishek Joey Bose, Andre Cianflone, William L. Hamilton

Adversarial attacks on deep neural networks traditionally rely on a constrained optimization paradigm, where an optimization procedure is used to obtain a single adversarial perturbation for a given input example.

Compositional Fairness Constraints for Graph Embeddings

1 code implementation • 25 May 2019 • Avishek Joey Bose, William L. Hamilton

Learning high-quality node embeddings is a key building block for machine learning models that operate on graph data, such as social networks and recommender systems.

Fairness • Graph Embedding • +1

Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization

no code implementations • 31 May 2018 • Avishek Joey Bose, Parham Aarabi

Adversarial attacks involve adding small, often imperceptible perturbations to inputs, with the goal of getting a machine learning model to misclassify them.

Adversarial Attack • Image Classification • +2
