1 code implementation • 30 Oct 2024 • Tassilo Wald, Constantin Ulrich, Gregor Köhler, David Zimmerer, Stefan Denner, Michael Baumgartner, Fabian Isensee, Priyank Jaini, Klaus H. Maier-Hein
In this paper, we propose to solve this through semantic RSMs (representational similarity matrices), which are invariant to spatial permutation.
1 code implementation • 15 Aug 2024 • Robert Geirhos, Priyank Jaini, Austin Stone, Sourabh Medapati, Xi Yi, George Toderici, Abhijit Ogale, Jonathon Shlens
Training a neural network is a monolithic endeavor, akin to carving knowledge into stone: once the process is completed, editing the knowledge in a network is nearly impossible, since all information is distributed across the network's weights.
1 code implementation • 28 Sep 2023 • Priyank Jaini, Kevin Clark, Robert Geirhos
What is the best paradigm to recognize objects -- discriminative inference (fast but potentially prone to shortcut learning) or using a generative model (slow but potentially more robust)?
Ranked #1 for Object Recognition on the shape-bias benchmark
no code implementations • 27 Mar 2023 • Kevin Clark, Priyank Jaini
The key idea is using a diffusion model's ability to denoise a noised image given a text description of a label as a proxy for that label's likelihood.
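The snippet below sketches this idea as a toy illustration, not the paper's implementation: a hypothetical `denoise(noised_image, label, t)` callable stands in for a text-conditioned diffusion model, and the label whose prompt yields the lowest denoising error is returned as the prediction.

```python
import random


def denoising_error(denoise, image, label, noise, t):
    """Squared error between the model's predicted noise (conditioned on
    a label's text prompt) and the true noise -- a proxy for likelihood."""
    noised = [x + n for x, n in zip(image, noise)]
    predicted = denoise(noised, label, t)
    return sum((p - n) ** 2 for p, n in zip(predicted, noise))


def diffusion_classify(denoise, image, labels, n_trials=8, t=0.5, seed=0):
    """Pick the label whose conditioning gives the lowest denoising error.

    `denoise` is a hypothetical stand-in for a text-conditioned diffusion
    model; sharing the same noise draw across labels reduces variance.
    """
    rng = random.Random(seed)
    errors = {label: 0.0 for label in labels}
    for _ in range(n_trials):
        noise = [rng.gauss(0.0, 1.0) for _ in image]
        for label in labels:
            errors[label] += denoising_error(denoise, image, label, noise, t)
    return min(errors, key=errors.get)
```

With a toy denoiser that simply subtracts a per-label mean, an image drawn near a label's mean is assigned that label.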
1 code implementation • NeurIPS 2023 • Lars Holdijk, Yuanqi Du, Ferry Hooft, Priyank Jaini, Bernd Ensing, Max Welling
We consider the problem of sampling transition paths between two given metastable states of a molecular system, e.g. a folded and unfolded protein, or products and reactants of a chemical reaction.
1 code implementation • 26 Nov 2021 • Kirill Neklyudov, Priyank Jaini, Max Welling
We accomplish this by viewing the evolution of the modeling distribution as (i) the evolution of the energy function, and (ii) the evolution of the samples from this distribution along some vector field.
no code implementations • NeurIPS 2021 • Priyank Jaini, Lars Holdijk, Max Welling
We focus on the problem of efficient sampling and learning of probability densities by incorporating symmetries in probabilistic models.
2 code implementations • NeurIPS 2021 • Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling
Argmax Flows are defined by a composition of a continuous distribution (such as a normalizing flow) and an argmax function.
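The sampling direction of this composition can be sketched in a few lines, assuming a `sample_continuous(num_classes, rng)` callable that plays the role of the underlying continuous model (here just a placeholder, not the paper's flow):

```python
import random


def sample_categorical_via_argmax(sample_continuous, num_classes, rng):
    """Draw a continuous vector from the model, then take its argmax
    as the categorical sample -- the composition defining Argmax Flows."""
    z = sample_continuous(num_classes, rng)
    return max(range(num_classes), key=lambda k: z[k])


# Placeholder continuous model: an isotropic Gaussian draw.
gaussian = lambda n, rng: [rng.gauss(0.0, 1.0) for _ in range(n)]
```

Training requires the reverse direction (lifting a category back to the continuous space), which is where the paper's probabilistic inference machinery comes in.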
1 code implementation • 4 Feb 2021 • Priyank Jaini, Didrik Nielsen, Max Welling
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
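A minimal leapfrog HMC sampler for a scalar density, in plain Python; the step size and trajectory length below are illustrative choices, not tuned values from the paper.

```python
import math
import random


def hmc_sample(log_prob, grad_log_prob, x0, n_samples,
               step_size=0.1, n_leapfrog=20, seed=0):
    """Hybrid/Hamiltonian Monte Carlo on a scalar state.

    Each iteration resamples a Gaussian momentum, simulates Hamiltonian
    dynamics with a leapfrog integrator, and applies a Metropolis
    accept/reject step to correct for discretization error.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                          # resample momentum
        x_new, p_new = x, p
        p_new += 0.5 * step_size * grad_log_prob(x_new)  # half step
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new
            p_new += step_size * grad_log_prob(x_new)
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_prob(x_new)  # final half step
        # Metropolis correction on the Hamiltonian H = -log p(x) + p^2/2
        h_cur = -log_prob(x) + 0.5 * p * p
        h_prop = -log_prob(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random() + 1e-300) < h_cur - h_prop:
            x = x_new
        samples.append(x)
    return samples
```

For a standard normal target, the empirical mean and variance of the chain should settle near 0 and 1.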
no code implementations • AABI Symposium 2021 • Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling
This paper introduces a new method to define and train continuous distributions such as normalizing flows directly on categorical data, for example, text and image segmentation.
1 code implementation • 14 Nov 2020 • T. Anderson Keller, Jorn W. T. Peters, Priyank Jaini, Emiel Hoogeboom, Patrick Forré, Max Welling
Efficient gradient computation of the Jacobian determinant term is a core problem in many machine learning settings, and especially so in the normalizing flow framework.
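One reason this term is tractable in practice: for triangular transforms such as an affine coupling layer, the Jacobian is triangular and its log-determinant collapses to a sum of log scales, avoiding any O(D³) determinant. A minimal sketch, with the layer's hypothetical scale and shift networks `s_fn`/`t_fn` replaced by plain callables:

```python
import math


def affine_coupling_forward(x, s_fn, t_fn):
    """Affine coupling layer z = (x1, x2 * exp(s(x1)) + t(x1)).

    The Jacobian is triangular, so log|det J| is just sum(s(x1)) --
    computed in O(D) rather than via a dense determinant.
    """
    d = len(x) // 2
    x1, x2 = x[:d], x[d:]
    s, t = s_fn(x1), t_fn(x1)
    z2 = [x2i * math.exp(si) + ti for x2i, si, ti in zip(x2, s, t)]
    log_det = sum(s)
    return x1 + z2, log_det
```

With zero scales and unit shifts, the layer just translates the second half of the input and contributes zero log-determinant.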
3 code implementations • NeurIPS 2020 • Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, Max Welling
Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions.
no code implementations • 8 Mar 2020 • Allen Houze Wang, Priyank Jaini, Yao-Liang Yu, Pascal Poupart
Recently, the conditional SAGE certificate has been proposed as a sufficient condition for signomial positivity over a convex set.
no code implementations • ICML 2020 • Priyank Jaini, Ivan Kobyzev, Yao-Liang Yu, Marcus Brubaker
We investigate the ability of popular flow-based methods to capture tail properties of a target density by studying the increasing triangular maps these methods use, acting on a tractable source density.
2 code implementations • 7 May 2019 • Priyank Jaini, Kira A. Selby, Yao-Liang Yu
The triangular map is a recent construct in probability theory that allows one to transform any source probability density function into any target density function.
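In one dimension the increasing triangular map reduces to composing the source CDF with the target's inverse CDF, T = F_target⁻¹ ∘ F_source. A minimal sketch, using Uniform(0, 1) → Exponential(1) as the example pair:

```python
import math


def triangular_map_1d(source_cdf, target_inv_cdf, x):
    """1D increasing triangular map: push a source sample x through
    T = F_target^{-1} o F_source to obtain a target-distributed value."""
    return target_inv_cdf(source_cdf(x))


# Uniform(0,1) -> Exponential(1): F_source(x) = x, F_target^{-1}(u) = -log(1 - u)
uniform_cdf = lambda x: x
exp_inv_cdf = lambda u: -math.log(1.0 - u)
```

In higher dimensions the map conditions each coordinate on the previous ones (the Knothe–Rosenblatt construction), which is what gives it its triangular structure.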
no code implementations • NeurIPS 2018 • Priyank Jaini, Pascal Poupart, Yao-Liang Yu
At their core, many unsupervised learning models provide a compact representation of homogeneous density mixtures, but their similarities and differences are not always clearly understood.
no code implementations • 19 Sep 2016 • Priyank Jaini, Pascal Poupart
The Gaussian mixture model is a classic technique for clustering and data modeling that is used in numerous applications.
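The classic fitting procedure is expectation-maximization. Below is a self-contained EM sketch for a two-component 1D Gaussian mixture, as a minimal illustration rather than the paper's method:

```python
import math


def gmm_em_1d(data, n_iter=50):
    """EM for a two-component 1D Gaussian mixture (minimal sketch).

    Means are initialized at the data extremes; the variance floor
    guards against degenerate components.
    """
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(w)
            resp.append([wi / s for wi in w])
        # M-step: re-estimate mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return pi, mu, var
```

On well-separated clusters the estimated means converge to the cluster centers within a few iterations.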