Search Results for author: Saeed Saremi

Found 19 papers, 6 papers with code

JAMUN: Transferable Molecular Conformational Ensemble Generation with Walk-Jump Sampling

no code implementations • 18 Oct 2024 • Ameya Daigavane, Bodhi P. Vani, Saeed Saremi, Joseph Kleinhenz, Joshua Rackers

Conformational ensembles of protein structures are immensely important both for understanding protein function and for drug discovery in novel modalities such as cryptic pockets.

Drug Discovery

Structure-based drug design by denoising voxel grids

1 code implementation • 7 May 2024 • Pedro O. Pinheiro, Arian Jamasb, Omar Mahmood, Vishnu Sresht, Saeed Saremi

We present VoxBind, a new score-based generative model for 3D molecules conditioned on protein structures.

Denoising

Protein Discovery with Discrete Walk-Jump Sampling

1 code implementation • 8 Jun 2023 • Nathan C. Frey, Daniel Berenberg, Karina Zadorozhny, Joseph Kleinhenz, Julien Lafrance-Vanasse, Isidro Hotzel, Yan Wu, Stephen Ra, Richard Bonneau, Kyunghyun Cho, Andreas Loukas, Vladimir Gligorijevic, Saeed Saremi

We resolve difficulties in training and sampling from a discrete generative model by learning a smoothed energy function, sampling from the smoothed data manifold with Langevin Markov chain Monte Carlo (MCMC), and projecting back to the true data manifold with one-step denoising.

Denoising
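The walk-jump loop described above is compact enough to sketch. A minimal JAX sketch, assuming a learned scalar smoothed energy `energy(y)` $\approx -\log p_\sigma(y)$; plain overdamped Langevin stands in for the paper's sampler, and all names are illustrative:

```python
import jax
import jax.numpy as jnp

def walk_jump_sample(energy, y0, sigma, key, n_steps=1000, step_size=1e-3):
    """Walk-jump sketch: Langevin MCMC on the smoothed density p_sigma(y)
    (the 'walk'), then one empirical-Bayes denoising step (the 'jump')."""
    score = jax.grad(lambda y: -energy(y))  # score of the smoothed density
    y = y0
    for _ in range(n_steps):  # the walk: explore the smoothed manifold
        key, sub = jax.random.split(key)
        noise = jax.random.normal(sub, y.shape)
        y = y + step_size * score(y) + jnp.sqrt(2.0 * step_size) * noise
    # the jump: x_hat(y) = y + sigma^2 * grad log p_sigma(y)
    return y + sigma**2 * score(y)
```

For the discrete setting of the paper, samples would additionally be mapped back to discrete sequences (e.g. by argmax over one-hot channels).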

Chain of Log-Concave Markov Chains

no code implementations • 31 May 2023 • Saeed Saremi, Ji Won Park, Francis Bach

We introduce a theoretical framework for sampling from unnormalized densities based on a smoothing scheme that uses an isotropic Gaussian kernel with a single fixed noise scale.
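Concretely, the smoothing scheme is Gaussian convolution at a single fixed scale: for a target density $p$ on $\mathbb{R}^d$,

$$p_\sigma(y) = \int_{\mathbb{R}^d} \mathcal{N}(y;\, x,\, \sigma^2 I_d)\, p(x)\, \mathrm{d}x,$$

with sampling carried out on the smoothed side before denoising back; the normalizer of $p$ is not required.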

Universal Smoothed Score Functions for Generative Modeling

no code implementations • 21 Mar 2023 • Saeed Saremi, Rupesh Kumar Srivastava, Francis Bach

We consider the problem of generative modeling based on smoothing an unknown density of interest in $\mathbb{R}^d$ using factorial kernels with $M$ independent Gaussian channels with equal noise levels, as introduced by Saremi and Srivastava (2022).

Multimeasurement Generative Models

1 code implementation • ICLR 2022 • Saeed Saremi, Rupesh Kumar Srivastava

We formally map the problem of sampling from an unknown distribution with a density in $\mathbb{R}^d$ to the problem of learning and sampling a smoother density in $\mathbb{R}^{Md}$ obtained by convolution with a fixed factorial kernel: the new density is referred to as M-density and the kernel as multimeasurement noise model (MNM).

Denoising
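Written out, following the setup the abstract describes, the M-density is the joint law of $M$ independent noisy measurements of the same $x$:

$$p(y_1, \ldots, y_M) = \int_{\mathbb{R}^d} p(x) \prod_{m=1}^{M} \mathcal{N}(y_m;\, x,\, \sigma^2 I_d)\, \mathrm{d}x, \qquad y_m \in \mathbb{R}^d.$$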

Automatic design of novel potential 3CL$^{\text{pro}}$ and PL$^{\text{pro}}$ inhibitors

no code implementations • 28 Jan 2021 • Timothy Atkinson, Saeed Saremi, Faustino Gomez, Jonathan Masci

With the goal of designing novel inhibitors for SARS-CoV-1 and SARS-CoV-2, we propose the general molecule optimization framework, Molecular Neural Assay Search (MONAS), consisting of three components: a property predictor which identifies molecules with specific desirable properties, an energy model which approximates the statistical similarity of a given molecule to known training molecules, and a molecule search method.

Unnormalized Variational Bayes

no code implementations • 29 Jul 2020 • Saeed Saremi

This framework, named unnormalized variational Bayes (UVB), is based on formulating a latent variable model for the random variable $Y=X+N(0,\sigma^2 I_d)$ and using the evidence lower bound (ELBO), computed by a variational autoencoder, as a parametrization of the energy function of $Y$ which is then used to estimate $X$ with the empirical Bayes least-squares estimator.

Decoder · Denoising
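Schematically, with $\theta$ the VAE parameters and up to the gap between the ELBO and $\log p(y)$, the pipeline in the abstract reads:

$$E_\theta(y) := -\mathrm{ELBO}_\theta(y) \approx -\log p(y), \qquad \widehat{x}(y) = y - \sigma^2 \nabla_y E_\theta(y).$$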

Learning and Inference in Imaginary Noise Models

no code implementations • 18 May 2020 • Saeed Saremi

This is the concept of imaginary noise model, where the noise model dictates the functional form of the variational lower bound $\mathcal{L}(\sigma)$, but the noisy data are never seen during learning.

Decoder · Variational Inference

Provable Robust Classification via Learned Smoothed Densities

no code implementations • 9 May 2020 • Saeed Saremi, Rupesh Srivastava

We test the theory on MNIST and show that, with a learned smoothed energy function and a linear classifier, we can achieve provable $\ell_2$ robust accuracies that are competitive with empirical defenses.

Classification · General Classification · +1

No Representation without Transformation

no code implementations • 9 Dec 2019 • Giorgio Giannone, Saeed Saremi, Jonathan Masci, Christian Osendorfer

To explicitly demonstrate the effect of these higher-order objects, we show that the inferred latent transformations reflect interpretable properties in the observation space.

On approximating $\nabla f$ with neural networks

no code implementations • 28 Oct 2019 • Saeed Saremi

Consider a feedforward neural network $\psi: \mathbb{R}^d\rightarrow \mathbb{R}^d$ such that $\psi\approx \nabla f$, where $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is a smooth function; then $\psi$ must satisfy $\partial_j \psi_i = \partial_i \psi_j$ pointwise.

Denoising
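One way to satisfy the integrability constraint $\partial_j \psi_i = \partial_i \psi_j$ exactly rather than approximately is to parametrize $\psi$ as the gradient of a scalar network, so that its Jacobian is a Hessian and therefore symmetric by construction. A small JAX sketch (the one-hidden-layer MLP is purely illustrative):

```python
import jax
import jax.numpy as jnp

def f(params, x):
    """Scalar potential f: R^d -> R (a tiny MLP for illustration)."""
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return (h @ w2 + b2)[0]

# psi = grad f has a symmetric Jacobian automatically,
# since d_j psi_i = d_j d_i f = d_i psi_j for smooth f.
psi = jax.grad(f, argnums=1)

# Sanity check: the Jacobian of psi at a random point is symmetric.
d = 3
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = (jax.random.normal(k1, (d, 8)), jnp.zeros(8),
          jax.random.normal(k2, (8, 1)), jnp.zeros(1))
x = jax.random.normal(k3, (d,))
J = jax.jacobian(psi, argnums=1)(params, x)
assert jnp.allclose(J, J.T, atol=1e-5)
```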

Neural Empirical Bayes

no code implementations • 6 Mar 2019 • Saeed Saremi, Aapo Hyvärinen

Kernel density is viewed symbolically as $X\rightharpoonup Y$ where the random variable $X$ is smoothed to $Y= X+N(0,\sigma^2 I_d)$, and empirical Bayes is the machinery to denoise in a least-squares sense, which we express as $X \leftharpoondown Y$.

Denoising · Density Estimation
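The least-squares machinery referred to is the classical empirical Bayes identity (Miyasawa/Tweedie): for $Y = X + N(0, \sigma^2 I_d)$,

$$\widehat{x}(y) := \mathbb{E}[X \mid Y = y] = y + \sigma^2 \nabla_y \log p(y),$$

so learning the smoothed density (or its energy function) is all that is needed to denoise.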

Deep Energy Estimator Networks

1 code implementation • 21 May 2018 • Saeed Saremi, Arash Mehrjou, Bernhard Schölkopf, Aapo Hyvärinen

We present the utility of DEEN in learning the energy, the score function, and in single-step denoising experiments for synthetic and high-dimensional data.

Denoising · Density Estimation
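A hedged sketch of a DEEN-style training objective in JAX, consistent with the single-step denoising use above: the energy's gradient at a noisy point should point back to the clean point, scaled by $\sigma^2$. Per-example for clarity (batch with `jax.vmap`); names are illustrative:

```python
import jax
import jax.numpy as jnp

def deen_loss(energy, params, x, key, sigma):
    """Denoising-score-matching loss through the energy's gradient:
    x_hat(y) = y - sigma^2 * dE/dy should recover the clean x."""
    y = x + sigma * jax.random.normal(key, x.shape)
    grad_E = jax.grad(energy, argnums=1)(params, y)  # dE/dy; energy is scalar
    return jnp.sum((x - y + sigma**2 * grad_E) ** 2)
```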

Annealed Generative Adversarial Networks

no code implementations • 21 May 2017 • Arash Mehrjou, Bernhard Schölkopf, Saeed Saremi

We introduce a novel framework for adversarial training where the target distribution is annealed between the uniform distribution and the data distribution.

The Wilson Machine for Image Modeling

no code implementations • 27 Oct 2015 • Saeed Saremi, Terrence J. Sejnowski

We turn this representation into a directed probabilistic graphical model, transforming the learning problem into the unsupervised learning of the distribution of the critical bitplane and the supervised learning of the conditional distributions for the remaining bitplanes.
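For concreteness, the bitplane decomposition this representation starts from can be sketched in a few lines (a hedged illustration, not the paper's exact preprocessing):

```python
import jax.numpy as jnp

def bitplanes(img):
    """Split an 8-bit grayscale image into its 8 binary bitplanes,
    ordered from the most significant bit (plane 7) to the least (plane 0)."""
    img = img.astype(jnp.uint8)
    return jnp.stack([(img >> b) & 1 for b in range(7, -1, -1)])
```

In the paper's language, one of these planes (the "critical" one) is modeled without supervision, and the remaining planes are modeled conditionally on it.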
