no code implementations • 18 Oct 2024 • Ameya Daigavane, Bodhi P. Vani, Saeed Saremi, Joseph Kleinhenz, Joshua Rackers
Conformational ensembles of protein structures are immensely important both for understanding protein function and for drug discovery in novel modalities such as cryptic pockets.
1 code implementation • 3 Jul 2024 • Ewa M. Nowara, Pedro O. Pinheiro, Sai Pooja Mahajan, Omar Mahmood, Andrew Martin Watkins, Saeed Saremi, Michael Maser
We present NEBULA, the first latent 3D generative model for scalable generation of large molecular libraries around a seed compound of interest.
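A generic way to read "generation around a seed" is latent-space perturbation. The sketch below is only that reading: `encode`/`decode` are hypothetical stand-ins for NEBULA's learned latent 3D model, and the Gaussian perturbation with scale `tau` is an assumption for illustration.

```python
import numpy as np

def library_around_seed(encode, decode, seed_mol, n=1000, tau=0.1, seed=0):
    """Sketch: sample a library by perturbing the seed compound's latent code
    and decoding. encode/decode are placeholders, not the paper's API."""
    rng = np.random.default_rng(seed)
    z = encode(seed_mol)
    return [decode(z + tau * rng.normal(size=z.shape)) for _ in range(n)]
```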
1 code implementation • 7 May 2024 • Pedro O. Pinheiro, Arian Jamasb, Omar Mahmood, Vishnu Sresht, Saeed Saremi
We present VoxBind, a new score-based generative model for 3D molecules conditioned on protein structures.
1 code implementation • NeurIPS 2023 • Pedro O. Pinheiro, Joshua Rackers, Joseph Kleinhenz, Michael Maser, Omar Mahmood, Andrew Martin Watkins, Stephen Ra, Vishnu Sresht, Saeed Saremi
We propose a new score-based approach to generate 3D molecules represented as atomic densities on regular grids.
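The "atomic densities on regular grids" representation can be sketched directly. The grid size and Gaussian width below are illustrative, and the real model uses per-element channels rather than a single one:

```python
import numpy as np

def voxelize(coords, grid_size=32, extent=8.0, width=0.5):
    """Render atoms as isotropic Gaussian densities on a regular 3D grid."""
    axis = np.linspace(-extent / 2, extent / 2, grid_size)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.zeros((grid_size,) * 3)
    for x, y, z in coords:
        d2 = (gx - x) ** 2 + (gy - y) ** 2 + (gz - z) ** 2
        grid += np.exp(-d2 / (2 * width ** 2))
    return grid

# Three atoms of a toy molecule -> one density channel
density = voxelize(np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.2, 0.0]]))
print(density.shape)  # (32, 32, 32)
```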
1 code implementation • 8 Jun 2023 • Nathan C. Frey, Daniel Berenberg, Karina Zadorozhny, Joseph Kleinhenz, Julien Lafrance-Vanasse, Isidro Hotzel, Yan Wu, Stephen Ra, Richard Bonneau, Kyunghyun Cho, Andreas Loukas, Vladimir Gligorijevic, Saeed Saremi
We resolve difficulties in training and sampling from a discrete generative model by learning a smoothed energy function, sampling from the smoothed data manifold with Langevin Markov chain Monte Carlo (MCMC), and projecting back to the true data manifold with one-step denoising.
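The walk-jump recipe in that sentence can be shown end to end on a toy problem. Below, the "discrete data" are two atoms in 1D, the smoothed density is their equal-weight Gaussian mixture (so its score is analytic rather than learned), and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.25                   # smoothing noise scale
atoms = np.array([0.0, 1.0])   # toy stand-in for the discrete data manifold

def score(y):
    """∇_y log p_sigma(y) for the equal-weight mixture (1/2) Σ_k N(y; atom_k, σ²)."""
    w = np.exp(-(y - atoms) ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    return (w * (atoms - y)).sum() / sigma ** 2

# Walk: Langevin MCMC on the smoothed density.
y, eps = rng.normal(), 1e-2
for _ in range(2000):
    y += eps * score(y) + np.sqrt(2 * eps) * rng.normal()

# Jump: project back with one-step denoising (empirical Bayes estimator).
x_hat = y + sigma ** 2 * score(y)
print(y, x_hat)  # x_hat lands near one of the discrete atoms
```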
no code implementations • 31 May 2023 • Saeed Saremi, Ji Won Park, Francis Bach
We introduce a theoretical framework for sampling from unnormalized densities based on a smoothing scheme that uses an isotropic Gaussian kernel with a single fixed noise scale.
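In symbols, a sketch of the scheme as the sentence describes it, with $U$ the unnormalized negative log-density and $\hat{x}$ the standard least-squares denoiser that accompanies Gaussian smoothing:

```latex
p_\sigma(y) \;\propto\; \int_{\mathbb{R}^d} e^{-U(x)}\,
  N(y;\, x,\, \sigma^2 I_d)\, \mathrm{d}x,
\qquad
\hat{x}(y) \;=\; y + \sigma^2 \nabla_y \log p_\sigma(y).
```

One samples $Y$ from the smoothed density $p_\sigma$, then estimates $X$ from $Y$; the single fixed noise scale $\sigma$ is the only smoothing parameter.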
no code implementations • 21 Mar 2023 • Saeed Saremi, Rupesh Kumar Srivastava, Francis Bach
We consider the problem of generative modeling based on smoothing an unknown density of interest in $\mathbb{R}^d$ using factorial kernels with $M$ independent Gaussian channels with equal noise levels, a construction introduced by Saremi and Srivastava (2022).
no code implementations • 8 Oct 2022 • Ji Won Park, Samuel Stanton, Saeed Saremi, Andrew Watkins, Henri Dwyer, Vladimir Gligorijevic, Richard Bonneau, Stephen Ra, Kyunghyun Cho
Bayesian optimization offers a sample-efficient framework for navigating the exploration-exploitation trade-off in the vast design space of biological sequences.
1 code implementation • ICLR 2022 • Saeed Saremi, Rupesh Kumar Srivastava
We formally map the problem of sampling from an unknown distribution with a density in $\mathbb{R}^d$ to the problem of learning and sampling from a smoother density in $\mathbb{R}^{Md}$ obtained by convolution with a fixed factorial kernel: the new density is referred to as the M-density and the kernel as the multimeasurement noise model (MNM).
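The M-density itself has a one-line definition; in the notation of the sentence above (a sketch, with $p$ the unknown density):

```latex
p_M(y_1, \ldots, y_M) \;=\; \int_{\mathbb{R}^d} p(x)
  \prod_{m=1}^{M} N(y_m;\, x,\, \sigma^2 I_d)\, \mathrm{d}x,
\qquad (y_1, \ldots, y_M) \in \mathbb{R}^{Md}.
```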
no code implementations • 28 Jan 2021 • Timothy Atkinson, Saeed Saremi, Faustino Gomez, Jonathan Masci
With the goal of designing novel inhibitors for SARS-CoV-1 and SARS-CoV-2, we propose Molecular Neural Assay Search (MONAS), a general molecule-optimization framework consisting of three components: a property predictor that identifies molecules with specific desirable properties, an energy model that approximates the statistical similarity of a given molecule to known training molecules, and a molecule search method.
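A toy rendering of how the three components could fit together; `predict_property`, `energy`, and `mutate` are placeholders, and the greedy loop is an assumption rather than the paper's search method:

```python
def monas_search(seed, predict_property, energy, mutate, steps=100, lam=0.1):
    """Greedy sketch: maximize the predicted property while staying
    statistically close to training molecules (low energy).
    All three callables are placeholders for MONAS's components."""
    best = seed
    best_score = predict_property(best) - lam * energy(best)
    for _ in range(steps):
        cand = mutate(best)
        s = predict_property(cand) - lam * energy(cand)
        if s > best_score:
            best, best_score = cand, s
    return best
```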
no code implementations • 29 Jul 2020 • Saeed Saremi
This framework, named unnormalized variational Bayes (UVB), is based on formulating a latent variable model for the random variable $Y = X + N(0, \sigma^2 I_d)$ and using the evidence lower bound (ELBO), computed by a variational autoencoder, as a parametrization of the energy function of $Y$, which is then used to estimate $X$ with the empirical Bayes least-squares estimator.
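Compressed to formulas (a sketch consistent with the sentence; $f$ is the learned energy of $Y$):

```latex
Y = X + N(0, \sigma^2 I_d), \qquad
f(y) \approx -\mathrm{ELBO}(y) \approx -\log p(y), \qquad
\hat{x}(y) = y - \sigma^2 \nabla_y f(y).
```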
no code implementations • 18 May 2020 • Saeed Saremi
This is the concept of the imaginary noise model, where the noise model dictates the functional form of the variational lower bound $\mathcal{L}(\sigma)$, but the noisy data are never seen during learning.
no code implementations • 9 May 2020 • Saeed Saremi, Rupesh Srivastava
We test the theory on MNIST and we show that with a learned smoothed energy function and a linear classifier we can achieve provable $\ell_2$ robust accuracies that are competitive with empirical defenses.
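The test-time pipeline that sentence implies is short: denoise with the score of the learned smoothed density, then classify. The function below sketches that pipeline only; the certification argument that makes the accuracy provable lives in the paper, and `score` is assumed to be the learned $\nabla_y \log p_\sigma$.

```python
import numpy as np

def denoise_then_classify(y, score, W, b, sigma):
    """Map a (possibly adversarially perturbed) input to its Bayes estimate
    of X, then apply a linear classifier with weights W and bias b."""
    x_hat = y + sigma ** 2 * score(y)   # empirical Bayes least-squares estimate
    return np.argmax(W @ x_hat + b)
```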
no code implementations • 9 Dec 2019 • Giorgio Giannone, Saeed Saremi, Jonathan Masci, Christian Osendorfer
To explicitly demonstrate the effect of these higher order objects, we show that the inferred latent transformations reflect interpretable properties in the observation space.
no code implementations • 28 Oct 2019 • Saeed Saremi
Consider a feedforward neural network $\psi: \mathbb{R}^d\rightarrow \mathbb{R}^d$ such that $\psi\approx \nabla f$, where $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is a smooth function, therefore $\psi$ must satisfy $\partial_j \psi_i = \partial_i \psi_j$ pointwise.
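That symmetry ($\partial_j \psi_i = \partial_i \psi_j$, i.e., a symmetric Jacobian) is easy to check numerically; a finite-difference sketch on a hand-built gradient field:

```python
import numpy as np

def jacobian_fd(psi, x, h=1e-5):
    """Central finite-difference Jacobian of psi: R^d -> R^d."""
    d = x.size
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        J[:, j] = (psi(x + e) - psi(x - e)) / (2 * h)
    return J

# psi = ∇f for f(x) = sin(x_0) * x_1, so its Jacobian must be symmetric
psi = lambda x: np.array([np.cos(x[0]) * x[1], np.sin(x[0])])
J = jacobian_fd(psi, np.array([0.3, -1.2]))
print(np.allclose(J, J.T, atol=1e-6))  # True: ∂_j ψ_i == ∂_i ψ_j
```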
no code implementations • 6 Mar 2019 • Saeed Saremi, Aapo Hyvärinen
Kernel density is viewed symbolically as $X\rightharpoonup Y$ where the random variable $X$ is smoothed to $Y= X+N(0,\sigma^2 I_d)$, and empirical Bayes is the machinery to denoise in a least-squares sense, which we express as $X \leftharpoondown Y$.
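The least-squares denoiser referenced here is the classical Miyasawa/Tweedie identity:

```latex
X \rightharpoonup Y = X + N(0, \sigma^2 I_d), \qquad
\hat{x}(y) := \mathbb{E}[X \mid Y = y] = y + \sigma^2 \nabla_y \log p(y),
```

so denoising ($X \leftharpoondown Y$) reduces to knowing the score of the smoothed density.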
1 code implementation • 21 May 2018 • Saeed Saremi, Arash Mehrjou, Bernhard Schölkopf, Aapo Hyvärinen
We present the utility of DEEN in learning the energy, the score function, and in single-step denoising experiments for synthetic and high-dimensional data.
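The single-step denoising experiments follow from DEEN's objective, which ties the energy to the denoiser (a sketch; $E_\theta$ is the learned energy):

```latex
\mathcal{L}(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}},\; y = x + N(0, \sigma^2 I_d)}
  \left\| x - y + \sigma^2 \nabla_y E_\theta(y) \right\|^2,
```

so that after training, $\hat{x}(y) = y - \sigma^2 \nabla_y E_\theta(y)$ denoises in one step and $-\nabla_y E_\theta$ approximates the score.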
no code implementations • 21 May 2017 • Arash Mehrjou, Bernhard Schölkopf, Saeed Saremi
We introduce a novel framework for adversarial training where the target distribution is annealed between the uniform distribution and the data distribution.
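One concrete (assumed) reading of that annealing: the "real" batch shown to the discriminator is a mixture that moves from uniform noise to data as training proceeds. The linear schedule and uniform bounds below are illustrative choices, not the paper's:

```python
import numpy as np

def annealed_target_batch(data_batch, t, rng):
    """Mix an (N, d) data batch with uniform noise so the target distribution
    anneals from pure noise (t=0) to pure data (t=1)."""
    keep = rng.random(len(data_batch)) < t
    noise = rng.uniform(data_batch.min(), data_batch.max(), data_batch.shape)
    return np.where(keep[:, None], data_batch, noise)
```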
no code implementations • 27 Oct 2015 • Saeed Saremi, Terrence J. Sejnowski
We turn this representation into a directed probabilistic graphical model, transforming the learning problem into the unsupervised learning of the distribution of the critical bitplane and the supervised learning of the conditional distributions for the remaining bitplanes.
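The bitplane decomposition underlying that model is mechanical; a minimal sketch for 8-bit images:

```python
import numpy as np

def bitplanes(img_u8):
    """Decompose an 8-bit image into its 8 binary bitplanes (MSB first)."""
    return np.stack([(img_u8 >> k) & 1 for k in range(7, -1, -1)])

img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16
planes = bitplanes(img)
print(planes.shape)  # (8, 4, 4); planes[0] is the most significant bitplane
```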