Search Results for author: Siamak Ravanbakhsh

Found 43 papers, 12 papers with code

Iterated Denoising Energy Matching for Sampling from Boltzmann Densities

1 code implementation • 9 Feb 2024 • Tara Akhound-Sadegh, Jarrid Rector-Brooks, Avishek Joey Bose, Sarthak Mittal, Pablo Lemos, Cheng-Hao Liu, Marcin Sendera, Siamak Ravanbakhsh, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Alexander Tong

Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science.

Denoising Efficient Exploration
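
The task is subtle enough to merit a tiny illustration: we can evaluate an unnormalized density but cannot sample it directly. Below is a minimal sketch using self-normalized importance sampling on a double-well Boltzmann density; this is a naive baseline for the problem, not the paper's iDEM method, and the energy function and proposal are invented for the demo.

    import numpy as np

    # Toy version of the task: sample p(x) ∝ exp(-E(x)) given only E.
    # Self-normalized importance sampling from a broad Gaussian proposal
    # (a naive baseline, NOT the paper's method).
    rng = np.random.default_rng(0)

    def energy(x):                  # double-well energy, invented for the demo
        return (x**2 - 1.0)**2 / 0.2

    x = rng.normal(scale=3.0, size=50_000)       # proposal q = N(0, 3^2)
    log_w = -energy(x) + 0.5 * (x / 3.0)**2      # log p~(x) - log q(x), up to a constant
    w = np.exp(log_w - log_w.max())
    idx = rng.choice(len(x), size=1_000, p=w / w.sum())
    samples = x[idx]                             # approximate Boltzmann samples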

E(3)-Equivariant Mesh Neural Networks

1 code implementation • 7 Feb 2024 • Thuan Trang, Nhat Khang Ngo, Daniel Levy, Thieu N. Vo, Siamak Ravanbakhsh, Truong Son Hy

Triangular meshes are widely used to represent three-dimensional objects.

Symmetry Breaking and Equivariant Neural Networks

no code implementations • 14 Dec 2023 • Sékou-Oumar Kaba, Siamak Ravanbakhsh

Using symmetry as an inductive bias in deep learning has been proven to be a principled approach for sample-efficient model design.

Combinatorial Optimization Graph Representation Learning +1

Lie Point Symmetry and Physics Informed Networks

no code implementations • 7 Nov 2023 • Tara Akhound-Sadegh, Laurence Perreault-Levasseur, Johannes Brandstetter, Max Welling, Siamak Ravanbakhsh

Symmetries have been leveraged to improve the generalization of neural networks through different mechanisms from data augmentation to equivariant architectures.

Data Augmentation Inductive Bias

Learning to Reach Goals via Diffusion

no code implementations • 4 Oct 2023 • Vineet Jain, Siamak Ravanbakhsh

We present a novel perspective on goal-conditioned reinforcement learning by framing it within the context of denoising diffusion models.

Computational Efficiency Decision Making +2

Using Multiple Vector Channels Improves E(n)-Equivariant Graph Neural Networks

no code implementations • 6 Sep 2023 • Daniel Levy, Sékou-Oumar Kaba, Carmelo Gonzales, Santiago Miret, Siamak Ravanbakhsh

We present a natural extension to E(n)-equivariant graph neural networks that uses multiple equivariant vectors per node.
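
As a rough picture of what multiple equivariant vectors per node buy, here is a minimal numpy sketch: linear mixing across the channel axis and gating by invariant scalars both commute with rotations, so the per-node vector stack stays equivariant. The shapes, gating, and names are illustrative assumptions, not the paper's layer.

    import numpy as np

    # Hypothetical multi-channel equivariant update: V holds C vectors in
    # R^3 per node. Mixing channels with a learned matrix and gating them
    # with functions of invariant features both commute with rotations of
    # the last axis, so equivariance is preserved.
    rng = np.random.default_rng(0)

    def multi_channel_update(V, h, W_mix, W_gate):
        mixed = np.einsum("ncx,cd->ndx", V, W_mix)        # mix channels
        gates = np.tanh(h @ W_gate)                       # invariant gates, (n, C)
        V_out = mixed * gates[:, :, None]
        gram = np.einsum("ncx,ndx->ncd", V_out, V_out)    # rotation-invariant readout
        return V_out, gram.reshape(len(V), -1)

    n, C, d = 5, 4, 8
    V_out, feats = multi_channel_update(rng.normal(size=(n, C, 3)),
                                        rng.normal(size=(n, d)),
                                        rng.normal(size=(C, C)),
                                        rng.normal(size=(d, C)))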

Efficient Dynamics Modeling in Interactive Environments with Koopman Theory

no code implementations • 20 Jun 2023 • Arnab Kumar Mondal, Siba Smarak Panigrahi, Sai Rajeswar, Kaleem Siddiqi, Siamak Ravanbakhsh

We approach this problem through the lens of Koopman theory, where the nonlinear dynamics of the environment can be linearized in a high-dimensional latent space.

Reinforcement Learning (RL)
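
The linearization claim is easy to demo in miniature. Below is a DMD-style least-squares fit of a linear operator on a fixed lifted state; the dictionary and toy dynamics are assumptions for illustration (the paper learns the lifting for interactive environments rather than fixing one).

    import numpy as np

    # Koopman-style toy: lift states with a fixed dictionary, then fit the
    # best LINEAR operator K advancing the lifted state one step.
    rng = np.random.default_rng(0)

    def lift(x):                                   # hypothetical observables
        return np.concatenate([x, x**2, np.sin(x)], axis=-1)

    X = rng.uniform(-1, 1, size=(500, 2))          # states s_t
    X_next = 0.9 * X + 0.1 * np.sin(3 * X)         # a nonlinear step s_{t+1}

    Z, Z_next = lift(X), lift(X_next)
    K, *_ = np.linalg.lstsq(Z, Z_next, rcond=None) # Z @ K ≈ Z_next
    pred = lift(X[:1]) @ K                         # one linear step in latent space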

On Diffusion Modeling for Anomaly Detection

1 code implementation • 29 May 2023 • Victor Livernoche, Vineet Jain, Yashar Hezaveh, Siamak Ravanbakhsh

By simplifying DDPM in application to anomaly detection, we are naturally led to an alternative approach called Diffusion Time Estimation (DTE).

Denoising Semi-supervised Anomaly Detection +1
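
A hedged toy of the diffusion-time intuition: under a variance-exploding forward process, points far from the data manifold resemble samples that have been diffused for longer, so an estimated diffusion time can rank anomalies. The nearest-neighbor proxy below is an illustration, not the paper's estimator.

    import numpy as np

    # Score = the diffusion time whose noise scale best matches how far a
    # point sits from the training data (a crude proxy for estimating t).
    rng = np.random.default_rng(0)
    train = rng.normal(size=(1000, 2))                   # nominal data
    test = np.array([[0.1, 0.0], [6.0, 6.0]])            # an inlier and an outlier

    dist = np.linalg.norm(test[:, None] - train[None], axis=-1).min(axis=1)
    sigmas = np.geomspace(0.01, 10.0, 100)               # noise schedule sigma(t)
    t_hat = np.abs(sigmas[None] - dist[:, None]).argmin(axis=1)
    print(t_hat)                                         # outlier gets the larger time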

Equivariant Networks for Crystal Structures

no code implementations • 15 Nov 2022 • Sékou-Oumar Kaba, Siamak Ravanbakhsh

Supervised learning with deep models has tremendous potential for applications in materials science.

Property Prediction

Equivariance with Learned Canonicalization Functions

no code implementations • 11 Nov 2022 • Sékou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, Siamak Ravanbakhsh

Symmetry-based neural networks often constrain the architecture in order to achieve invariance or equivariance to a group of transformations.

Image Classification Point Cloud Classification
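
The alternative the abstract contrasts with constrained architectures is simple to sketch: predict a group element from the input, undo it, then run any unconstrained network. Below, the angle of the point-cloud mean stands in for a learned canonicalization function; everything here is a toy assumption, not the paper's model.

    import numpy as np

    # Canonicalization sketch: map the input to a canonical pose with an
    # equivariant function, then apply an arbitrary network f. The
    # composite is rotation-invariant even though f is unconstrained.
    rng = np.random.default_rng(0)

    def rot(a):
        return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

    def canon_angle(x):                 # stand-in for a learned canonicalizer
        m = x.mean(axis=0)
        return np.arctan2(m[1], m[0])   # rotating x shifts this angle equally

    def f(x):                           # arbitrary non-equivariant function
        return np.tanh(x @ np.array([1.0, 2.0])).sum()

    def invariant_f(x):
        return f(x @ rot(canon_angle(x)))   # undo the pose, then apply f

    x = rng.normal(size=(10, 2)) + 1.0      # offset keeps the mean well-defined
    print(np.isclose(invariant_f(x), invariant_f(x @ rot(0.7).T)))  # True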

Utility Theory for Sequential Decision Making

no code implementations • 27 Jun 2022 • Mehran Shakerinava, Siamak Ravanbakhsh

A yet stronger constraint reduces the utility function for goal-seeking agents to a difference of some function of states, which we call a potential function.

Decision Making
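
One plausible reading of the potential-function claim, written out (my gloss, consistent with potential-based shaping but not necessarily the paper's exact axioms): transition utilities that are differences of a state potential make the utility of any trajectory telescope to its endpoints.

    % Hedged gloss: utility as a difference of a potential \Phi over states.
    U(s \to s') = \Phi(s') - \Phi(s)
    % Along a trajectory the sum telescopes, so only the endpoints matter:
    \sum_{t=0}^{T-1} \bigl( \Phi(s_{t+1}) - \Phi(s_t) \bigr) = \Phi(s_T) - \Phi(s_0)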

SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks

1 code implementation • 25 Mar 2022 • Christopher Morris, Gaurav Rattan, Sandra Kiefer, Siamak Ravanbakhsh

While (message-passing) graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive, higher-order graph neural networks do not scale to large graphs.

Transformation Coding: Simple Objectives for Equivariant Representations

no code implementations • 19 Feb 2022 • Mehran Shakerinava, Arnab Kumar Mondal, Siamak Ravanbakhsh

We present a simple non-generative approach to deep representation learning that seeks equivariant deep embedding through simple objectives.

Disentanglement reinforcement-learning +1

EqR: Equivariant Representations for Data-Efficient Reinforcement Learning

no code implementations • 29 Sep 2021 • Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi, Siamak Ravanbakhsh

We study different notions of equivariance as an inductive bias in Reinforcement Learning (RL) and propose new mechanisms for recovering representations that are equivariant to both an agent's actions and symmetry transformations of the state-action pairs.

Atari Games Inductive Bias +2

Equivariant Heterogeneous Graph Networks

no code implementations • 29 Sep 2021 • Daniel Levy, Siamak Ravanbakhsh

Many real-world datasets include multiple distinct types of entities and relations, and so they are naturally best represented by heterogeneous graphs.

Link Prediction Node Classification

Equivariant Networks for Pixelized Spheres

no code implementations • 12 Jun 2021 • Mehran Shakerinava, Siamak Ravanbakhsh

We show how to model this interplay using ideas from group theory, identify the equivariant linear maps, and introduce equivariant padding that respects these symmetries.

Semantic Segmentation

Equivariant Networks for Hierarchical Structures

no code implementations • NeurIPS 2020 • Renhao Wang, Marjan Albooyeh, Siamak Ravanbakhsh

While invariant and equivariant maps make it possible to apply deep learning to a range of primitive data structures, a formalism for dealing with hierarchy is lacking.

Semantic Segmentation Translation

Equivariant Maps for Hierarchical Structures

no code implementations • 5 Jun 2020 • Renhao Wang, Marjan Albooyeh, Siamak Ravanbakhsh

While invariant and equivariant maps make it possible to apply deep learning to a range of primitive data structures, a formalism for dealing with hierarchy is lacking.

3D Semantic Segmentation Translation

Universal Equivariant Multilayer Perceptrons

no code implementations • ICML 2020 • Siamak Ravanbakhsh

Group invariant and equivariant Multilayer Perceptrons (MLP), also known as Equivariant Networks, have achieved remarkable success in learning on a variety of data structures, such as sequences, images, sets, and graphs.

Equivariant Entity-Relationship Networks

1 code implementation • 21 Mar 2019 • Devon Graham, Junhao Wang, Siamak Ravanbakhsh

In this paper, we propose the Equivariant Entity-Relationship Network (EERN), which is a Multilayer Perceptron equivariant to the symmetry transformations of the Entity-Relationship model.

Knowledge Graphs

Learning to Predict the Cosmological Structure Formation

1 code implementation • 15 Nov 2018 • Siyu He, Yin Li, Yu Feng, Shirley Ho, Siamak Ravanbakhsh, Wei Chen, Barnabás Póczos

We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory.

Subject2Vec: Generative-Discriminative Approach from a Set of Image Patches to a Vector

1 code implementation • 28 Jun 2018 • Sumedha Singla, Mingming Gong, Siamak Ravanbakhsh, Frank Sciurba, Barnabas Poczos, Kayhan N. Batmanghelich

Our model consists of three mutually dependent modules which regulate each other: (1) a discriminative network that learns a fixed-length representation from local features and maps them to disease severity; (2) an attention mechanism that provides interpretability by focusing on the areas of the anatomy that contribute the most to the prediction task; and (3) a generative network that encourages the diversity of the local latent features.

Anatomy

Deep Models of Interactions Across Sets

1 code implementation • ICML 2018 • Jason Hartford, Devon R Graham, Kevin Leyton-Brown, Siamak Ravanbakhsh

In experiments, our models achieved surprisingly good generalization performance on this matrix extrapolation task, both within domains (e.g., new users and new movies drawn from the same distribution used for training) and even across domains (e.g., predicting music ratings after training on movies).

Collaborative Filtering Matrix Completion +2
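
The layer behind this kind of matrix extrapolation is worth sketching: weights are shared across all entries, and each output mixes the entry with its row, column, and global pooled summaries, so the layer commutes with independent row and column permutations. A minimal version (mean pooling and tanh are my assumptions):

    import numpy as np

    # Exchangeable matrix layer sketch: permuting rows/columns of X
    # permutes the output identically because all positions share w.
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)

    def exchangeable_layer(X):
        return np.tanh(w[0] * X
                       + w[1] * X.mean(axis=1, keepdims=True)  # row summary
                       + w[2] * X.mean(axis=0, keepdims=True)  # column summary
                       + w[3] * X.mean())                      # global summary

    X = rng.normal(size=(5, 7))
    r, c = rng.permutation(5), rng.permutation(7)
    Y = exchangeable_layer(X)
    assert np.allclose(Y[r][:, c], exchangeable_layer(X[r][:, c]))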

Min-Max Propagation

no code implementations • NeurIPS 2017 • Christopher Srinivasa, Inmar Givoni, Siamak Ravanbakhsh, Brendan J. Frey

We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs.

Estimating Cosmological Parameters from the Dark Matter Distribution

no code implementations • 6 Nov 2017 • Siamak Ravanbakhsh, Junier Oliva, Sebastien Fromenteau, Layne C. Price, Shirley Ho, Jeff Schneider, Barnabas Poczos

A major approach to estimating the cosmological parameters is to use the large-scale matter distribution of the Universe.

Deep Sets

5 code implementations • NeurIPS 2017 • Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola

Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong.

Anomaly Detection Outlier Detection +1
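
The theorem's sum-decomposition is concrete enough to write down: a permutation-invariant function takes the form rho(sum_i phi(x_i)). A random-weight toy instance follows, with phi and rho standing in for learned MLPs.

    import numpy as np

    # Deep Sets form: rho applied to a summed per-element embedding phi.
    # Sum pooling makes the output independent of element order.
    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(1, 16)), rng.normal(size=(16, 1))

    def phi(x):                       # per-element embedding
        return np.tanh(x[:, None] @ W1)

    def rho(z):                       # post-pooling map
        return np.tanh(z @ W2)

    def deep_set(x):
        return rho(phi(x).sum(axis=0))

    x = np.array([3.0, 1.0, 2.0])
    assert np.allclose(deep_set(x), deep_set(x[::-1]))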

CMU DeepLens: Deep Learning For Automatic Image-based Galaxy-Galaxy Strong Lens Finding

1 code implementation • 8 Mar 2017 • Francois Lanusse, Quanbin Ma, Nan Li, Thomas E. Collett, Chun-Liang Li, Siamak Ravanbakhsh, Rachel Mandelbaum, Barnabas Poczos

We find on our simulated data set that for a rejection rate of non-lenses of 99%, a completeness of 90% can be achieved for lenses with Einstein radii larger than 1.4" and S/N larger than 20 on individual $g$-band LSST exposures.

Instrumentation and Methods for Astrophysics Cosmology and Nongalactic Astrophysics Astrophysics of Galaxies

Equivariance Through Parameter-Sharing

1 code implementation • ICML 2017 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos

We propose to study equivariance in deep neural networks through parameter symmetries.

Deep Learning with Sets and Point Clouds

no code implementations • 14 Nov 2016 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos

We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set.

Clustering General Classification +1
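
One common parameter-sharing form of such a layer: every element gets the same per-element weight and the same weight on a pooled summary, which is what gives permutation equivariance and linear-time cost in the set size. A minimal sketch (mean pooling and tanh are my choices):

    import numpy as np

    # Permutation-equivariant set layer via parameter-sharing: permuting
    # the rows of X permutes the rows of the output.
    rng = np.random.default_rng(0)
    Lam, Gam = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))

    def equivariant_layer(X):              # X: (n, 4), one row per element
        pooled = X.mean(axis=0, keepdims=True)
        return np.tanh(X @ Lam + pooled @ Gam)

    X = rng.normal(size=(5, 4))
    perm = rng.permutation(5)
    assert np.allclose(equivariant_layer(X)[perm], equivariant_layer(X[perm]))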

Annealing Gaussian into ReLU: a New Sampling Strategy for Leaky-ReLU RBM

no code implementations • 11 Nov 2016 • Chun-Liang Li, Siamak Ravanbakhsh, Barnabas Poczos

Due to numerical stability and quantifiability of the likelihood, RBM is commonly used with Bernoulli units.

Enabling Dark Energy Science with Deep Generative Models of Galaxy Images

no code implementations • 19 Sep 2016 • Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, Barnabas Poczos

To this end, we study the application of deep conditional generative models in generating realistic galaxy images.

Stochastic Neural Networks with Monotonic Activation Functions

no code implementations • 1 Jan 2016 • Siamak Ravanbakhsh, Barnabas Poczos, Jeff Schneider, Dale Schuurmans, Russell Greiner

We propose a Laplace approximation that creates a stochastic unit from any smooth monotonic activation function, using only Gaussian noise.
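
A hedged miniature of the construction's effect, not its derivation: injecting Gaussian noise at the pre-activation of a smooth monotonic f yields a stochastic unit whose sample mean tracks the deterministic activation. The paper's Laplace approximation is the principled route to such units; this demo only shows the generic noisy-unit behavior.

    import numpy as np

    # Noisy monotonic unit: y = f(a + eps), eps ~ N(0, sigma^2).
    rng = np.random.default_rng(0)

    def stochastic_unit(a, f=np.tanh, sigma=0.3, n_samples=10_000):
        return f(a + rng.normal(scale=sigma, size=n_samples))

    draws = stochastic_unit(0.5)
    print(draws.mean(), np.tanh(0.5))   # the stochastic mean stays near f(a)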

Embedding Inference for Structured Multilabel Prediction

no code implementations • NeurIPS 2015 • Farzaneh Mirzazadeh, Siamak Ravanbakhsh, Nan Ding, Dale Schuurmans

A key bottleneck in structured output prediction is the need for inference during training and testing, usually requiring some form of dynamic programming.

Boolean Matrix Factorization and Noisy Completion via Message Passing

no code implementations • 28 Sep 2015 • Siamak Ravanbakhsh, Barnabas Poczos, Russell Greiner

Boolean matrix factorization and Boolean matrix completion from noisy observations are desirable unsupervised data-analysis methods due to their interpretability, but hard to perform due to their NP-hardness.

Collaborative Filtering Matrix Completion
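
For readers unfamiliar with the Boolean product being factorized: z_ij = OR_k (x_ik AND y_kj). A tiny instance and its reconstruction error follow; the paper tackles the hard inverse problem with message passing, while this only defines the forward model.

    import numpy as np

    # Boolean matrix product via a thresholded integer matmul.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(6, 3))
    Y = rng.integers(0, 2, size=(3, 8))
    Z = (X @ Y > 0).astype(int)            # z_ij = OR_k (x_ik AND y_kj)

    def recon_error(Z, X, Y):
        return int(np.sum(Z != ((X @ Y) > 0)))

    print(recon_error(Z, X, Y))            # 0 for the generating factors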

Message Passing and Combinatorial Optimization

no code implementations • 20 Aug 2015 • Siamak Ravanbakhsh

We contribute to three classes of approximations that improve BP for loopy graphs: A) loop correction techniques; B) survey propagation, another message passing technique that surpasses BP in some settings; and C) hybrid methods that interpolate between deterministic message passing and Markov Chain Monte Carlo inference.

Clustering Combinatorial Optimization +1

Revisiting Algebra and Complexity of Inference in Graphical Models

no code implementations • 25 Sep 2014 • Siamak Ravanbakhsh, Russell Greiner

This paper studies the form and complexity of inference in graphical models using the abstraction offered by algebraic structures.

Augmentative Message Passing for Traveling Salesman Problem and Graph Partitioning

no code implementations • NeurIPS 2014 • Siamak Ravanbakhsh, Reihaneh Rabbany, Russell Greiner

The cutting plane method is an augmentative constrained optimization procedure that is often used with continuous-domain optimization techniques such as linear and convex programs.

graph partitioning Traveling Salesman Problem

Training Restricted Boltzmann Machine by Perturbation

no code implementations • 6 May 2014 • Siamak Ravanbakhsh, Russell Greiner, Brendan Frey

During learning, to produce a sample from the current model, we start from a training example and descend in the energy landscape of the "perturbed model" for a fixed number of steps, or until a local optimum is reached.

Perturbed Message Passing for Constraint Satisfaction Problems

no code implementations • 26 Jan 2014 • Siamak Ravanbakhsh, Russell Greiner

We introduce an efficient message passing scheme for solving Constraint Satisfaction Problems (CSPs), which uses stochastic perturbation of Belief Propagation (BP) and Survey Propagation (SP) messages to bypass decimation and directly produce a single satisfying assignment.
