
no code implementations • 15 Nov 2022 • Sékou-Oumar Kaba, Siamak Ravanbakhsh

Supervised learning with deep models has tremendous potential for applications in materials science.

no code implementations • 11 Nov 2022 • Sékou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, Siamak Ravanbakhsh

Symmetry-based neural networks often constrain the architecture in order to achieve invariance or equivariance to a group of transformations.

no code implementations • 27 Jun 2022 • Mehran Shakerinava, Siamak Ravanbakhsh

A yet stronger constraint simplifies the utility function for goal-seeking agents in the form of a difference in some function of states that we call potential functions.
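A standard instance of a utility expressed as a difference of a state potential is potential-based shaping, where the utility of a transition is Φ(s′) − Φ(s). The sketch below illustrates only that general form; the specific potential function chosen here is illustrative, not taken from the paper.

```python
# A minimal sketch: utility of a transition as a difference of a state
# potential, Phi(s') - Phi(s). The concrete Phi below (negative distance
# to a goal at position 10) is an illustrative assumption.

def phi(state):
    # potential: negative distance to a goal located at position 10
    return -abs(10 - state)

def reward(s, s_next):
    # utility of moving from s to s_next, as a difference of potentials
    return phi(s_next) - phi(s)

# moving toward the goal yields positive utility
assert reward(3, 4) > 0
# utilities along any path telescope to phi(end) - phi(start)
total = sum(reward(s, s + 1) for s in range(0, 10))
assert total == phi(10) - phi(0)
```

One consequence of this form is that the total utility of a trajectory depends only on its endpoints, since intermediate potentials cancel.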

1 code implementation • 25 Mar 2022 • Christopher Morris, Gaurav Rattan, Sandra Kiefer, Siamak Ravanbakhsh

While (message-passing) graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive, higher-order graph neural networks do not scale to large graphs.

no code implementations • 19 Feb 2022 • Mehran Shakerinava, Arnab Kumar Mondal, Siamak Ravanbakhsh

We present a simple non-generative approach to deep representation learning that seeks equivariant deep embedding through simple objectives.

no code implementations • 29 Sep 2021 • Daniel Levy, Siamak Ravanbakhsh

Many real-world datasets include multiple distinct types of entities and relations, and so they are naturally best represented by heterogeneous graphs.

no code implementations • 29 Sep 2021 • Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi, Siamak Ravanbakhsh

We study different notions of equivariance as an inductive bias in Reinforcement Learning (RL) and propose new mechanisms for recovering representations that are equivariant to both an agent’s action, and symmetry transformations of the state-action pairs.

no code implementations • 12 Jun 2021 • Mehran Shakerinava, Siamak Ravanbakhsh

We show how to model this interplay using ideas from group theory, identify the equivariant linear maps, and introduce equivariant padding that respects these symmetries.

no code implementations • NeurIPS 2020 • Renhao Wang, Marjan Albooyeh, Siamak Ravanbakhsh

While it is possible to apply deep learning to a range of primitive data structures using invariant and equivariant maps, a formalism for dealing with hierarchy is lacking.

no code implementations • ICML 2020 • Siamak Ravanbakhsh

Group invariant and equivariant Multilayer Perceptrons (MLP), also known as Equivariant Networks, have achieved remarkable success in learning on a variety of data structures, such as sequences, images, sets, and graphs.

no code implementations • ICML 2020 • Marjan Albooyeh, Daniele Bertolini, Siamak Ravanbakhsh

Sparse incidence tensors can represent a variety of structured data.

1 code implementation • 21 Mar 2019 • Devon Graham, Junhao Wang, Siamak Ravanbakhsh

In this paper, we propose the Equivariant Entity-Relationship Network (EERN), which is a Multilayer Perceptron equivariant to the symmetry transformations of the Entity-Relationship model.

no code implementations • 7 Dec 2018 • Bahare Fatemi, Siamak Ravanbakhsh, David Poole

Knowledge graphs are used to represent relational information in terms of triples.

1 code implementation • 15 Nov 2018 • Siyu He, Yin Li, Yu Feng, Shirley Ho, Siamak Ravanbakhsh, Wei Chen, Barnabás Póczos

We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory.

1 code implementation • 28 Jun 2018 • Sumedha Singla, Mingming Gong, Siamak Ravanbakhsh, Frank Sciurba, Barnabas Poczos, Kayhan N. Batmanghelich

Our model consists of three mutually dependent modules which regulate each other: (1) a discriminative network that learns a fixed-length representation from local features and maps them to disease severity; (2) an attention mechanism that provides interpretability by focusing on the areas of the anatomy that contribute the most to the prediction task; and (3) a generative network that encourages the diversity of the local latent features.

1 code implementation • ICML 2018 • Jason Hartford, Devon R Graham, Kevin Leyton-Brown, Siamak Ravanbakhsh

In experiments, our models achieved surprisingly good generalization performance on this matrix extrapolation task, both within domains (e.g., new users and new movies drawn from the same distribution used for training) and even across domains (e.g., predicting music ratings after training on movies).

Ranked #3 on Recommendation Systems on the YahooMusic Monti benchmark.

no code implementations • NeurIPS 2017 • Christopher Srinivasa, Inmar Givoni, Siamak Ravanbakhsh, Brendan J. Frey

We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs.

no code implementations • 6 Nov 2017 • Siamak Ravanbakhsh, Junier Oliva, Sebastien Fromenteau, Layne C. Price, Shirley Ho, Jeff Schneider, Barnabas Poczos

A major approach to estimating the cosmological parameters is to use the large-scale matter distribution of the Universe.

2 code implementations • NeurIPS 2017 • Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola

Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong.
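The characterization referenced above says that any permutation-invariant function of a set can be written as ρ(Σᵢ φ(xᵢ)): embed each element, sum-pool, then transform the pooled summary. A minimal numpy sketch, where the particular φ and ρ are illustrative choices, not from the paper:

```python
import numpy as np

def phi(x):
    # per-element embedding (illustrative choice: first two power sums)
    return np.stack([x, x ** 2], axis=-1)

def rho(s):
    # map the pooled summary to an output (illustrative choice)
    return s.sum()

def deep_set(x):
    # f(X) = rho(sum_i phi(x_i)); sum pooling makes f permutation invariant
    return rho(phi(x).sum(axis=0))

x = np.array([1.0, 2.0, 3.0])
shuffled = np.array([3.0, 1.0, 2.0])
assert np.isclose(deep_set(x), deep_set(shuffled))  # order does not matter
```

The sum pooling is what discards element order; any permutation of the input produces the same pooled vector and hence the same output.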

1 code implementation • 8 Mar 2017 • Francois Lanusse, Quanbin Ma, Nan Li, Thomas E. Collett, Chun-Liang Li, Siamak Ravanbakhsh, Rachel Mandelbaum, Barnabas Poczos

We find on our simulated data set that for a rejection rate of non-lenses of 99%, a completeness of 90% can be achieved for lenses with Einstein radii larger than 1.4" and S/N larger than 20 on individual $g$-band LSST exposures.

Subject areas: Instrumentation and Methods for Astrophysics · Cosmology and Nongalactic Astrophysics · Astrophysics of Galaxies

1 code implementation • ICML 2017 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos

We propose to study equivariance in deep neural networks through parameter symmetries.

no code implementations • 14 Nov 2016 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos

We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set.
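The idea of a parameter-shared, linear-time equivariant layer can be sketched as follows: each set element is combined with a pooled summary shared across the set, so permuting the input permutes the output the same way. The scalar weights and mean pooling below are an illustrative simplification, not the paper's exact parameterization.

```python
import numpy as np

def perm_equivariant_layer(x, lam=1.0, gam=-0.5):
    # x: (n, d) array holding a set of n elements.
    # Each output row mixes that element's own features (lam * x_i) with
    # a pooled summary shared by all elements (gam * mean over the set).
    # Both terms cost O(n * d): linear time in the size of the set.
    return lam * x + gam * x.mean(axis=0, keepdims=True)

x = np.arange(12.0).reshape(4, 3)
out = perm_equivariant_layer(x)
# permuting the input rows permutes the output rows identically
assert np.allclose(perm_equivariant_layer(x[::-1]), out[::-1])
```

Because the pooled term is identical for every element, the layer commutes with any permutation of the rows, which is exactly the equivariance property described above.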

no code implementations • 11 Nov 2016 • Chun-Liang Li, Siamak Ravanbakhsh, Barnabas Poczos

Due to the numerical stability and quantifiability of the likelihood, the RBM is commonly used with Bernoulli units.

no code implementations • 19 Sep 2016 • Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, Barnabas Poczos

To this end, we study the application of deep conditional generative models in generating realistic galaxy images.

no code implementations • 1 Jan 2016 • Siamak Ravanbakhsh, Barnabas Poczos, Jeff Schneider, Dale Schuurmans, Russell Greiner

We propose a Laplace approximation that creates a stochastic unit from any smooth monotonic activation function, using only Gaussian noise.
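One simple reading of the idea, sampling a smooth monotonic activation with injected Gaussian noise, can be sketched as below. How the noise enters here is an illustrative assumption, not the paper's exact Laplace-approximation construction.

```python
import numpy as np

def stochastic_unit(pre_activation, activation=np.tanh, sigma=0.1, rng=None):
    # Sketch: turn a smooth monotonic activation into a stochastic unit
    # by adding Gaussian noise before applying the activation. (The
    # placement of the noise is an illustrative choice, not the paper's
    # exact construction.)
    rng = np.random.default_rng() if rng is None else rng
    noise = sigma * rng.standard_normal(np.shape(pre_activation))
    return activation(pre_activation + noise)

rng = np.random.default_rng(0)
samples = [stochastic_unit(0.5, rng=rng) for _ in range(1000)]
# with small noise, the samples concentrate around tanh(0.5)
assert abs(float(np.mean(samples)) - np.tanh(0.5)) < 0.05
```

With sigma → 0 the unit reduces to the deterministic activation, while larger sigma spreads the output distribution along the activation curve.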

no code implementations • NeurIPS 2015 • Farzaneh Mirzazadeh, Siamak Ravanbakhsh, Nan Ding, Dale Schuurmans

A key bottleneck in structured output prediction is the need for inference during training and testing, usually requiring some form of dynamic programming.

no code implementations • 28 Sep 2015 • Siamak Ravanbakhsh, Barnabas Poczos, Russell Greiner

Boolean matrix factorization and Boolean matrix completion from noisy observations are desirable unsupervised data-analysis methods due to their interpretability, but they are hard to perform due to their NP-hardness.
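The object being factored here is the Boolean matrix product, in which multiplication is AND and addition is OR. A small numpy sketch of that product (the example matrices are illustrative):

```python
import numpy as np

def boolean_matmul(X, Y):
    # Boolean matrix product: Z[i, j] = OR_k (X[i, k] AND Y[k, j]).
    # In Boolean matrix factorization, a binary matrix Z is approximated
    # by such a product of two low-rank binary factors X and Y.
    return (X.astype(int) @ Y.astype(int)) > 0

X = np.array([[1, 0],
              [0, 1],
              [1, 1]], dtype=bool)
Y = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=bool)
Z = boolean_matmul(X, Y)  # a 3x3 binary matrix
```

Unlike real-valued matrix product, overlapping factors do not add up (1 OR 1 is still 1), which is one source of both the interpretability and the combinatorial hardness mentioned above.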

no code implementations • 20 Aug 2015 • Siamak Ravanbakhsh

We contribute to three classes of approximations that improve BP for loopy graphs: A) loop correction techniques; B) survey propagation, another message passing technique that surpasses BP in some settings; and C) hybrid methods that interpolate between deterministic message passing and Markov Chain Monte Carlo inference.

no code implementations • 25 Sep 2014 • Siamak Ravanbakhsh, Russell Greiner

This paper studies the form and complexity of inference in graphical models using the abstraction offered by algebraic structures.

no code implementations • 4 Sep 2014 • Siamak Ravanbakhsh, Philip Liu, Trent Bjorndahl, Rupasri Mandal, Jason R. Grant, Michael Wilson, Roman Eisner, Igor Sinelnikov, Xiaoyu Hu, Claudio Luchinat, Russell Greiner, David S. Wishart

This information can be extracted from a biofluid's NMR spectrum.

no code implementations • NeurIPS 2014 • Siamak Ravanbakhsh, Reihaneh Rabbany, Russell Greiner

The cutting plane method is an augmentative constrained optimization procedure that is often used with continuous-domain optimization techniques such as linear and convex programs.

no code implementations • 6 May 2014 • Siamak Ravanbakhsh, Russell Greiner, Brendan Frey

During learning, to produce a sample from the current model, we start from a training example and descend in the energy landscape of the "perturbed model" for a fixed number of steps, or until a local optimum is reached.

no code implementations • 26 Jan 2014 • Siamak Ravanbakhsh, Russell Greiner

We introduce an efficient message passing scheme for solving Constraint Satisfaction Problems (CSPs), which uses stochastic perturbation of Belief Propagation (BP) and Survey Propagation (SP) messages to bypass decimation and directly produce a single satisfying assignment.
