Search Results for author: Shakir Mohamed

Found 35 papers, 17 papers with code

Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence

no code implementations • 8 Jul 2020 • Shakir Mohamed, Marie-Therese Png, William Isaac

By embedding a decolonial critical approach within its technical practice, AI communities can develop foresight and tactics that can better align research and technology development with established ethical principles, centring vulnerable peoples who continue to bear the brunt of negative impacts of innovation and scientific progress.

A review of radar-based nowcasting of precipitation and applicable machine learning techniques

no code implementations • 11 May 2020 • Rachel Prudden, Samantha Adams, Dmitry Kangin, Niall Robinson, Suman Ravuri, Shakir Mohamed, Alberto Arribas

A 'nowcast' is a type of weather forecast which makes predictions in the very short term, typically less than two hours - a period in which traditional numerical weather prediction can be limited.

Levels of Analysis for Machine Learning

no code implementations • 6 Apr 2020 • Jessica Hamrick, Shakir Mohamed

As a remedy for this dilemma, we advocate for the adoption of a common conceptual framework which can be used to understand, analyze, and discuss research.

Normalizing Flows for Probabilistic Modeling and Inference

5 code implementations • 5 Dec 2019 • George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan

In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference.
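
The change-of-variables formula at the heart of normalizing flows can be sketched with a single affine transform; this is a minimal illustration of the idea, not code from the review (the parameter values are illustrative):

```python
import numpy as np

# Minimal sketch of a one-step normalizing flow: push base samples z ~ N(0, 1)
# through x = a*z + b and evaluate log p_x(x) via the change of variables
#   log p_x(x) = log p_z(f^{-1}(x)) - log|det df/dz|.
rng = np.random.default_rng(0)
a, b = 2.0, 1.0                      # flow parameters (illustrative)
z = rng.standard_normal(5)           # samples from the base density
x = a * z + b                        # samples pushed through the flow

def log_base_density(z):
    return -0.5 * (z**2 + np.log(2 * np.pi))

# For the affine map the Jacobian is the scalar a.
log_px = log_base_density((x - b) / a) - np.log(abs(a))

# Sanity check: x is exactly N(b, a^2), so compare with that log-density.
exact = -0.5 * (((x - b) / a) ** 2 + np.log(2 * np.pi)) - np.log(abs(a))
```

Richer flows compose many such invertible maps, accumulating the log-det-Jacobian terms in the same way.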

Monte Carlo Gradient Estimation in Machine Learning

2 code implementations • 25 Jun 2019 • Shakir Mohamed, Mihaela Rosca, Michael Figurnov, Andriy Mnih

This paper is a broad and accessible survey of the methods we have at our disposal for Monte Carlo gradient estimation in machine learning and across the statistical sciences: the problem of computing the gradient of an expectation of a function with respect to parameters defining the distribution that is integrated; the problem of sensitivity analysis.
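
One of the estimator families surveyed, the score-function (REINFORCE) estimator, can be sketched in a few lines; the Gaussian test problem below is an assumption for illustration, not an example from the paper:

```python
import numpy as np

# Score-function estimator of d/dmu E_{N(mu,1)}[f(x)]:
#   d/dmu E[f(x)] = E[ f(x) * d/dmu log N(x; mu, 1) ].
rng = np.random.default_rng(0)
mu = 1.5
f = lambda x: x**2                   # E[f] = mu^2 + 1, so the true gradient is 2*mu

x = mu + rng.standard_normal(200_000)
score = x - mu                       # d/dmu log N(x; mu, 1)
grad_est = np.mean(f(x) * score)     # should land near 2*mu = 3.0
```

The estimator needs only log-density gradients and function evaluations, but as the survey discusses, its variance is typically much higher than that of pathwise (reparameterization) estimators.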

Training language GANs from Scratch

5 code implementations • NeurIPS 2019 • Cyprien de Masson d'Autume, Mihaela Rosca, Jack Rae, Shakir Mohamed

Generative Adversarial Networks (GANs) enjoy great success at image generation, but have proven difficult to train in the domain of natural language.

Image Generation • Text Generation

Learning Implicit Generative Models with the Method of Learned Moments

1 code implementation • ICML 2018 • Suman Ravuri, Shakir Mohamed, Mihaela Rosca, Oriol Vinyals

We propose a method of moments (MoM) algorithm for training large-scale implicit generative models.

Implicit Reparameterization Gradients

1 code implementation • NeurIPS 2018 • Michael Figurnov, Shakir Mohamed, Andriy Mnih

By providing a simple and efficient way of computing low-variance gradients of continuous random variables, the reparameterization trick has become the technique of choice for training a variety of latent variable models.
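
The explicit reparameterization trick the abstract refers to can be sketched as follows (a minimal example of ours, not the paper's code): writing x ~ N(mu, 1) as x = mu + eps with eps ~ N(0, 1) lets the gradient pass through the sample.

```python
import numpy as np

# Pathwise (reparameterization) estimator of d/dmu E_{N(mu,1)}[f(x)]:
#   d/dmu E[f(x)] = E[ f'(mu + eps) ],  eps ~ N(0, 1).
rng = np.random.default_rng(0)
mu = 1.5
eps = rng.standard_normal(10_000)

x = mu + eps                  # reparameterized samples from N(mu, 1)
grad_est = np.mean(2 * x)     # f(x) = x^2, so f'(x) = 2x; true gradient is 2*mu
```

The paper's contribution, implicit reparameterization, extends this idea to distributions such as the Gamma or Dirichlet whose samplers cannot be written as a direct transform of a parameter-free noise variable.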

Distribution Matching in Variational Inference

no code implementations • 19 Feb 2018 • Mihaela Rosca, Balaji Lakshminarayanan, Shakir Mohamed

With the increasingly widespread deployment of generative models, there is a mounting need for a deeper understanding of their behaviors and limitations.

Variational Inference

Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step

1 code implementation • ICLR 2018 • William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, Ian Goodfellow

Unlike other generative models, the data distribution is learned via a game between a generator (the generative model) and a discriminator (a teacher providing training signal) that each minimize their own cost.

Variational Approaches for Auto-Encoding Generative Adversarial Networks

5 code implementations • 15 Jun 2017 • Mihaela Rosca, Balaji Lakshminarayanan, David Warde-Farley, Shakir Mohamed

In this paper, we develop a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model.

Variational Inference

The Cramer Distance as a Solution to Biased Wasserstein Gradients

1 code implementation • ICLR 2018 • Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, Rémi Munos

We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences.

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

6 code implementations • ICLR 2017 • Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.

Disentanglement

Recurrent Environment Simulators

no code implementations • 7 Apr 2017 • Silvia Chiappa, Sébastien Racaniere, Daan Wierstra, Shakir Mohamed

Models that can simulate how environments change in response to actions can be used by agents to plan and act efficiently.

Atari Games • Car Racing

Generative Temporal Models with Memory

no code implementations • 15 Feb 2017 • Mevlana Gemici, Chia-Chun Hung, Adam Santoro, Greg Wayne, Shakir Mohamed, Danilo J. Rezende, David Amos, Timothy Lillicrap

We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations.

Variational Inference

Normalizing Flows on Riemannian Manifolds

no code implementations • 7 Nov 2016 • Mevlana C. Gemici, Danilo Rezende, Shakir Mohamed

In spite of the multitude of algorithms available for density estimation in the Euclidean spaces $\mathbf{R}^n$ that scale to large $n$ (e.g. normalizing flows, kernel methods and variational approximations), most of these methods are not immediately suitable for density estimation in more general Riemannian manifolds.

Density Estimation • Protein Folding

Learning in Implicit Generative Models

no code implementations • 11 Oct 2016 • Shakir Mohamed, Balaji Lakshminarayanan

We frame GANs within the wider landscape of algorithms for learning in implicit generative models--models that only specify a stochastic procedure with which to generate data--and relate these ideas to modelling problems in related fields, such as econometrics and approximate Bayesian computation.

Density Ratio Estimation • Two-sample testing
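
The density-ratio view that connects GANs to these fields can be illustrated with a toy experiment (our construction, not the paper's code): a logistic classifier trained to tell samples of p from samples of q recovers log p(x)/q(x) in its logit.

```python
import numpy as np

# Density-ratio estimation by classification: for p = N(1, 1) and q = N(0, 1)
# with equal sample sizes, the Bayes-optimal logit is log p(x)/q(x) = x - 0.5,
# so a fitted logistic regression should land near (w, b) = (1.0, -0.5).
rng = np.random.default_rng(0)
xp = rng.normal(1.0, 1.0, 5_000)     # samples from p, labelled 1
xq = rng.normal(0.0, 1.0, 5_000)     # samples from q, labelled 0
x = np.concatenate([xp, xq])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])

w, b = 0.0, 0.0                      # 1-D logistic regression via gradient descent
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-(w * x + b)))
    grad = p_hat - y                 # gradient of the cross-entropy loss
    w -= 0.1 * np.mean(grad * x)
    b -= 0.1 * np.mean(grad)
```

This is the same mechanism by which a GAN discriminator implicitly estimates a density ratio between data and model samples.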

Early Visual Concept Learning with Unsupervised Deep Learning

1 code implementation • 17 Jun 2016 • Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed, Alexander Lerchner

Automated discovery of early visual concepts from raw image data is a major open challenge in AI research.

One-Shot Generalization in Deep Generative Models

no code implementations • 16 Mar 2016 • Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, Daan Wierstra

In particular, humans have an ability for one-shot generalization: an ability to encounter a new concept, understand its structure, and then be able to generate compelling alternative variations of the concept.

Density Estimation • Image Generation

Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning

2 code implementations • NeurIPS 2015 • Shakir Mohamed, Danilo Jimenez Rezende

The mutual information is a core statistical quantity that has applications in all areas of machine learning, whether this is in training of density models over multiple data modalities, in maximising the efficiency of noisy transmission channels, or when learning behaviour policies for exploration by artificial agents.

Reinforcement Learning • Variational Inference
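
For intuition, the mutual information that the paper bounds variationally can be computed exactly for a small discrete joint distribution; the probability table below is an illustrative assumption:

```python
import numpy as np

# Exact mutual information I(X; Y) = sum_xy p(x,y) log[ p(x,y) / (p(x) p(y)) ]
# for a toy 2x2 joint distribution (in nats; 0 would mean independence).
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])         # joint p(x, y)
px = pxy.sum(axis=1, keepdims=True)  # marginal p(x)
py = pxy.sum(axis=0, keepdims=True)  # marginal p(y)

mi = np.sum(pxy * np.log(pxy / (px * py)))
```

In the continuous, high-dimensional settings the paper targets this sum is intractable, which is what motivates the variational lower bound used for intrinsic rewards.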

Variational Inference with Normalizing Flows

14 code implementations • 21 May 2015 • Danilo Jimenez Rezende, Shakir Mohamed

The choice of approximate posterior distribution is one of the core problems in variational inference.

Variational Inference
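
This paper introduces the planar flow f(z) = z + u h(wᵀz + b) with h = tanh, whose log-det-Jacobian has the closed form log|1 + uᵀψ(z)| via the matrix determinant lemma; a minimal NumPy sketch with illustrative parameter values:

```python
import numpy as np

# One planar-flow step and its log-det-Jacobian. Invertibility requires
# u.w >= -1; here u.w = 0.35. Parameter values are illustrative.
rng = np.random.default_rng(0)
u = np.array([0.5, -0.3])
w = np.array([1.0, 0.5])
b = 0.1
z = rng.standard_normal(2)           # a sample from the base posterior

a = np.tanh(w @ z + b)
f_z = z + u * a                      # transformed sample
psi = (1.0 - a**2) * w               # psi(z) = h'(w.z + b) * w
log_det = np.log(np.abs(1.0 + u @ psi))
```

Stacking many such steps gives the richer approximate posteriors the paper proposes, with the log-det terms summed into the variational objective.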

Semi-Supervised Learning with Deep Generative Models

14 code implementations • NeurIPS 2014 • Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, Max Welling

The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis.

Bayesian Inference

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

5 code implementations • 16 Jan 2014 • Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.

Bayesian Inference

Expectation Propagation in Gaussian Process Dynamical Systems

no code implementations • NeurIPS 2012 • Marc Deisenroth, Shakir Mohamed

Rich and complex time-series data, such as those generated from engineering systems, financial markets, videos or neural recordings are now a common feature of modern data analysis.

Time Series

Expectation Propagation in Gaussian Process Dynamical Systems: Extended Version

no code implementations • NeurIPS 2012 • Marc Peter Deisenroth, Shakir Mohamed

Rich and complex time-series data, such as those generated from engineering systems, financial markets, videos or neural recordings, are now a common feature of modern data analysis.

Time Series

Bayesian Exponential Family PCA

no code implementations • NeurIPS 2008 • Shakir Mohamed, Zoubin Ghahramani, Katherine A. Heller

Principal Components Analysis (PCA) has become established as one of the key tools for dimensionality reduction when dealing with real valued data.

Bayesian Inference • Dimensionality Reduction
