Search Results for author: Michael Figurnov

Found 9 papers, 7 papers with code

Monte Carlo Gradient Estimation in Machine Learning

2 code implementations · 25 Jun 2019 · Shakir Mohamed, Mihaela Rosca, Michael Figurnov, Andriy Mnih

This paper is a broad and accessible survey of the methods available for Monte Carlo gradient estimation in machine learning and across the statistical sciences: computing the gradient of an expectation of a function with respect to the parameters of the distribution being integrated, also known as the problem of sensitivity analysis.
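The two classic estimators covered by such surveys, the score-function (REINFORCE) estimator and the pathwise (reparameterization) estimator, can be sketched for a Gaussian example. This is a minimal illustration, not code from the paper; the test function `f` and all names are chosen here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)


def f(x):
    """Test function whose expectation E_{x ~ N(mu, 1)}[f(x)] we differentiate."""
    return x ** 2


def score_function_grad(mu, n_samples=100_000):
    """Score-function (REINFORCE) estimator of d/dmu E[f(x)], x ~ N(mu, 1).

    Uses grad_mu log N(x; mu, 1) = (x - mu); needs no gradient of f itself.
    """
    x = rng.normal(mu, 1.0, size=n_samples)
    return np.mean(f(x) * (x - mu))


def pathwise_grad(mu, n_samples=100_000):
    """Pathwise (reparameterization) estimator: write x = mu + eps with
    eps ~ N(0, 1), so d/dmu E[f(x)] = E[f'(mu + eps)], with f'(x) = 2x here.
    """
    eps = rng.normal(0.0, 1.0, size=n_samples)
    return np.mean(2.0 * (mu + eps))


mu = 1.5
# Analytic answer: E[x^2] = mu^2 + 1, so the true gradient is 2 * mu = 3.0.
print(score_function_grad(mu), pathwise_grad(mu))
```

Both estimators are unbiased, but the pathwise estimate typically has much lower variance, which is the motivation for reparameterization-based training of latent variable models.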

Task: BIG-bench Machine Learning

Variational Autoencoder with Arbitrary Conditioning

3 code implementations · ICLR 2019 · Oleg Ivanov, Michael Figurnov, Dmitry Vetrov

We propose a single neural probabilistic model based on a variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot".
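One common way to let a single network condition on an arbitrary subset of features is to zero out the unobserved entries and append a binary observation mask to the input. This is a generic masking sketch, not the exact VAEAC architecture; the function name and layout are illustrative assumptions.

```python
import numpy as np


def make_conditioning_input(x, observed_mask):
    """Encode an arbitrary subset of observed features for a network:
    unobserved entries are zeroed out and the binary mask is appended,
    so the same model can condition on any subset of features.
    (A common masking scheme; the actual VAEAC details differ.)
    """
    return np.concatenate([x * observed_mask, observed_mask], axis=-1)


x = np.array([0.5, 1.2, 3.0, 0.7])
mask = np.array([1.0, 0.0, 1.0, 0.0])  # features 0 and 2 are observed
print(make_conditioning_input(x, mask))  # masked values followed by the mask
```

The mask makes "feature is unobserved" distinguishable from "feature equals zero", which is what allows one model to handle every conditioning pattern.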

Tasks: Diversity, Image Inpainting, +1 more

Implicit Reparameterization Gradients

1 code implementation · NeurIPS 2018 · Michael Figurnov, Shakir Mohamed, Andriy Mnih

By providing a simple and efficient way of computing low-variance gradients of continuous random variables, the reparameterization trick has become the technique of choice for training a variety of latent variable models.
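The implicit variant extends this trick to distributions without an easy sampling path, by differentiating through the CDF: dz/dmu = -(dF/dmu) / (dF/dz). A minimal check on the Gaussian, where the answer is known, assuming a finite-difference derivative in place of the automatic differentiation a real implementation would use:

```python
import math


def norm_cdf(z, mu):
    """CDF of N(mu, 1)."""
    return 0.5 * (1.0 + math.erf((z - mu) / math.sqrt(2.0)))


def norm_pdf(z, mu):
    """Density of N(mu, 1), i.e. dF/dz."""
    return math.exp(-0.5 * (z - mu) ** 2) / math.sqrt(2.0 * math.pi)


def implicit_grad(z, mu, h=1e-5):
    """Implicit reparameterization gradient dz/dmu = -(dF/dmu) / (dF/dz),
    where F is the CDF. dF/dmu is approximated by central differences here;
    an actual implementation would use automatic differentiation.
    """
    dF_dmu = (norm_cdf(z, mu + h) - norm_cdf(z, mu - h)) / (2.0 * h)
    return -dF_dmu / norm_pdf(z, mu)


# For N(mu, 1), the explicit path z = mu + eps gives dz/dmu = 1 exactly,
# and the implicit formula recovers the same value without inverting the CDF.
print(implicit_grad(z=0.3, mu=1.5))
```

Avoiding CDF inversion is the point: for distributions like the Gamma or Dirichlet, the CDF is differentiable even though its inverse has no closed form.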

Tensor Train decomposition on TensorFlow (T3F)

2 code implementations · 5 Jan 2018 · Alexander Novikov, Pavel Izmailov, Valentin Khrulkov, Michael Figurnov, Ivan Oseledets

Tensor Train decomposition is used across many branches of machine learning.
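The decomposition itself can be sketched with the classical TT-SVD algorithm, which splits a d-way tensor into a chain of 3-way cores via sequential truncated SVDs. This is a plain NumPy sketch of the textbook algorithm, not the t3f library, which provides an optimized TensorFlow implementation.

```python
import numpy as np


def tt_decompose(tensor, rank):
    """TT-SVD: decompose a d-way tensor into Tensor Train cores
    G_k of shape (r_{k-1}, n_k, r_k) via sequential truncated SVDs.
    """
    shape = tensor.shape
    cores = []
    r_prev = 1
    mat = np.asarray(tensor, dtype=float)
    for n_k in shape[:-1]:
        mat = mat.reshape(r_prev * n_k, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(rank, len(s))                       # truncate to the TT-rank
        cores.append(u[:, :r].reshape(r_prev, n_k, r))
        mat = s[:r, None] * vt[:r]                  # carry the rest forward
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores


def tt_reconstruct(cores):
    """Contract the cores back into a full tensor."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.reshape([c.shape[1] for c in cores])
```

For a rank-1 tensor (an outer product of vectors), TT-SVD with rank 1 is exact, which makes a convenient correctness check.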

Mathematical Software Numerical Analysis

Probabilistic Adaptive Computation Time

no code implementations · 1 Dec 2017 · Michael Figurnov, Artem Sobolev, Dmitry Vetrov

We present a probabilistic model with discrete latent variables that control the computation time in deep learning models such as ResNets and LSTMs.

Spatially Adaptive Computation Time for Residual Networks

1 code implementation · CVPR 2017 · Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, Ruslan Salakhutdinov

This paper proposes a deep learning architecture based on residual networks that dynamically adjusts the number of executed layers for different regions of the image.
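The per-region stopping decision builds on the adaptive computation time halting rule (Graves, 2016): accumulate halting scores across layers, stop once the running sum crosses 1 minus a small slack, and give the final layer the leftover probability mass as its weight. A scalar sketch of that rule, assuming an `eps` slack parameter; in the paper the rule is applied per spatial position:

```python
def adaptive_halting(halt_probs, eps=0.01):
    """Adaptive computation time halting rule: stop at the first step where
    the running sum of halting scores reaches 1 - eps, and weight the last
    executed step by the leftover probability mass (the "remainder").
    Returns the number of executed steps and the per-step weights.
    """
    weights = []
    cumsum = 0.0
    for p in halt_probs:
        if cumsum + p >= 1.0 - eps:
            weights.append(1.0 - cumsum)  # remainder goes to the final step
            break
        weights.append(p)
        cumsum += p
    return len(weights), weights


n_steps, w = adaptive_halting([0.1, 0.3, 0.5, 0.4])
print(n_steps, w)  # weights sum to 1; the last step gets the remainder
```

Because "easy" regions emit large halting scores early, they execute fewer residual blocks, which is where the computational savings come from.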

Tasks: Classification, Computational Efficiency, +7 more

Robust Variational Inference

no code implementations · 28 Nov 2016 · Michael Figurnov, Kirill Struminsky, Dmitry Vetrov

Variational inference is a powerful tool for approximate inference.

Task: Variational Inference

PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions

2 code implementations · NeurIPS 2016 · Michael Figurnov, Aijan Ibraimova, Dmitry Vetrov, Pushmeet Kohli

We propose a novel approach to reduce the computational cost of evaluation of convolutional neural networks, a factor that has hindered their deployment in low-power devices such as mobile phones.
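The core idea, evaluating the convolution only at a subset of spatial positions and filling the skipped ("perforated") positions by interpolation from evaluated neighbors, can be illustrated in post-hoc form. This sketch fills skipped positions from an already-computed dense output for clarity; in the actual method the skipped positions are never computed, which is where the speedup comes from, and the names here are illustrative.

```python
import numpy as np


def perforate(dense_output, keep_mask):
    """Keep convolution outputs only where keep_mask is True and fill each
    skipped position with its nearest evaluated neighbor (L1 distance,
    brute force for clarity).
    """
    h, w = dense_output.shape
    kept = np.argwhere(keep_mask)  # positions that would actually be evaluated
    out = np.empty_like(dense_output)
    for i in range(h):
        for j in range(w):
            k = kept[np.argmin(np.abs(kept - [i, j]).sum(axis=1))]
            out[i, j] = dense_output[k[0], k[1]]
    return out
```

Evaluated positions keep their exact values, so the approximation error is confined to the perforated positions, and the fraction of skipped positions directly controls the cost reduction.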
