In this work, we propose a novel Gradient-based Intervention Targeting method, abbreviated GIT, that 'trusts' the gradient estimator of a gradient-based causal discovery framework to provide signals for the intervention acquisition function.
This makes the GP posterior locally non-Gaussian; we therefore name our method Non-Gaussian Gaussian Processes (NGGPs).
One of the main arguments behind studying disentangled representations is the assumption that they can be easily reused in different tasks.
We introduce a flexible setup that allows a neural network to learn both its size and topology during standard gradient-based training.
In order to perform plausible interpolations in the latent space of a generative model, we need a measure that credibly reflects whether a point along an interpolation lies close to the data manifold being modelled, i.e. whether it is convincing.
Global pooling, such as max- or sum-pooling, is one of the key ingredients in deep neural networks used for processing images, texts, graphs and other types of structured data.
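As a minimal sketch (not tied to any particular architecture from these works), global pooling can be illustrated as an operation that maps a variable-size collection of feature vectors to a single fixed-size vector, which is what makes it applicable to sets, graphs, and texts of varying length:

```python
import numpy as np

def global_max_pool(features):
    # features: (n_items, d) array; n_items may vary between inputs.
    # Returns a fixed-size (d,) vector: the element-wise maximum.
    return features.max(axis=0)

def global_sum_pool(features):
    # Element-wise sum over the item axis, also yielding a (d,) vector.
    return features.sum(axis=0)

# Two inputs with different numbers of elements produce same-size outputs.
a = np.array([[1.0, 2.0], [3.0, 0.0], [0.0, 5.0]])  # 3 items
b = np.array([[4.0, 1.0]])                           # 1 item
print(global_max_pool(a))  # [3. 5.]
print(global_sum_pool(b))  # [4. 1.]
```

Both operations are permutation-invariant, i.e. reordering the items does not change the pooled result, which is the property that makes them suitable for unordered structured data.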
We construct a general unified framework for learning representations of structured data, i.e. data which cannot be represented as fixed-length vectors (e.g. sets, graphs, texts, or images of varying sizes).