We attempt to address this challenge with a novel approach: Local Intrinsic Dimension estimation using approximate Likelihood (LIDL).
Many crucial problems in deep learning and statistics are caused by a variational gap, i.e., the difference between the evidence and the evidence lower bound (ELBO).
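For reference (written here in standard VAE notation as an illustration, not in the paper's own notation), for a latent-variable model with approximate posterior $q_\phi(z\mid x)$ the gap is exactly a Kullback-Leibler divergence:
\[
\log p_\theta(x) \;-\; \underbrace{\mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x,z) - \log q_\phi(z\mid x)\big]}_{\mathrm{ELBO}(x)} \;=\; \mathrm{KL}\big(q_\phi(z\mid x)\,\|\,p_\theta(z\mid x)\big) \;\ge\; 0,
\]
so the bound is tight precisely when the approximate posterior matches the true one.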
We introduce a new training paradigm that enforces interval constraints on the neural network parameter space to control forgetting.
Few-shot models aim at making predictions using a minimal number of labeled examples from a given task.
We propose an effective regularization strategy (CW-TaLaR) for solving continual learning problems.
This makes the GP posterior locally non-Gaussian; we therefore name our method Non-Gaussian Gaussian Processes (NGGPs).
In this work, we leverage efficient processing operations that can be run in parallel on modern Graphical Processing Units (GPUs), the predominant computing architecture used, e.g., in deep learning, to reduce the computational burden of computing matrix decompositions.
We propose FlowSVDD -- a flow-based one-class classifier for anomaly/outlier detection that realizes the well-known SVDD principle using deep learning tools.
In such a case, we have to ``understand'' the object's composition and the coloring scheme of each of its parts.
This way, we can sample a mesh quad on that sphere and project it back onto the object's manifold.
In this work, we reformulate the problem of point cloud completion into an object hallucination task.
Predicting future states or actions of a given system remains a fundamental, yet unsolved challenge of intelligence, especially in complex and non-deterministic scenarios such as modeling human behavior.
We investigate the problem of training neural networks from incomplete images without replacing missing values.
We propose OneFlow - a flow-based one-class classifier for anomaly (outlier) detection that finds a minimal volume bounding region.
We present a mechanism for detecting adversarial examples based on data representations taken from the hidden layers of the target network.
To that end, we devise a generative model that uses a hypernetwork to return the weights of a Continuous Normalizing Flow (CNF) target network.
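For reference, a CNF (in the generic FFJORD-style formulation, not anything specific to this paper) defines its latent trajectory by an ODE and tracks the exact log-density along it:
\[
\frac{dz(t)}{dt} = f_\theta\big(z(t), t\big), \qquad
\log p\big(z(t_1)\big) = \log p\big(z(t_0)\big) - \int_{t_0}^{t_1} \mathrm{tr}\!\left(\frac{\partial f_\theta}{\partial z(t)}\right) dt;
\]
here the hypernetwork outputs the parameters $\theta$ of the target flow instead of training them directly.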
In this paper we construct a fully convolutional GAN model, LocoGAN, whose latent space is given by noise-like images of possibly different resolutions.
The main idea of our HyperCloud method is to build a hypernetwork that returns the weights of a particular neural network (target network) trained to map points from a uniform distribution on the unit ball into a 3D shape.
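A minimal sketch of this hypernetwork-to-target-network pattern, assuming invented layer sizes and a plain MLP target network rather than the architecture actually used by HyperCloud:

    # Sketch: a hypernetwork emits the flat weight vector of a small target MLP
    # that maps 3D points (e.g. samples from the unit ball) to 3D points.
    # All sizes and names here are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    TARGET_SHAPES = [("w1", (64, 3)), ("b1", (64,)), ("w2", (3, 64)), ("b2", (3,))]
    N_TARGET_PARAMS = sum(torch.Size(s).numel() for _, s in TARGET_SHAPES)

    class HyperNetwork(nn.Module):
        def __init__(self, embedding_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(embedding_dim, 256), nn.ReLU(),
                nn.Linear(256, N_TARGET_PARAMS),
            )

        def forward(self, shape_embedding):
            # One flat parameter vector per shape embedding.
            return self.net(shape_embedding)

    def target_network(points, flat_params):
        """Apply the target MLP whose weights come from the hypernetwork."""
        params, offset = {}, 0
        for name, shape in TARGET_SHAPES:
            numel = torch.Size(shape).numel()
            params[name] = flat_params[offset:offset + numel].view(shape)
            offset += numel
        hidden = F.relu(F.linear(points, params["w1"], params["b1"]))
        return F.linear(hidden, params["w2"], params["b2"])

    # Usage: sample points roughly uniformly from the unit ball and push them
    # through the target network generated for one (random) shape embedding.
    hyper = HyperNetwork()
    flat = hyper(torch.randn(128))
    pts = torch.randn(1024, 3)
    pts = pts / pts.norm(dim=1, keepdim=True) * torch.rand(1024, 1) ** (1 / 3)
    surface = target_network(pts, flat)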
Graph Convolutional Networks (GCNs) have recently become the primary choice for learning from graph-structured data, superseding hash fingerprints in representing chemical compounds.
We present an efficient technique for training classification networks that are verifiably robust against norm-bounded adversarial attacks.
Independent Component Analysis (ICA) - one of the basic tools in data analysis - aims to find a coordinate system in which the components of the data are independent.
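As a small, generic illustration of that goal (a standard FastICA demo, not the method developed in the paper):

    # Sketch: recover two independent sources from their linear mixtures with FastICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 8, 2000)
    sources = np.column_stack([np.sin(2 * t), np.sign(np.cos(3 * t))])  # independent signals
    mixing = np.array([[1.0, 0.5], [0.7, 1.2]])
    mixed = sources @ mixing.T                       # observed data: linear mixtures

    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(mixed)             # coordinates in which components are ~independent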
Global pooling, such as max- or sum-pooling, is one of the key ingredients in deep neural networks used for processing images, texts, graphs and other types of structured data.
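For concreteness, a toy example of what such pooling does to a variable-size input (generic, not tied to any particular architecture above):

    # Sketch: max- and sum-pooling collapse a variable number of element features
    # (e.g. graph nodes or set elements) into one fixed-size vector.
    import numpy as np

    node_features = np.random.randn(7, 16)   # 7 elements, 16 features each
    max_pooled = node_features.max(axis=0)   # shape (16,), invariant to element order and count
    sum_pooled = node_features.sum(axis=0)   # shape (16,)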
We construct a general unified framework for learning representations of structured data, i.e., data which cannot be represented as fixed-length vectors (e.g., sets, graphs, texts, or images of varying sizes).
The crucial new ingredient is the introduction of a new (Cramer-Wold) metric in the space of densities, which replaces the Wasserstein metric used in SWAE.
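Roughly, the construction is sliced: the squared distance between densities averages, over unit directions $v$, the $L_2$ distance between their Gaussian-smoothed one-dimensional projections (a schematic sketch, not the paper's exact notation):
\[
d_{cw}^2(\mu,\nu) \;=\; \int_{S^{D-1}} \big\| \mathrm{sm}_\gamma(v^{T}\mu) - \mathrm{sm}_\gamma(v^{T}\nu) \big\|_{L_2}^{2}\, d\sigma_D(v),
\]
where $v^{T}\mu$ denotes the projection of $\mu$ onto the direction $v$ and $\mathrm{sm}_\gamma$ denotes smoothing by a Gaussian kernel of variance $\gamma$.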
The R Package CEC performs clustering based on the cross-entropy clustering (CEC) method, which was recently developed with the use of information theory.