no code implementations • 30 Dec 2023 • Emanuele Sansone, Robin Manhaeve
Self-supervised learning is a popular and powerful method for utilizing large amounts of unlabeled data, for which a wide variety of training objectives have been proposed in the literature.
no code implementations • 27 Sep 2023 • Emanuele Sansone
We present a novel objective function for cluster-based self-supervised learning (SSL) that is designed to circumvent a triad of failure modes: representation collapse, cluster collapse, and invariance to permutations of cluster assignments.
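The two collapse modes mentioned above can be made concrete with a small diagnostic. The sketch below (a generic illustration, not the paper's objective) computes two entropies from a soft-assignment matrix: the entropy of the marginal cluster usage, which is near zero under cluster collapse and maximal when clusters are used evenly, and the mean per-sample assignment entropy, which is low when assignments are confident.

```python
import numpy as np

def cluster_entropies(assignments):
    """Diagnostic for soft cluster assignments P of shape (N, K).

    Returns (marginal entropy, mean conditional entropy):
    - high marginal entropy  -> cluster usage spread out (no cluster collapse)
    - low conditional entropy -> confident per-sample assignments

    A generic diagnostic, not the objective function of the paper above.
    """
    eps = 1e-12
    marginal = assignments.mean(axis=0)                      # shape (K,)
    h_marginal = -(marginal * np.log(marginal + eps)).sum()
    h_cond = -(assignments * np.log(assignments + eps)).sum(axis=1).mean()
    return h_marginal, h_cond
```

For example, assignments that all concentrate on one cluster give near-zero marginal entropy, while confident one-hot assignments spread evenly over K clusters give marginal entropy near log K with conditional entropy near zero.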
no code implementations • 22 Apr 2023 • Emanuele Sansone, Robin Manhaeve
We introduce GEDI, a Bayesian framework that combines existing self-supervised learning objectives with likelihood-based generative models.
no code implementations • 27 Dec 2022 • Emanuele Sansone, Robin Manhaeve
Our analysis suggests a simple method for integrating self-supervised learning with generative models, allowing for the joint training of these two seemingly distinct approaches.
1 code implementation • 7 Feb 2022 • Eleonora Misino, Giuseppe Marra, Emanuele Sansone
To the best of our knowledge, this work is the first to propose a general-purpose end-to-end framework integrating probabilistic logic programming into a deep generative model.
1 code implementation • NeurIPS 2021 • Emanuele Sansone
We present the Local Self-Balancing sampler (LSB), a local Markov Chain Monte Carlo (MCMC) method for sampling in purely discrete domains, which adapts autonomously to the target distribution and reduces the number of target evaluations required to converge.
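To make the setting concrete, here is a minimal locally balanced Metropolis-Hastings sampler on binary vectors, using the standard square-root balancing function to weight single-bit flips. This is a generic sketch of locally balanced discrete MCMC under assumed conventions, not the LSB algorithm itself (LSB additionally learns the balancing function online).

```python
import numpy as np

def locally_balanced_sampler(log_prob, x0, n_steps, rng):
    """MCMC over binary vectors with a locally balanced single-flip
    proposal q(y|x) proportional to sqrt(p(y)/p(x)), corrected by a
    Metropolis-Hastings accept/reject step. Generic illustration only."""
    x = x0.copy()
    d = x.size
    samples = []

    def flip_dist(state, logp_state):
        # Log-probability of every single-bit flip, then the normalized
        # proposal using the sqrt balancing function g(t) = sqrt(t).
        flip_logp = np.empty(d)
        for i in range(d):
            y = state.copy()
            y[i] ^= 1
            flip_logp[i] = log_prob(y)
        w = 0.5 * (flip_logp - logp_state)       # log sqrt(p(y)/p(x))
        probs = np.exp(w - w.max())
        probs /= probs.sum()
        return probs, flip_logp

    for _ in range(n_steps):
        logp_x = log_prob(x)
        probs, flip_logp = flip_dist(x, logp_x)
        i = rng.choice(d, p=probs)
        y = x.copy()
        y[i] ^= 1
        logp_y = flip_logp[i]
        # Reverse-proposal probability of flipping bit i back.
        rev_probs, _ = flip_dist(y, logp_y)
        log_accept = (logp_y - logp_x) + np.log(rev_probs[i]) - np.log(probs[i])
        if np.log(rng.random()) < log_accept:
            x = y
        samples.append(x.copy())
    return np.array(samples)
```

On a factorized target p(x) ∝ exp(θ·x), a coordinate with large positive θ should be on far more often than one with large negative θ.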
no code implementations • 30 Jun 2021 • Emanuele Sansone
This work considers the problem of learning structured representations from raw images using self-supervised learning.
no code implementations • 10 Feb 2018 • Emanuele Sansone, Hafiz Tiomoko Ali, Sun Jiacheng
Learning the true density in high-dimensional feature spaces is a well-known problem in machine learning.
no code implementations • 3 Oct 2017 • Emanuele Sansone, Francesco G. B. De Natale
Training feedforward neural networks with standard logistic activations is considered difficult because of the intrinsic properties of these sigmoidal functions.
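The difficulty alluded to here is saturation: the logistic function's derivative peaks at 0.25 and decays exponentially away from zero, so gradients shrink by at least a factor of four per sigmoidal layer during backpropagation. A minimal demonstration:

```python
import math

def logistic(x):
    """Standard logistic sigmoid, sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def logistic_grad(x):
    """Derivative sigma'(x) = sigma(x) * (1 - sigma(x)).
    Maximum value is 0.25, attained at x = 0; it vanishes
    exponentially fast as |x| grows (saturation)."""
    s = logistic(x)
    return s * (1.0 - s)
```

Since every sigmoidal layer multiplies the backpropagated gradient by at most 0.25, a deep stack of logistic units attenuates error signals geometrically, which is one intrinsic property that makes standard logistic activations hard to train.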
1 code implementation • 24 Aug 2016 • Emanuele Sansone, Francesco G. B. De Natale, Zhi-Hua Zhou
Positive-unlabeled (PU) learning is useful in various practical situations where a classifier for a class of interest must be learned from an unlabeled data set, which may contain anomalies as well as samples from unknown classes.
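As background on the PU setting, the sketch below implements a non-negative PU risk estimator in the style of Kiryo et al., a standard baseline shown here for illustration only (it is not the method of the paper above). Given classifier scores on positive and unlabeled samples and the class prior π_p, the negative-class risk is estimated from the unlabeled data with the positive contribution subtracted out, clamped at zero to avoid overfitting to a negative estimate.

```python
import numpy as np

def sigmoid_loss(z):
    """Surrogate loss l(z) = 1 / (1 + exp(z)); satisfies l(z) + l(-z) = 1."""
    return 1.0 / (1.0 + np.exp(z))

def nn_pu_risk(scores_p, scores_u, pi_p):
    """Non-negative PU risk estimate (generic baseline, not the paper's method).

    scores_p: classifier scores on labeled positive samples
    scores_u: classifier scores on unlabeled samples
    pi_p:     class prior P(y = +1), assumed known or estimated
    """
    risk_p_pos = sigmoid_loss(scores_p).mean()    # positives predicted +1
    risk_p_neg = sigmoid_loss(-scores_p).mean()   # positives predicted -1
    risk_u_neg = sigmoid_loss(-scores_u).mean()   # unlabeled predicted -1
    # Negative-class risk via the unlabeled data, clamped at zero.
    neg_part = risk_u_neg - pi_p * risk_p_neg
    return pi_p * risk_p_pos + max(0.0, neg_part)
```

A classifier that scores positives high and unlabeled negatives low yields a risk close to zero, while the clamp keeps the estimate non-negative even when the empirical negative part would go below zero.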