Search Results for author: Emanuele Sansone

Found 10 papers, 3 papers with code

A Bayesian Unification of Self-Supervised Clustering and Energy-Based Models

no code implementations · 30 Dec 2023 · Emanuele Sansone, Robin Manhaeve

Self-supervised learning is a popular and powerful method for utilizing large amounts of unlabeled data, for which a wide variety of training objectives have been proposed in the literature.

Clustering, Out-of-Distribution Detection, +1

The Triad of Failure Modes and a Possible Way Out

no code implementations · 27 Sep 2023 · Emanuele Sansone

We present a novel objective function for cluster-based self-supervised learning (SSL) that is designed to circumvent the triad of failure modes, namely representation collapse, cluster collapse, and the problem of invariance to permutations of cluster assignments.

Self-Supervised Learning
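
A minimal sketch of what a cluster-based SSL objective addressing these failure modes can look like: a view-consistency term plus entropy terms over cluster assignments. The specific weighting and terms here are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def cluster_ssl_loss(logits_a, logits_b, eps=1e-8):
    """Generic cluster-based SSL loss (a sketch, not the paper's exact
    objective). logits_a, logits_b: (batch, n_clusters) cluster logits
    for two augmented views of the same images."""
    p_a = logits_a.softmax(dim=1)
    p_b = logits_b.softmax(dim=1)

    # View consistency: both views should get the same cluster assignment.
    consistency = F.kl_div(p_a.clamp_min(eps).log(), p_b, reduction="batchmean")

    # Marginal entropy: keep the average assignment spread over clusters,
    # penalizing cluster collapse (everything in one cluster).
    marginal = p_a.mean(dim=0)
    marginal_entropy = -(marginal * marginal.clamp_min(eps).log()).sum()

    # Per-sample confidence: sharp assignments discourage the uniform,
    # representation-collapsed solution.
    sample_entropy = -(p_a * p_a.clamp_min(eps).log()).sum(dim=1).mean()

    return consistency - marginal_entropy + sample_entropy
```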

Learning Symbolic Representations Through Joint GEnerative and DIscriminative Training

no code implementations · 22 Apr 2023 · Emanuele Sansone, Robin Manhaeve

We introduce GEDI, a Bayesian framework that combines existing self-supervised learning objectives with likelihood-based generative models.

Clustering, Self-Supervised Learning
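
A sketch of the joint generative + discriminative idea: a shared encoder feeds both an SSL head and a decoder, and the two losses are summed. The module names, the reconstruction term standing in for the likelihood, and the weighting are illustrative assumptions, not the paper's exact architecture.

```python
import torch.nn as nn

class JointModel(nn.Module):
    """Shared encoder trained with an SSL objective (discriminative)
    and a reconstruction objective (generative) at the same time."""

    def __init__(self, encoder, ssl_head, decoder, ssl_loss_fn, alpha=1.0):
        super().__init__()
        self.encoder, self.ssl_head, self.decoder = encoder, ssl_head, decoder
        self.ssl_loss_fn, self.alpha = ssl_loss_fn, alpha

    def loss(self, view_a, view_b):
        z_a, z_b = self.encoder(view_a), self.encoder(view_b)
        # Discriminative term: any self-supervised objective on two views.
        ssl = self.ssl_loss_fn(self.ssl_head(z_a), self.ssl_head(z_b))
        # Generative term: reconstruction error stands in for the
        # negative log-likelihood of a likelihood-based model.
        nll = ((self.decoder(z_a) - view_a) ** 2).mean()
        return ssl + self.alpha * nll
```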

GEDI: GEnerative and DIscriminative Training for Self-Supervised Learning

no code implementations · 27 Dec 2022 · Emanuele Sansone, Robin Manhaeve

Our analysis suggests a simple method for integrating self-supervised learning with generative models, allowing for the joint training of these two seemingly distinct approaches.

Clustering, Self-Supervised Learning

VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming

1 code implementation · 7 Feb 2022 · Eleonora Misino, Giuseppe Marra, Emanuele Sansone

To the best of our knowledge, this work is the first to propose a general-purpose end-to-end framework integrating probabilistic logic programming into a deep generative model.

Logical Reasoning
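
A toy illustration of the core mechanism: the probability of a symbolic query is obtained by marginalizing over the latent worlds that satisfy it. VAEL proper evaluates the query with a probabilistic logic program; this sketch hard-codes a digit-addition query and enumerates the worlds directly, so the task and shapes are assumptions.

```python
import torch

def sum_label_prob(logits_d1, logits_d2, label):
    """p(digit1 + digit2 == label | x), computed by summing the
    probabilities of all (digit1, digit2) worlds satisfying the query."""
    p1 = logits_d1.softmax(dim=-1)             # (batch, 10)
    p2 = logits_d2.softmax(dim=-1)             # (batch, 10)
    joint = p1.unsqueeze(2) * p2.unsqueeze(1)  # (batch, 10, 10) world probs
    d1 = torch.arange(10).view(10, 1)
    d2 = torch.arange(10).view(1, 10)
    mask = (d1 + d2 == label.view(-1, 1, 1))   # worlds satisfying the query
    return (joint * mask).sum(dim=(1, 2))
```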

LSB: Local Self-Balancing MCMC in Discrete Spaces

1 code implementation · NeurIPS 2021 · Emanuele Sansone

We present the Local Self-Balancing sampler (LSB), a local Markov Chain Monte Carlo (MCMC) method for sampling in purely discrete domains that autonomously adapts to the target distribution and reduces the number of target evaluations required to converge.
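
A sketch of the locally balanced proposal that LSB builds on: each single-bit flip is weighted by a balancing function g of the probability ratio, followed by a Metropolis-Hastings correction. Here g is fixed to the square root; LSB's contribution is to adapt g to the target, which this sketch does not do.

```python
import numpy as np

def lb_step(x, log_p, rng, g=np.sqrt):
    """One locally balanced MCMC step on a binary vector x (ints in {0,1}).
    log_p: callable returning the unnormalized log-probability of a state."""
    d = len(x)
    logp_x = log_p(x)
    # Weight each single-bit flip by g(p(x') / p(x)).
    ratios = np.empty(d)
    for i in range(d):
        y = x.copy(); y[i] ^= 1
        ratios[i] = np.exp(log_p(y) - logp_x)
    w = g(ratios)
    probs = w / w.sum()
    i = rng.choice(d, p=probs)
    y = x.copy(); y[i] ^= 1
    # Reverse-move weights for the Metropolis-Hastings correction.
    logp_y = log_p(y)
    ratios_rev = np.empty(d)
    for j in range(d):
        z = y.copy(); z[j] ^= 1
        ratios_rev[j] = np.exp(log_p(z) - logp_y)
    w_rev = g(ratios_rev)
    accept = min(1.0, np.exp(logp_y - logp_x) * (w_rev[i] / w_rev.sum()) / probs[i])
    return y if rng.random() < accept else x
```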

Leveraging Hidden Structure in Self-Supervised Learning

no code implementations · 30 Jun 2021 · Emanuele Sansone

This work considers the problem of learning structured representations from raw images using self-supervised learning.

Self-Supervised Learning

Coulomb Autoencoders

no code implementations · 10 Feb 2018 · Emanuele Sansone, Hafiz Tiomoko Ali, Sun Jiacheng

Learning the true density in high-dimensional feature spaces is a well-known problem in machine learning.
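A sketch of the kind of Coulomb-kernel discrepancy such a model can use to match the data distribution and the model distribution: an MMD estimate with a Plummer/Coulomb kernel. The exponent, the epsilon smoothing, and the use of MMD itself are illustrative assumptions, and the paper's exact construction may differ.

```python
import torch

def coulomb_mmd(x, y, eps=1e-3):
    """MMD estimate between samples x and y (each (n, d) float tensors)
    under a Coulomb (Plummer) kernel."""
    d = x.shape[1]
    m = max(d - 2, 1)  # Coulomb exponent in d dimensions

    def k(a, b):
        dist2 = torch.cdist(a, b).pow(2)
        return (dist2 + eps ** 2).pow(-m / 2)

    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```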

Training Feedforward Neural Networks with Standard Logistic Activations is Feasible

no code implementations · 3 Oct 2017 · Emanuele Sansone, Francesco G. B. De Natale

Training feedforward neural networks with standard logistic activations is considered difficult because of the intrinsic properties of these sigmoidal functions.
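For concreteness, a minimal sigmoid-activated MLP. Xavier initialization is one standard way to keep the logistic units out of their saturated regions at the start of training; the paper's actual training recipe may differ.

```python
import torch.nn as nn

def logistic_mlp(sizes):
    """MLP with standard logistic (sigmoid) activations, e.g.
    logistic_mlp([784, 256, 10]). No activation on the output layer."""
    layers = []
    for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
        lin = nn.Linear(fan_in, fan_out)
        nn.init.xavier_uniform_(lin.weight, gain=nn.init.calculate_gain("sigmoid"))
        nn.init.zeros_(lin.bias)
        layers += [lin, nn.Sigmoid()]
    return nn.Sequential(*layers[:-1])
```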

Efficient Training for Positive Unlabeled Learning

1 code implementation · 24 Aug 2016 · Emanuele Sansone, Francesco G. B. De Natale, Zhi-Hua Zhou

Positive unlabeled (PU) learning is useful in various practical situations, where there is a need to learn a classifier for a class of interest from an unlabeled data set, which may contain anomalies as well as samples from unknown classes.

Learning Theory
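
To illustrate the PU setting, a generic unbiased PU risk estimator in the style of du Plessis et al.: the negative risk on unlabeled data is corrected using the class prior. This is a standard baseline formulation, not necessarily the algorithm proposed in the paper above, and the prior is assumed known or separately estimated.

```python
import torch

def pu_risk(scores_p, scores_u, prior, loss=lambda z: torch.sigmoid(-z)):
    """Unbiased PU risk: pi * R_p^+ + (R_u^- - pi * R_p^-).

    scores_p: classifier scores on positive samples
    scores_u: classifier scores on unlabeled samples
    prior:    class prior pi = p(y = +1)
    """
    r_p_pos = loss(scores_p).mean()   # positives scored as positive
    r_p_neg = loss(-scores_p).mean()  # positives scored as negative
    r_u_neg = loss(-scores_u).mean()  # unlabeled scored as negative
    return prior * r_p_pos + r_u_neg - prior * r_p_neg
```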
