Search Results for author: Pablo Piantanida

Found 38 papers, 16 papers with code

Learning Disentangled Textual Representations via Statistical Measures of Similarity

no code implementations ACL 2022 Pierre Colombo, Guillaume Staerman, Nathan Noiry, Pablo Piantanida

When working with textual data, a natural application of disentangled representations is fair classification where the goal is to make predictions without being biased (or influenced) by sensitive attributes that may be present in the data (e.g., age, gender or race).

Realistic Evaluation of Transductive Few-Shot Learning

1 code implementation NeurIPS 2021 Olivier Veilleux, Malik Boudiaf, Pablo Piantanida, Ismail Ben Ayed

Transductive inference is widely used in few-shot learning, as it leverages the statistics of the unlabeled query set of a few-shot task, typically yielding substantially better performances than its inductive counterpart.

Few-Shot Learning
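
The transductive setting described in the abstract above lends itself to a compact illustration. Below is a minimal sketch, under my own simplified assumptions (a nearest-prototype classifier whose class centroids are refined with soft assignments of the unlabeled query set); it illustrates the general idea only, not the paper's algorithm or evaluation protocol.

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def transductive_prototypes(support_x, support_y, query_x, n_iter=10, tau=10.0):
        """Refine class prototypes using soft labels of the *unlabeled* query set."""
        classes = np.unique(support_y)
        protos = np.stack([support_x[support_y == c].mean(0) for c in classes])
        for _ in range(n_iter):
            # Soft assignment of query points to current prototypes (negative squared distance).
            d2 = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
            q = softmax(-tau * d2, axis=1)  # shape (n_query, n_classes)
            # Re-estimate prototypes from support (hard labels) and query (soft labels).
            for k, c in enumerate(classes):
                num = support_x[support_y == c].sum(0) + (q[:, k:k + 1] * query_x).sum(0)
                den = (support_y == c).sum() + q[:, k].sum()
                protos[k] = num / den
        return protos, q

An inductive baseline would stop after the first prototype computation; the refinement loop is what exploits the query-set statistics.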

Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning

1 code implementation 30 Mar 2022 Georg Pichler, Marco Romanelli, Leonardo Rey Vega, Pablo Piantanida

Federated Learning is expected to provide strong privacy guarantees, as only gradients or model parameters, but no plain-text training data, are ever exchanged, either between the clients or between the clients and the central server.

Federated Learning Inference Attack +1

Leveraging Adversarial Examples to Quantify Membership Information Leakage

1 code implementation 17 Mar 2022 Ganesh Del Grosso, Hamid Jalalzai, Georg Pichler, Catuscia Palamidessi, Pablo Piantanida

The use of personal data for training machine learning systems comes with a privacy threat and measuring the level of privacy of a model is one of the major challenges in machine learning today.

KNIFE: Kernelized-Neural Differential Entropy Estimation

1 code implementation 14 Feb 2022 Georg Pichler, Pierre Colombo, Malik Boudiaf, Gunther Koliander, Pablo Piantanida

Mutual Information (MI) has been widely used as a loss regularizer for training neural networks.

Domain Adaptation
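
As background for the kind of estimator the title refers to, here is a minimal plug-in differential entropy estimate, $\hat h = -\frac{1}{n}\sum_i \log \hat f(x_i)$, built on a leave-one-out Gaussian kernel density estimate; the fixed bandwidth and the one-dimensional setting are my simplifying assumptions, and this is not the paper's parameterization.

    import numpy as np

    def kde_entropy(x, bandwidth=0.3):
        """Plug-in differential entropy (nats) via a leave-one-out Gaussian KDE, 1-D samples."""
        x = np.asarray(x, dtype=float)
        n = x.shape[0]
        d2 = (x[:, None] - x[None, :]) ** 2
        k = np.exp(-0.5 * d2 / bandwidth ** 2) / (np.sqrt(2 * np.pi) * bandwidth)
        np.fill_diagonal(k, 0.0)  # leave-one-out to reduce bias at the sample points
        f_hat = k.sum(axis=1) / (n - 1)
        return -np.log(f_hat).mean()

    rng = np.random.default_rng(0)
    print(kde_entropy(rng.normal(size=2000)))  # roughly 0.5*log(2*pi*e) ≈ 1.42 for N(0, 1)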

PACMAN: PAC-style bounds accounting for the Mismatch between Accuracy and Negative log-loss

no code implementations 10 Dec 2021 Matias Vera, Leonardo Rey Vega, Pablo Piantanida

In this work, we introduce an analysis of the generalization gap based on a point-wise PAC approach that accounts for the mismatch between testing with the accuracy metric and training with the negative log-loss.
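
Schematically, and in my own notation rather than the paper's: the quantity of interest is a mismatched generalization gap of the form $\mathrm{gap}(w) = \mathbb{E}\,[\ell_{0/1}(w)] - \hat{\mathbb{E}}_n[\ell_{\log}(w)]$, where the left-hand expectation is taken under the test distribution with the 0-1 (accuracy) loss while the empirical term on the right is the training negative log-loss; a PAC-style bound then controls this gap with high probability over the training sample.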

InfoLM: A New Metric to Evaluate Summarization & Data2Text Generation

1 code implementation 2 Dec 2021 Pierre Colombo, Chloe Clavel, Pablo Piantanida

In this paper, we introduce InfoLM, a family of untrained metrics that can be viewed as string-based metrics and that address the aforementioned flaws thanks to a pre-trained masked language model.

Language Modelling Text Generation
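
The general recipe described above can be sketched as follows: mask each position in turn, read the masked language model's predictive distribution, average these into a bag-of-tokens distribution per text, and compare candidate against reference with an information measure. The model name and the use of a KL divergence below are illustrative assumptions, not the paper's exact choices.

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

    @torch.no_grad()
    def bag_of_tokens_distribution(text):
        """Average the masked-LM predictive distributions obtained by masking each position."""
        ids = tok(text, return_tensors="pt")["input_ids"]
        dists = []
        for i in range(1, ids.shape[1] - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[0, i] = tok.mask_token_id
            logits = mlm(input_ids=masked).logits[0, i]
            dists.append(torch.softmax(logits, dim=-1))
        return torch.stack(dists).mean(0)  # distribution over the vocabulary

    def info_metric(candidate, reference, eps=1e-12):
        p = bag_of_tokens_distribution(reference) + eps
        q = bag_of_tokens_distribution(candidate) + eps
        return torch.sum(p * (p / q).log()).item()  # KL(reference || candidate), one possible choice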

Automatic Text Evaluation through the Lens of Wasserstein Barycenters

2 code implementations EMNLP 2021 Pierre Colombo, Guillaume Staerman, Chloe Clavel, Pablo Piantanida

A new metric, \texttt{BaryScore}, to evaluate text generation based on deep contextualized embeddings (e.g., BERT, RoBERTa, ELMo) is introduced.

Image Captioning Machine Translation +3
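
As a hedged illustration of the underlying idea, i.e., comparing two texts through the discrepancy between the distributions of their contextual embeddings, the sketch below computes an entropic-regularized Wasserstein (Sinkhorn) cost between two point clouds with uniform weights. It covers only the transport-cost ingredient, not the paper's barycenter aggregation across BERT layers, and the regularization strength is an assumption.

    import numpy as np

    def sinkhorn_cost(x, y, eps=0.05, n_iter=200):
        """Entropic-regularized OT cost between uniform point clouds x (n, d) and y (m, d)."""
        c = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost matrix
        c = c / (c.max() + 1e-12)  # normalize costs so the Gibbs kernel stays numerically stable
        k = np.exp(-c / eps)
        a = np.full(len(x), 1.0 / len(x))
        b = np.full(len(y), 1.0 / len(y))
        u, v = np.ones_like(a), np.ones_like(b)
        for _ in range(n_iter):  # Sinkhorn fixed-point iterations on the scaling vectors
            u = a / (k @ v)
            v = b / (k.T @ u)
        plan = u[:, None] * k * v[None, :]
        return (plan * c).sum()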

Learning Sparse Privacy-Preserving Representations for Smart Meters Data

no code implementations 17 Jul 2021 Mohammadhadi Shateri, Francisco Messina, Pablo Piantanida, Fabrice Labeau

We formulate this as the problem of learning a sparse representation of SMs data with minimum information leakage and maximum utility.

Fault Detection Load Forecasting

On the impossibility of non-trivial accuracy under fairness constraints

no code implementations 14 Jul 2021 Carlos Pinzón, Catuscia Palamidessi, Pablo Piantanida, Frank Valencia

One of the main concerns about fairness in machine learning (ML) is that, in order to achieve it, one may have to trade off some accuracy.

Fairness

Mutual-Information Based Few-Shot Classification

2 code implementations 23 Jun 2021 Malik Boudiaf, Ziko Imtiaz Masud, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Pablo Piantanida

We motivate our transductive loss by deriving a formal relation between the classification accuracy and mutual-information maximization.

Classification Few-Shot Learning
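
For context, a common empirical surrogate for the mutual information between query features and predicted labels (my paraphrase of the standard construction, not a quote from the paper) is $\hat{\mathcal{I}} = \mathcal{H}(\bar{p}) - \frac{1}{|Q|}\sum_{i \in Q} \mathcal{H}(p_i)$, i.e., the entropy of the average softmax prediction over the query set minus the average per-sample prediction entropy; maximizing it pushes towards confident yet class-balanced assignments of the query samples.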

Adversarial Robustness via Fisher-Rao Regularization

1 code implementation 12 Jun 2021 Marine Picot, Francisco Messina, Malik Boudiaf, Fabrice Labeau, Ismail Ben Ayed, Pablo Piantanida

Adversarial robustness has become a topic of growing interest in machine learning since it was observed that neural networks tend to be brittle.

Adversarial Defense Adversarial Robustness

DOCTOR: A Simple Method for Detecting Misclassification Errors

1 code implementation NeurIPS 2021 Federica Granese, Marco Romanelli, Daniele Gorla, Catuscia Palamidessi, Pablo Piantanida

Deep neural networks (DNNs) have been shown to perform very well on large-scale object recognition problems, leading to widespread use in real-world applications, including situations where DNNs are implemented as "black boxes".

Object Recognition Sentiment Analysis

Bounding Information Leakage in Machine Learning

no code implementations 9 May 2021 Ganesh Del Grosso, Georg Pichler, Catuscia Palamidessi, Pablo Piantanida

Machine Learning services are being deployed in a large range of applications that make it easy for an adversary, using the algorithm and/or the model, to gain access to sensitive data.

Inference Attack Membership Inference Attack

Learning to Disentangle Textual Representations and Attributes via Mutual Information

no code implementations 1 Jan 2021 Pierre Colombo, Chloé Clavel, Pablo Piantanida

Learning disentangled representations of textual data is essential for many natural language tasks such as fair classification (e.g., building classifiers whose decisions cannot disproportionately hurt or benefit specific groups identified by sensitive attributes), style transfer and sentence generation, among others.

Disentanglement Style Transfer

Few-Shot Segmentation Without Meta-Learning: A Good Transductive Inference Is All You Need?

2 code implementations CVPR 2021 Malik Boudiaf, Hoel Kervadec, Ziko Imtiaz Masud, Pablo Piantanida, Ismail Ben Ayed, Jose Dolz

We show that the way inference is performed in few-shot segmentation tasks has a substantial effect on performance, an aspect often overlooked in the literature in favor of the meta-learning paradigm.

Few-Shot Semantic Segmentation

Privacy-Preserving Synthetic Smart Meters Data

no code implementations 6 Dec 2020 Ganesh Del Grosso, Georg Pichler, Pablo Piantanida

However, the use of power consumption data raises significant privacy concerns, as this data usually belongs to clients of a power company.

Deep Directed Information-Based Learning for Privacy-Preserving Smart Meter Data Release

no code implementations 20 Nov 2020 Mohammadhadi Shateri, Francisco Messina, Pablo Piantanida, Fabrice Labeau

In this paper, we study this problem in the context of time series data and smart meters (SMs) power consumption measurements in particular.

Time Series

The Role of Mutual Information in Variational Classifiers

no code implementations 22 Oct 2020 Matias Vera, Leonardo Rey Vega, Pablo Piantanida

In practice, this behaviour is controlled by various, sometimes heuristic, regularization techniques, which are motivated by developing upper bounds on the generalization error.

Variational Inference

On the Impact of Side Information on Smart Meter Privacy-Preserving Methods

no code implementations 29 Jun 2020 Mohammadhadi Shateri, Francisco Messina, Pablo Piantanida, Fabrice Labeau

On the one hand, the releaser in the CAL method, using supervision from the actual values of the private variables and feedback from the adversary's performance, tries to minimize the adversary's log-likelihood.
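
To make that mechanism concrete, here is a minimal, hedged sketch of this style of adversarial training: a releaser is trained to minimize the adversary's log-likelihood of the private variable plus a distortion penalty, while the adversary is trained to maximize that log-likelihood. The architectures, the distortion term and the trade-off weight are illustrative assumptions, not the paper's configuration.

    import torch
    from torch import nn

    releaser = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 48))
    adversary = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 2))  # binary private variable
    opt_r = torch.optim.Adam(releaser.parameters(), lr=1e-3)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    nll = nn.CrossEntropyLoss()
    lam = 1.0  # privacy/utility trade-off weight (assumed)

    def train_step(x, s):  # x: (batch, 48) load profile, s: (batch,) private label
        z = releaser(x)
        # Adversary step: maximize its log-likelihood of the private variable given the release.
        loss_a = nll(adversary(z.detach()), s)
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        # Releaser step: minimize the adversary's log-likelihood (maximize its loss) plus distortion.
        loss_r = -nll(adversary(z), s) + lam * ((z - x) ** 2).mean()
        opt_r.zero_grad(); loss_r.backward(); opt_r.step()
        return loss_a.item(), loss_r.item()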

Privacy-Cost Management in Smart Meters with Mutual Information-Based Reinforcement Learning

no code implementations 10 Jun 2020 Mohammadhadi Shateri, Francisco Messina, Pablo Piantanida, Fabrice Labeau

Unlike previous studies, we model the whole temporal correlation in the data to learn the MI in its general form and use a neural network to estimate the MI-based reward signal to guide the PCMU learning process.

Q-Learning reinforcement-learning

Estimating g-Leakage via Machine Learning

1 code implementation 9 May 2020 Marco Romanelli, Konstantinos Chatzikokolakis, Catuscia Palamidessi, Pablo Piantanida

A feature of our approach is that it does not require estimating the conditional probabilities, and that it is suitable for a large class of ML algorithms.
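
A hedged sketch of the general "leakage from a classifier" recipe this points to: rather than estimating posteriors $p(s \mid o)$, train any off-the-shelf classifier to guess the secret from the observable and use its held-out accuracy as an estimate of the posterior Bayes vulnerability, with the prior vulnerability given by the frequency of the most common secret. The choice of model and the restriction to the 0-1 gain (Bayes leakage) are my assumptions, not the paper's general g-leakage treatment.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def estimate_bayes_leakage(observables, secrets):
        """Estimate multiplicative Bayes leakage log(V_post / V_prior) from integer-coded samples."""
        o_tr, o_te, s_tr, s_te = train_test_split(observables, secrets, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(o_tr, s_tr)
        v_post = (clf.predict(o_te) == s_te).mean()    # classifier accuracy lower-bounds V_post
        v_prior = np.bincount(s_te).max() / len(s_te)  # best blind guess: the most frequent secret
        return np.log(v_post / v_prior)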

A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses

1 code implementation ECCV 2020 Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Eric Granger, Marco Pedersoli, Pablo Piantanida, Ismail Ben Ayed

Second, we show that, more generally, minimizing the cross-entropy is actually equivalent to maximizing the mutual information, to which we connect several well-known pairwise losses.

Ranked #7 on Metric Learning on In-Shop (using extra training data)

Metric Learning
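
One standard way to read that statement (my paraphrase, not the paper's derivation): write $\mathcal{I}(Z; Y) = \mathcal{H}(Y) - \mathcal{H}(Y|Z)$ and note that the cross-entropy of any classifier $q(y|z)$ upper-bounds the conditional entropy, $\mathbb{E}[-\log q(Y|Z)] \ge \mathcal{H}(Y|Z)$; minimizing the cross-entropy therefore tightens an upper bound on $\mathcal{H}(Y|Z)$ and, since the label entropy $\mathcal{H}(Y)$ is fixed by the data, maximizes a lower bound on $\mathcal{I}(Z; Y)$.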

Privacy-Cost Management in Smart Meters Using Deep Reinforcement Learning

no code implementations 10 Mar 2020 Mohammadhadi Shateri, Francisco Messina, Pablo Piantanida, Fabrice Labeau

Smart meters (SMs) play a pivotal role in the smart grid by being able to report the electricity usage of consumers to the utility provider (UP) almost in real time.

Q-Learning reinforcement-learning

On the Estimation of Information Measures of Continuous Distributions

no code implementations 7 Feb 2020 Georg Pichler, Pablo Piantanida, Günther Koliander

In particular, we provide confidence bounds for simple histogram based estimation of differential entropy from a fixed number of samples, assuming that the probability density function is Lipschitz continuous with known Lipschitz constant and known, bounded support.
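
For intuition, the plug-in estimator under study looks roughly like the following one-dimensional version (the bounded support, bin count and test distribution are my assumptions): with bin probabilities $\hat p_k$ and bin width $\Delta$, the histogram density equals $\hat p_k / \Delta$ on bin $k$, giving the estimate $-\sum_k \hat p_k \log(\hat p_k / \Delta)$.

    import numpy as np

    def histogram_differential_entropy(samples, support=(0.0, 1.0), n_bins=32):
        """Plug-in differential entropy (nats) of a histogram density on a known, bounded support."""
        counts, edges = np.histogram(samples, bins=n_bins, range=support)
        width = edges[1] - edges[0]
        p = counts / counts.sum()
        p = p[p > 0]  # empty bins contribute nothing to the plug-in sum
        return -(p * np.log(p / width)).sum()

    rng = np.random.default_rng(1)
    print(histogram_differential_entropy(rng.uniform(size=5000)))  # close to 0 for Uniform(0, 1)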

Real-Time Privacy-Preserving Data Release for Smart Meters

no code implementations 14 Jun 2019 Mohammadhadi Shateri, Francisco Messina, Pablo Piantanida, Fabrice Labeau

In this paper, we focus on real-time privacy threats, i.e., potential attackers that try to infer sensitive information from SMs data in an online fashion.

Time Series

Understanding the Behaviour of the Empirical Cross-Entropy Beyond the Training Distribution

no code implementations 28 May 2019 Matias Vera, Pablo Piantanida, Leonardo Rey Vega

Our main result is that the testing gap between the empirical cross-entropy and its statistical expectation (measured with respect to the testing probability law) can be bounded with high probability by the mutual information between the input testing samples and the corresponding representations, generated by the encoder obtained at training time.

Learning Theory

Learning Anonymized Representations with Adversarial Neural Networks

1 code implementation 26 Feb 2018 Clément Feutry, Pablo Piantanida, Yoshua Bengio, Pierre Duhamel

Statistical methods protecting sensitive information or the identity of the data owner have become critical to ensure privacy of individuals as well as of organizations.

Representation Learning Sentiment Analysis

The Role of Information Complexity and Randomization in Representation Learning

no code implementations 14 Feb 2018 Matías Vera, Pablo Piantanida, Leonardo Rey Vega

This paper presents a sample-dependent bound on the generalization gap of the cross-entropy loss that scales with the information complexity (IC) of the representations, meaning the mutual information between inputs and their representations.

Representation Learning

Compression-Based Regularization with an Application to Multi-Task Learning

no code implementations 19 Nov 2017 Matías Vera, Leonardo Rey Vega, Pablo Piantanida

This paper investigates, from information-theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from data, i.e., using fewer bits than needed to fully describe the data itself, in order to build meaningful representations of the relevant content (multiple labels).

Multi-Task Learning Text Categorization
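
For reference, this is in the spirit of the classical information-bottleneck trade-off, in which a representation $Z$ of the data $X$ is chosen to be compact yet predictive of the relevant content $Y$, e.g., $\min_{p(z|x)} \; I(X; Z) - \beta\, I(Z; Y)$ with $\beta \ge 0$ balancing compression against relevance; this is the textbook formulation, stated here as context rather than as the paper's exact objective.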

The Multi-layer Information Bottleneck Problem

no code implementations 14 Nov 2017 Qianqian Yang, Pablo Piantanida, Deniz Gündüz

Based on information forwarded by the preceding layer, each stage of the network is required to preserve a certain level of relevance with regards to a specific hidden variable, quantified by the mutual information.
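
Schematically, and as my own reading of the sentence above rather than the paper's formal statement: the stages form a Markov chain $Y - X - Z_1 - Z_2 - \cdots - Z_L$ in which each $Z_\ell$ is computed only from the output $Z_{\ell-1}$ of the preceding layer, and each stage is subject to a relevance constraint of the form $I(Z_\ell; Y) \ge R_\ell$ on the hidden variable $Y$.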

Collaborative Information Bottleneck

no code implementations 5 Apr 2016 Matías Vera, Leonardo Rey Vega, Pablo Piantanida

On the other hand, in CDIB there are two cooperating encoders which separately observe $X_1$ and $X_2$ and a third node which can listen to the exchanges between the two encoders in order to obtain information about a hidden variable $Y$.

Distributed Information-Theoretic Clustering

no code implementations 15 Feb 2016 Georg Pichler, Pablo Piantanida, Gerald Matz

We study a novel multi-terminal source coding setup motivated by the biclustering problem.

Two-sample testing
