Search Results for author: Leonardo Rey Vega

Found 13 papers, 2 papers with code

Invariant Representations in Deep Learning for Optoacoustic Imaging

no code implementations • 29 Apr 2023 • Matias Vera, Martin G. Gonzalez, Leonardo Rey Vega

Image reconstruction in optoacoustic tomography (OAT) is a trending learning task that depends heavily on the physical magnitudes measured at sensing time.

Image Reconstruction • Out-of-Distribution Generalization

An Asymptotically Equivalent GLRT Test for Distributed Detection in Wireless Sensor Networks

no code implementations • 28 Apr 2023 • Juan Augusto Maya, Leonardo Rey Vega, Andrea M. Tonello

Nevertheless, its asymptotic performance is proved to be identical to that of the original GLRT, showing that the statistical dependence of the measurements has no impact on detection performance in the asymptotic scenario.
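
For reference, the baseline here is the classical GLRT, whose statistic takes the standard textbook form (this is the generic definition, not the paper's distributed construction):

$$T_{\mathrm{GLRT}}(\mathbf{x}) = 2\ln\frac{\sup_{\theta\in\Theta_1} p(\mathbf{x};\theta)}{\sup_{\theta\in\Theta_0} p(\mathbf{x};\theta)},$$

which, under standard regularity conditions and $H_0$, is asymptotically $\chi^2$-distributed by Wilks' theorem; "asymptotically equivalent" means the proposed test matches this behaviour as the number of measurements grows.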

Cross-domain Sentiment Classification in Spanish

no code implementations • 15 Mar 2023 • Lautaro Estienne, Matias Vera, Leonardo Rey Vega

In this work, we study the ability of a classification system trained on a large database of product reviews to generalize to different Spanish-language domains.

Classification • Sentiment Analysis • +1

An Exponentially-Tight Approximate Factorization of the Joint PDF of Statistical Dependent Measurements in Wireless Sensor Networks

no code implementations • 9 Nov 2022 • Juan Augusto Maya, Leonardo Rey Vega, Andrea M. Tonello

When the source is present, the computation of the joint PDF of the energy measurements at the nodes is a challenging problem.
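
The "approximate factorization" in the title can be read schematically as treating the per-node energy measurements as if they were independent (the correction terms and the exponential tightness guarantee are the paper's contribution and are not reproduced here; the notation below is assumed for illustration):

$$p(y_1, \dots, y_N \mid H_1) \approx \prod_{k=1}^{N} p(y_k \mid H_1),$$

where $y_k$ denotes the energy measured at node $k$.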

Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning

1 code implementation • 30 Mar 2022 • Georg Pichler, Marco Romanelli, Leonardo Rey Vega, Pablo Piantanida

Federated Learning is expected to provide strong privacy guarantees, as only gradients or model parameters, and never plain-text training data, are exchanged between the clients or between the clients and the central server.

Federated Learning • Inference Attack • +1
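
For contrast with the paper's perfectly accurate server-side attack, below is a minimal sketch of the classic loss-threshold membership-inference baseline; the numbers and the threshold are illustrative assumptions, and this is not the attack proposed in the paper:

    import numpy as np

    def loss_threshold_attack(losses, threshold):
        # Classic baseline: flag an example as a training member when
        # its loss is below a threshold (members tend to have lower loss).
        # Illustrative only; the paper's attack instead exploits a
        # dishonest central server to achieve perfect accuracy.
        return np.asarray(losses) < threshold  # True = predicted member

    member_losses = [0.05, 0.10, 0.02]      # hypothetical training examples
    non_member_losses = [0.90, 1.30, 0.75]  # hypothetical held-out examples
    print(loss_threshold_attack(member_losses + non_member_losses, 0.5))
    # -> [ True  True  True False False False]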

PACMAN: PAC-style bounds accounting for the Mismatch between Accuracy and Negative log-loss

no code implementations • 10 Dec 2021 • Matias Vera, Leonardo Rey Vega, Pablo Piantanida

In this work, we introduce an analysis based on a point-wise PAC approach to the generalization gap, accounting for the mismatch between testing on the accuracy metric and training on the negative log-loss.
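
The accuracy/log-loss mismatch the bound accounts for is easy to see on a toy example; the sketch below is purely illustrative and does not reproduce the paper's bound:

    import numpy as np

    def neg_log_loss(probs, labels):
        # Mean negative log-likelihood of the true class.
        return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

    labels = np.array([0, 0, 0, 0])
    confident = np.array([[0.99, 0.01]] * 4)  # confident predictor
    hedged = np.array([[0.51, 0.49]] * 4)     # same decisions, barely confident

    # Both predictors reach accuracy 1.0 on these labels, yet their
    # negative log-losses differ by a factor of about 70: training on
    # the log-loss and testing on accuracy measure different things.
    for name, p in [("confident", confident), ("hedged", hedged)]:
        acc = np.mean(p.argmax(axis=1) == labels)
        print(name, "accuracy:", acc, "NLL:", round(neg_log_loss(p, labels), 3))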

The Role of Mutual Information in Variational Classifiers

no code implementations • 22 Oct 2020 • Matias Vera, Leonardo Rey Vega, Pablo Piantanida

In practice, this behaviour is controlled by various (sometimes heuristic) regularization techniques, which are motivated by upper bounds on the generalization error.

Variational Inference

On fully-distributed composite tests with general parametric data distributions in sensor networks

no code implementations • 5 Mar 2020 • Juan Maya, Leonardo Rey Vega

Interestingly, although the L-MP is simpler and more efficient to implement than the GLR test, we obtain conditions under which the L-MP has superior asymptotic performance.

Understanding the Behaviour of the Empirical Cross-Entropy Beyond the Training Distribution

no code implementations • 28 May 2019 • Matias Vera, Pablo Piantanida, Leonardo Rey Vega

Our main result is that the testing gap between the empirical cross-entropy and its statistical expectation (measured with respect to the testing probability law) can be bounded with high probability by the mutual information between the input testing samples and the corresponding representations generated by the encoder obtained at training time.

Learning Theory
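
Bounds of this flavour typically take the following schematic shape (the exact constants and assumptions are the paper's contribution; this only sketches the qualitative form, assuming a suitably bounded loss): with probability at least $1-\delta$ over the $n$ testing samples,

$$\bigl|\widehat{\mathrm{CE}}_n - \mathbb{E}[\mathrm{CE}]\bigr| \lesssim \sqrt{\frac{I(X;U) + \log(1/\delta)}{n}},$$

where $U$ is the representation produced by the trained encoder and $I(X;U)$ is the mutual information referred to in the abstract.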

The Role of Information Complexity and Randomization in Representation Learning

no code implementations • 14 Feb 2018 • Matías Vera, Pablo Piantanida, Leonardo Rey Vega

This paper presents a sample-dependent bound on the generalization gap of the cross-entropy loss that scales with the information complexity (IC) of the representations, namely the mutual information between the inputs and their representations.

Representation Learning

Compression-Based Regularization with an Application to Multi-Task Learning

no code implementations • 19 Nov 2017 • Matías Vera, Leonardo Rey Vega, Pablo Piantanida

This paper investigates, on information-theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from the data, i.e., using fewer bits than needed to fully describe the data itself, in order to build meaningful representations of the relevant content (multiple labels).

Multi-Task Learning • Text Categorization
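
One common way to instantiate the "fewer bits than needed" principle is a rate-penalized training objective; the sketch below is a generic variational-IB-style loss in PyTorch (the function name and the beta trade-off are illustrative assumptions, not the paper's exact formulation):

    import torch
    import torch.nn.functional as F

    def ib_style_loss(logits, labels, z_mu, z_logvar, beta=1e-3):
        # Relevance term: predict the labels from the representation.
        ce = F.cross_entropy(logits, labels)
        # Rate term: KL divergence from the Gaussian encoder
        # N(z_mu, exp(z_logvar)) to a standard normal prior, which
        # penalizes the number of bits carried by the representation.
        kl = -0.5 * torch.mean(
            torch.sum(1 + z_logvar - z_mu.pow(2) - z_logvar.exp(), dim=1))
        # beta trades prediction quality against compression.
        return ce + beta * kl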

Collaborative Information Bottleneck

no code implementations • 5 Apr 2016 • Matías Vera, Leonardo Rey Vega, Pablo Piantanida

On the other hand, in CDIB there are two cooperating encoders that separately observe $X_1$ and $X_2$, and a third node that can listen to the exchanges between the two encoders in order to obtain information about a hidden variable $Y$.
