no code implementations • 29 Apr 2023 • Matias Vera, Martin G. Gonzalez, Leonardo Rey Vega
Image reconstruction in optoacoustic tomography (OAT) is a trending learning task that depends strongly on the physical magnitudes measured at sensing time.
no code implementations • 28 Apr 2023 • Juan Augusto Maya, Leonardo Rey Vega, Andrea M. Tonello
Nevertheless, its asymptotic performance is proved to be identical to that of the original GLRT, showing that the statistical dependence of the measurements has no impact on the detection performance in the asymptotic scenario.
no code implementations • 15 Mar 2023 • Lautaro Estienne, Matias Vera, Leonardo Rey Vega
In this work we perform a study on the ability of a classification system trained with a large database of product reviews to generalize to different Spanish domains.
no code implementations • 9 Nov 2022 • Juan Augusto Maya, Leonardo Rey Vega, Andrea M. Tonello
When the source is present, the computation of the joint PDF of the energy measurements at the nodes is a challenging problem.
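The energy measurements at the nodes can be illustrated with a toy single-sensor energy detector. This is a minimal sketch, not the authors' distributed scheme: the noise and source variances, sample size, and variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_statistic(x, noise_var=1.0):
    """Normalized energy statistic: under the noise-only hypothesis H0
    it follows a chi-square distribution with len(x) degrees of freedom."""
    return np.sum(x**2) / noise_var

n = 400
h0_sample = rng.normal(0.0, 1.0, size=n)  # H0: noise only (unit variance)
h1_sample = rng.normal(0.0, 2.0, size=n)  # H1: source present (assumed variance 4)

t0 = energy_statistic(h0_sample)  # concentrates near n = 400
t1 = energy_statistic(h1_sample)  # concentrates near 4 * n = 1600
```

A threshold placed between the two concentration points separates the hypotheses; the difficulty the paper addresses is characterizing the joint law of such statistics across dependent nodes, which this per-node sketch sidesteps.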
1 code implementation • 14 Oct 2022 • Martin G. Gonzalez, Matias Vera, Leonardo Rey Vega
In this paper we consider the problem of image reconstruction in optoacoustic tomography.
1 code implementation • 30 Mar 2022 • Georg Pichler, Marco Romanelli, Leonardo Rey Vega, Pablo Piantanida
Federated Learning is expected to provide strong privacy guarantees, as only gradients or model parameters, but no plain-text training data, are ever exchanged, either among the clients or between the clients and the central server.
no code implementations • 10 Dec 2021 • Matias Vera, Leonardo Rey Vega, Pablo Piantanida
In this work, we introduce an analysis of the generalization gap based on a point-wise PAC approach, accounting for the mismatch between testing with the accuracy metric and training with the negative log-loss.
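The mismatch between the two objectives can be made concrete with a toy comparison; this is a minimal sketch with illustrative numbers, not the paper's analysis.

```python
import numpy as np

def nll(probs, labels):
    """Average negative log-loss (cross-entropy) of predicted class probabilities."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def accuracy(probs, labels):
    """0/1 accuracy of the arg-max decision rule."""
    return np.mean(np.argmax(probs, axis=1) == labels)

labels = np.array([0, 0, 1])
# Two classifiers with identical arg-max decisions but very different confidence:
confident = np.array([[0.99, 0.01], [0.99, 0.01], [0.01, 0.99]])
hesitant  = np.array([[0.51, 0.49], [0.51, 0.49], [0.49, 0.51]])

acc_c, acc_h = accuracy(confident, labels), accuracy(hesitant, labels)
loss_c, loss_h = nll(confident, labels), nll(hesitant, labels)
# Both reach accuracy 1.0, yet the hesitant classifier has a much larger
# log-loss: minimizing the training loss is not the same objective as
# maximizing the metric used at test time.
```

This gap between the surrogate loss and the evaluation metric is exactly the kind of train/test mismatch the point-wise PAC analysis has to account for.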
no code implementations • 22 Oct 2020 • Matias Vera, Leonardo Rey Vega, Pablo Piantanida
In practice, this behaviour is controlled by various, sometimes heuristic, regularization techniques, which are motivated by the development of upper bounds on the generalization error.
no code implementations • 5 Mar 2020 • Juan Maya, Leonardo Rey Vega
Interestingly, although the L-MP is simpler and more efficient to implement than the GLR test, we obtain conditions under which the L-MP has superior asymptotic performance to the GLR test.
no code implementations • 28 May 2019 • Matias Vera, Pablo Piantanida, Leonardo Rey Vega
Our main result is that the testing gap between the empirical cross-entropy and its statistical expectation (measured with respect to the testing probability law) can be bounded with high probability by the mutual information between the input testing samples and the corresponding representations, generated by the encoder obtained at training time.
no code implementations • 14 Feb 2018 • Matías Vera, Pablo Piantanida, Leonardo Rey Vega
This paper presents a sample-dependent bound on the generalization gap of the cross-entropy loss that scales with the information complexity (IC) of the representations, that is, the mutual information between the inputs and their representations.
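For discrete data, the mutual information between inputs and representations can be estimated with a simple plug-in estimator. The sketch below is only an illustration of the quantity involved, not the paper's method; the toy encoders are assumptions.

```python
import numpy as np
from collections import Counter

def mutual_information(xs, ts):
    """Plug-in estimate of I(X; T) in nats from paired discrete samples."""
    n = len(xs)
    joint = Counter(zip(xs, ts))
    px = Counter(xs)
    pt = Counter(ts)
    mi = 0.0
    for (x, t), c in joint.items():
        # p(x,t) * log( p(x,t) / (p(x) p(t)) ), with empirical probabilities
        mi += (c / n) * np.log(c * n / (px[x] * pt[t]))
    return mi

xs = [0, 1, 2, 3] * 25
mi_identity = mutual_information(xs, xs)         # invertible encoder: I(X;T) = H(X) = log 4
mi_constant = mutual_information(xs, [0] * 100)  # constant encoder: I(X;T) = 0
```

An encoder that copies its input attains the maximal information complexity H(X), while a constant encoder attains zero; representations useful for generalization sit between these extremes.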
no code implementations • 19 Nov 2017 • Matías Vera, Leonardo Rey Vega, Pablo Piantanida
This paper investigates, from information-theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from the data, i.e., using fewer bits than needed to fully describe the data itself, in order to build meaningful representations of relevant content (multiple labels).
no code implementations • 5 Apr 2016 • Matías Vera, Leonardo Rey Vega, Pablo Piantanida
On the other hand, in CDIB there are two cooperating encoders which separately observe $X_1$ and $X_2$, and a third node which can listen to the exchanges between the two encoders in order to obtain information about a hidden variable $Y$.