Search Results for author: Matías Vera

Found 4 papers, 0 papers with code

Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models

no code implementations • 13 Jul 2023 • Lautaro Estienne, Luciana Ferrer, Matías Vera, Pablo Piantanida

These models are usually trained with a very large amount of unsupervised text data and adapted to perform a downstream natural language task using methods like fine-tuning, calibration or in-context learning.

In-Context Learning, Text Classification, +1
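The prior-adaptation idea behind this paper can be sketched in a few lines: reweight the model's class posteriors by the ratio between an estimated target-domain prior and the prior implicit in the raw scores, then renormalize. The sketch below is illustrative only; the function names are hypothetical, and the paper's exact procedure may differ, including the unsupervised prior estimate, which here uses a simple EM-style fixed-point loop.

```python
import numpy as np

def adapt_priors(log_probs, new_prior, old_prior):
    """Reweight class log-posteriors by a prior ratio, then renormalize.

    log_probs: (n_samples, n_classes) raw class log-probabilities.
    new_prior: (n_classes,) estimated prior for the target data.
    old_prior: (n_classes,) prior implicit in the raw scores.
    """
    adjusted = log_probs + np.log(new_prior) - np.log(old_prior)
    adjusted -= adjusted.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(adjusted)
    return probs / probs.sum(axis=1, keepdims=True)

def estimate_prior_unsupervised(log_probs, n_iters=50):
    """Estimate the target prior without labels via a fixed-point
    (EM-style) loop: re-calibrate, then average the posteriors.
    Assumes the raw scores were produced under a uniform prior."""
    n_classes = log_probs.shape[1]
    old_prior = np.full(n_classes, 1.0 / n_classes)
    prior = old_prior.copy()
    for _ in range(n_iters):
        prior = adapt_priors(log_probs, prior, old_prior).mean(axis=0)
    return prior
```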

The Role of Information Complexity and Randomization in Representation Learning

no code implementations • 14 Feb 2018 • Matías Vera, Pablo Piantanida, Leonardo Rey Vega

This paper presents a sample-dependent bound on the generalization gap of the cross-entropy loss that scales with the information complexity (IC) of the representations, defined as the mutual information between the inputs and their representations.

Representation Learning
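As a rough illustration of the shape such results usually take (not the paper's exact statement, and with constants suppressed), information-complexity bounds on the generalization gap often look like:

```latex
% Schematic information-complexity bound (illustrative; the paper's
% exact statement, constants, and assumptions differ).
\[
  \underbrace{\mathbb{E}[\ell(T,Y)] - \widehat{\mathbb{E}}_n[\ell(T,Y)]}_{\text{generalization gap}}
  \;\lesssim\;
  \sqrt{\frac{I(X;T) + \log(1/\delta)}{n}}
\]
% T: (possibly randomized) representation of the input X,
% I(X;T): information complexity, n: sample size,
% the bound holding with probability at least 1 - \delta.
```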

Compression-Based Regularization with an Application to Multi-Task Learning

no code implementations • 19 Nov 2017 • Matías Vera, Leonardo Rey Vega, Pablo Piantanida

This paper investigates, from information-theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from the data, i.e., using fewer bits than needed to fully describe the data itself, in order to build meaningful representations of the relevant content (multiple labels).

Multi-Task Learning, Text Categorization
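The compression principle described in the snippet is commonly formalized as an information-bottleneck-style trade-off; the following is a schematic objective in that spirit, not necessarily the paper's exact formulation:

```latex
% Information-bottleneck-style objective (schematic): compress X into a
% representation T while preserving information about the labels Y.
\[
  \min_{p(t \mid x)} \; \underbrace{I(X;T)}_{\text{compression}} \;-\; \beta \, \underbrace{I(T;Y)}_{\text{relevance}}
\]
% beta > 0 trades off how few bits T uses to describe X against how
% much T retains about the relevant content Y (multiple labels in the
% multi-task setting).
```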

Collaborative Information Bottleneck

no code implementations • 5 Apr 2016 • Matías Vera, Leonardo Rey Vega, Pablo Piantanida

In the CDIB setting, there are two cooperating encoders which separately observe $X_1$ and $X_2$, and a third node which can listen to the exchanges between the two encoders in order to obtain information about a hidden variable $Y$.
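A schematic way to write the cooperative setup described in this snippet (notation and constraints are illustrative assumptions, not taken from the paper):

```latex
% Two encoders observe X_1 and X_2 and exchange limited-rate messages
% M_1, M_2; a third node overhears the exchange and infers Y (schematic).
\[
  \max_{f_1, f_2} \; I(Y; M_1, M_2)
  \quad \text{s.t.} \quad
  M_1 = f_1(X_1),\; M_2 = f_2(X_2, M_1),\;
  H(M_1) \le R_1,\; H(M_2) \le R_2
\]
% R_1, R_2: rate constraints on the exchanged descriptions; the third
% node forms its estimate of the hidden variable Y from (M_1, M_2) alone.
```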
