no code implementations • 4 Jul 2024 • Patrícia Pereira, Helena Moniz, Joao Paulo Carvalho
In our submissions, we model the empathy, emotion polarity and emotion intensity of each utterance in a conversation by feeding the utterance to be classified, together with its conversational context, i.e., a certain number of previous conversational turns, as input to an encoder Pre-trained Language Model, to which we append a regression head for prediction.
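A minimal sketch of the described setup (not the authors' released code), assuming a RoBERTa backbone, simple first-token pooling, and a three-output regression head for empathy, emotion polarity and emotion intensity:

```python
# Sketch: score one target utterance given a window of previous turns.
# Backbone name, separator handling and head size are assumptions.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumed encoder PLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

class UtteranceRegressor(nn.Module):
    def __init__(self, encoder, n_outputs=3):
        super().__init__()
        self.encoder = encoder
        # regression head over the first-token representation:
        # e.g. empathy, emotion polarity, emotion intensity
        self.head = nn.Linear(encoder.config.hidden_size, n_outputs)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.head(hidden[:, 0])  # first-token pooling

def encode_with_context(target, previous_turns, k=3):
    """Concatenate the k previous turns with the utterance to be classified."""
    context = tokenizer.sep_token.join(previous_turns[-k:])
    return tokenizer(context, target, truncation=True,
                     padding=True, return_tensors="pt")

model = UtteranceRegressor(encoder)
batch = encode_with_context("I understand how hard that must be.",
                            ["My dog passed away last week.",
                             "I am so sorry to hear that."])
scores = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 3)
```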
no code implementations • 8 Sep 2023 • Patrícia Pereira, Rui Ribeiro, Helena Moniz, Luisa Coheur, Joao Paulo Carvalho
Fuzzy Fingerprints have been successfully used as an interpretable text classification technique, but, like most other techniques, have been largely surpassed in performance by Large Pre-trained Language Models, such as BERT or RoBERTa.
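For orientation, a minimal sketch of one common fuzzy fingerprint formulation for text classification; the exact membership function and similarity measure used by the authors may differ, and the fingerprint size here is an assumption:

```python
# A class fingerprint is its top-K most frequent tokens, each weighted by a
# rank-based fuzzy membership value; documents are assigned to the class
# whose fingerprint they match best.
from collections import Counter

K = 100  # fingerprint size (assumed)

def build_fingerprint(documents, k=K):
    counts = Counter(tok for doc in documents for tok in doc.lower().split())
    top = [tok for tok, _ in counts.most_common(k)]
    # linear, rank-based membership: 1.0 for rank 0 down to ~0 for rank k-1
    return {tok: 1.0 - rank / k for rank, tok in enumerate(top)}

def similarity(doc, fingerprint):
    # memberships of fingerprint tokens present in the document,
    # normalised by the total fingerprint mass
    tokens = set(doc.lower().split())
    hit = sum(mu for tok, mu in fingerprint.items() if tok in tokens)
    return hit / sum(fingerprint.values())

def classify(doc, fingerprints):
    return max(fingerprints, key=lambda label: similarity(doc, fingerprints[label]))

fingerprints = {
    "joy": build_fingerprint(["what a great day", "so happy today"]),
    "sadness": build_fingerprint(["i feel so down", "such a sad day"]),
}
print(classify("today was a happy great day", fingerprints))
```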
1 code implementation • 17 Apr 2023 • Patrícia Pereira, Helena Moniz, Isabel Dias, Joao Paulo Carvalho
The usual approach to modeling the conversational context has been to produce context-independent representations of each utterance and to subsequently perform contextual modeling of these representations.
Ranked #1 on Emotion Recognition in Conversation on EmoWOZ (Macro F1 metric)
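The excerpt above refers to the usual two-stage ERC pipeline: encode each utterance independently, then run a separate context model over the utterance vectors. A minimal sketch of that pipeline, where the encoder and context-model choices are assumptions for illustration:

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
utterance_encoder = AutoModel.from_pretrained("roberta-base")

def encode_independently(utterances):
    """Context-independent utterance representations (first-token pooling)."""
    batch = tokenizer(utterances, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = utterance_encoder(**batch).last_hidden_state
    return hidden[:, 0]  # (num_utterances, hidden_size)

class ContextModel(nn.Module):
    """Contextual modeling over the sequence of utterance vectors."""
    def __init__(self, hidden_size, n_emotions=7):
        super().__init__()
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_emotions)

    def forward(self, utterance_vectors):          # (1, turns, hidden)
        contextual, _ = self.gru(utterance_vectors)
        return self.classifier(contextual)         # one prediction per turn

dialogue = ["Hi, I booked a table for two.",
            "Sorry, we have no record of that booking."]
vectors = encode_independently(dialogue).unsqueeze(0)
logits = ContextModel(vectors.size(-1))(vectors)
```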
no code implementations • 16 Nov 2022 • Patrícia Pereira, Helena Moniz, Joao Paulo Carvalho
This is followed by descriptions of the most prominent works in ERC with explanations of the Deep Learning architectures employed.
1 code implementation • 25 Feb 2021 • Rita Parada Ramos, Patrícia Pereira, Helena Moniz, Joao Paulo Carvalho, Bruno Martins
Despite the use of large training datasets, most models are trained by iterating over single input-output pairs, discarding the remaining examples for the current prediction.
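A schematic of the standard training regime the excerpt points to, with placeholder model and data: each update conditions on a single input-output pair, and the remaining training examples play no role in that prediction.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

inputs = torch.randn(1000, 16)           # placeholder features
targets = torch.randint(0, 4, (1000,))   # placeholder labels
loader = DataLoader(TensorDataset(inputs, targets), batch_size=1, shuffle=True)

model = nn.Linear(16, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for x, y in loader:
    # each step sees only this single input-output pair;
    # other training examples are not consulted for the prediction
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```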