1 code implementation • 30 Oct 2023 • Heather Lent, Kushal Tatariya, Raj Dabre, Yiyi Chen, Marcell Fekete, Esther Ploeger, Li Zhou, Hans Erik Heje, Diptesh Kanojia, Paul Belony, Marcel Bollmann, Loïc Grobol, Miryam de Lhoneux, Daniel Hershcovich, Michel DeGraff, Anders Søgaard, Johannes Bjerva
Creoles represent an under-explored and marginalized group of languages, with few available resources for NLP research.
no code implementations • 9 Apr 2019 • Marco Dinarelli, Loïc Grobol
Over the last few years, Recurrent Neural Networks (RNN) have reached state-of-the-art performance on most sequence modelling problems.
no code implementations • 16 Sep 2019 • Marco Dinarelli, Loïc Grobol
We propose a neural architecture combining the main characteristics of the most successful neural models of recent years: bidirectional RNNs, the encoder-decoder framework, and the Transformer model.
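The combination described above can be illustrated with a minimal toy sketch: a bidirectional RNN encoder whose states are read by a dot-product attention mechanism, as in Transformer-style decoders. This is not the authors' actual model; all dimensions, weight initialisations, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    # Simple tanh RNN cell: h' = tanh(Wx x + Wh h + b)
    return np.tanh(Wx @ x + Wh @ h + b)

def bidirectional_encode(xs, params_f, params_b):
    """Run a forward and a backward RNN over the sequence and
    concatenate their hidden states at each position."""
    d = params_f[1].shape[0]
    h_f, h_b = np.zeros(d), np.zeros(d)
    fwd, bwd = [], []
    for x in xs:                      # left-to-right pass
        h_f = rnn_step(x, h_f, *params_f)
        fwd.append(h_f)
    for x in reversed(xs):            # right-to-left pass
        h_b = rnn_step(x, h_b, *params_b)
        bwd.append(h_b)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

def attention(query, keys):
    # Dot-product attention over encoder states (Transformer-style)
    scores = np.array([query @ k for k in keys])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return sum(w * k for w, k in zip(weights, keys))

# Toy dimensions (hypothetical): input size 4, hidden size 3, length 5
d_in, d_h, T = 4, 3, 5
make = lambda: (rng.normal(size=(d_h, d_in)),
                rng.normal(size=(d_h, d_h)),
                np.zeros(d_h))
xs = [rng.normal(size=d_in) for _ in range(T)]
states = bidirectional_encode(xs, make(), make())
context = attention(rng.normal(size=2 * d_h), states)
print(len(states), states[0].shape, context.shape)  # 5 (6,) (6,)
```

Each encoder state concatenates a forward and a backward hidden vector (hence size 2 × 3 = 6), and the attention step pools them into a single context vector a decoder could consume.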
no code implementations • JEP/TALN/RECITAL 2021 • Loïc Grobol, Benoit Crabbé
The parser relies on rich lexical representations derived in particular from BERT and FASTTEXT.
no code implementations • HumEval (ACL) 2022 • Mariya Borovikova, Loïc Grobol, Anaïs Halftermeyer, Sylvie Billot
We propose a method for investigating the interpretability of metrics used for the coreference resolution task through comparisons with human judgments.
no code implementations • JEP/TALN/RECITAL 2022 • Maëlle Brassier, Théo Azzouza, Jean-Yves Antoine, Loïc Grobol, Anaïs Lefeuvre-Halftermeyer
We present OFCoRS, a coreference resolution system for spoken French based on an ensemble of Random Forest models.
no code implementations • LREC 2022 • Loïc Grobol, Mathilde Regnault, Pedro Ortiz Suarez, Benoît Sagot, Laurent Romary, Benoit Crabbé
The successes of contextual word embeddings learned by training large-scale language models, while remarkable, have mostly been limited to languages where large amounts of raw text are available and where the data annotated for downstream tasks has a relatively regular spelling.