1 code implementation • 22 Jul 2024 • Zihao Li, Shaoxiong Ji, Timothee Mickus, Vincent Segonne, Jörg Tiedemann
We ensure that training data and model architectures are comparable, and discuss the downstream performance across six languages that we observe in probing and fine-tuning scenarios.
no code implementations • 25 Mar 2024 • Shaoxiong Ji, Timothee Mickus, Vincent Segonne, Jörg Tiedemann
We furthermore provide evidence through similarity measures and investigation of parameters that this lack of positive influence is due to output separability -- which we argue is of use for machine translation but detrimental elsewhere.
no code implementations • 12 Mar 2024 • Timothee Mickus, Elaine Zosa, Raúl Vázquez, Teemu Vahtola, Jörg Tiedemann, Vincent Segonne, Alessandro Raganato, Marianna Apidianaki
This paper presents the results of the SHROOM, a shared task focused on detecting hallucinations: outputs from natural language generation (NLG) systems that are fluent, yet inaccurate.
no code implementations • 14 Jun 2023 • Vincent Segonne, Timothee Mickus
Definition Modeling, the task of generating definitions, was first proposed as a means to evaluate the semantic quality of word embeddings: a coherent lexical semantic representation of a word in context should contain all the information necessary to generate its definition.
1 code implementation • JEPTALNRECITAL 2020 • Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab
Pre-trained language models are now indispensable for obtaining state-of-the-art results in many NLP tasks.
no code implementations • LREC 2020 • Lucie Barque, Pauline Haas, Richard Huyghe, Delphine Tribout, Marie Candito, Benoît Crabbé, Vincent Segonne
French, like many languages, lacks semantically annotated corpus data.
7 code implementations • LREC 2020 • Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab
Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks.
Ranked #2 on Natural Language Inference on XNLI French
no code implementations • JEPTALNRECITAL 2019 • Olga Seminck, Vincent Segonne, Pascal Amsili
The performance we obtain, especially when compared with that of Amsili & Seminck (2017b), suggests that the language-model approach to Winograd schemas remains limited, likely in part because language models struggle to encode the kind of reasoning required to resolve Winograd schemas.
no code implementations • WS 2019 • Vincent Segonne, Marie Candito, Benoît Crabbé
In this paper, we investigate which strategy to adopt to achieve WSD for languages lacking data that was annotated specifically for the task, focusing on the particular case of verb disambiguation in French.