1 code implementation • 26 Feb 2024 • Silin Gao, Mete Ismayilzada, Mengjie Zhao, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut
Inferring contextually relevant and diverse commonsense to understand narratives remains challenging for knowledge models.
no code implementations • 6 Jan 2024 • Shaobo Cui, Lazar Milikic, Yiyang Feng, Mete Ismayilzada, Debjit Paul, Antoine Bosselut, Boi Faltings
CESAR achieves a significant 69.7% relative improvement over existing metrics, increasing from 47.2% to 80.1% in capturing the causal strength change brought by supporters and defeaters.
1 code implementation • 23 Oct 2023 • Mete Ismayilzada, Debjit Paul, Syrielle Montariol, Mor Geva, Antoine Bosselut
Recent efforts in natural language processing (NLP) commonsense reasoning research have yielded a considerable number of new datasets and benchmarks.
1 code implementation • 4 Apr 2023 • Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings
Language models (LMs) have recently shown remarkable performance on reasoning tasks by explicitly generating intermediate inferences, e.g., chain-of-thought prompting.
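The idea of eliciting intermediate inferences can be sketched as below; the exemplar, the "The answer is" convention, and the helper names are illustrative assumptions, not the paper's exact prompting format.

```python
# Minimal sketch of chain-of-thought (CoT) prompting: each few-shot exemplar
# shows its intermediate reasoning before the final answer, so the model is
# nudged to produce a reasoning chain of its own. The exemplar below and the
# answer-marker convention are hypothetical, for illustration only.

EXEMPLARS = [
    {
        "question": "Roger has 5 balls and buys 2 cans of 3 balls each. "
                    "How many balls does he have?",
        "reasoning": "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
                     "5 + 6 = 11.",
        "answer": "11",
    }
]

def build_cot_prompt(question: str) -> str:
    """Assemble a few-shot CoT prompt in which each exemplar spells out its chain."""
    parts = []
    for ex in EXEMPLARS:
        parts.append(
            f"Q: {ex['question']}\nA: {ex['reasoning']} "
            f"The answer is {ex['answer']}.\n"
        )
    parts.append(f"Q: {question}\nA:")  # the model continues with its own chain
    return "\n".join(parts)

def extract_answer(completion: str) -> str:
    """Parse the final answer from a completion, using the exemplar convention."""
    marker = "The answer is "
    return completion.rsplit(marker, 1)[-1].rstrip(". ")

prompt = build_cot_prompt("A farm has 3 pens with 4 sheep each. How many sheep?")
answer = extract_answer("3 pens of 4 sheep is 12 sheep. The answer is 12.")
```

The prompt string would then be sent to an LM; the model call itself is omitted here since it depends on the deployment.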
1 code implementation • 15 Nov 2022 • Mete Ismayilzada, Antoine Bosselut
We also include helper functions for converting natural language texts into a format ingestible by knowledge models: intermediate pipeline stages such as knowledge head extraction from text, heuristic and model-based knowledge head-relation matching, and the ability to define and use custom knowledge relations.
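A toy sketch of such a pipeline is shown below; the function names (`extract_heads`, `match_relations`, `to_model_queries`), the relation inventory, and the matching heuristic are all hypothetical, standing in for the toolkit's actual API.

```python
# Hypothetical sketch of the described pipeline: extract candidate knowledge
# heads from raw text, then heuristically match each head to plausible
# knowledge relations, yielding (head, relation) queries a knowledge model
# could ingest. All names and heuristics here are illustrative assumptions.

import re

# Illustrative relation inventory in the style of ATOMIC-like resources.
NOUN_RELATIONS = ["ObjectUse", "AtLocation"]   # for entity-like heads
VERB_RELATIONS = ["xNeed", "xEffect"]          # for event-like heads

def extract_heads(text: str) -> list:
    """Naive head extraction: split text into clauses as candidate heads."""
    return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

def match_relations(head: str) -> list:
    """Heuristic head-relation matching: short noun-phrase heads get entity
    relations; longer, clause-like heads get event relations."""
    if len(head.split()) <= 2:
        return NOUN_RELATIONS
    return VERB_RELATIONS

def to_model_queries(text: str) -> list:
    """Convert raw text into (head, relation) pairs for a knowledge model."""
    return [(h, r) for h in extract_heads(text) for r in match_relations(h)]

queries = to_model_queries("PersonX buys a coffee. Espresso machine.")
```

In a real toolkit the model-based matcher would replace the word-count heuristic, and custom relations would extend the inventory lists.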