no code implementations • CL (ACL) 2022 • Denis Paperno
Can recurrent neural nets, inspired by human sequential data processing, learn to understand language?
no code implementations • SemEval (NAACL) 2022 • Timothee Mickus, Kees Van Deemter, Mathieu Constant, Denis Paperno
Word embeddings have advanced the state of the art in NLP across numerous tasks.
no code implementations • CCL 2020 • Lin Li, Kees Van Deemter, Denis Paperno
This paper presents our work on the choice between long and short word forms, a significant question of lexical choice that plays an important role in many Natural Language Understanding tasks.
no code implementations • 18 Oct 2023 • Timothee Mickus, Elaine Zosa, Denis Paperno
Grounding has been argued to be a crucial component towards the development of more complete and truly semantically competent artificial intelligence systems.
no code implementations • 20 Sep 2023 • Claudia Tagliaferri, Sofia Axioti, Albert Gatt, Denis Paperno
Derivationally related words, such as "runner" and "running", exhibit semantic differences which also elicit different visual scenarios.
1 code implementation • 4 Jun 2023 • Aleksey Tikhonov, Lisa Bylinina, Denis Paperno
Multimodal embeddings aim to enrich the semantic information in neural representations of language compared to text-only models.
no code implementations • 17 Dec 2022 • Shaomu Tan, Denis Paperno
In many real-world scenarios, the absence of an external knowledge source such as Wikipedia forces question answering systems to rely on the latent internal knowledge of limited dialogue data.
no code implementations • 10 Oct 2022 • Sofia Nikiforova, Tejaswini Deoskar, Denis Paperno, Yoad Winter
Our approach includes a novel way of using image location to identify relevant open-domain facts in an external knowledge base, with their subsequent integration into the captioning pipeline at both the encoding and decoding stages.
no code implementations • 7 Jun 2022 • Timothee Mickus, Denis Paperno, Mathieu Constant
Pretrained embeddings based on the Transformer architecture have taken the NLP community by storm.
1 code implementation • 27 May 2022 • Timothee Mickus, Kees Van Deemter, Mathieu Constant, Denis Paperno
Word embeddings have advanced the state of the art in NLP across numerous tasks.
no code implementations • 17 Aug 2021 • Timothee Mickus, Mathieu Constant, Denis Paperno
Can language models learn grounded representations from text distribution alone?
1 code implementation • 7 Dec 2020 • Timothee Mickus, Timothée Bernard, Denis Paperno
Compositionality is a widely discussed property of natural languages, although its exact definition has been elusive.
no code implementations • COLING 2020 • Sofia Nikiforova, Tejaswini Deoskar, Denis Paperno, Yoad Winter
Standard image caption generation systems produce generic descriptions of images and do not utilize any contextual information or world knowledge.
no code implementations • COLING 2020 • Timothee Mickus, Timothée Bernard, Denis Paperno
Compositionality is a widely discussed property of natural languages, although its exact definition has been elusive.
no code implementations • JEPTALNRECITAL 2020 • Timothee Mickus, Mathieu Constant, Denis Paperno
Definition generation is a recent task that aims to produce lexicographic definitions from word embeddings.
no code implementations • 13 Nov 2019 • Timothee Mickus, Denis Paperno, Mathieu Constant, Kees Van Deemter
Contextualized word embeddings, i.e., vector representations for words in context, are naturally seen as an extension of previous noncontextual distributional semantic models.
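The distinction the abstract draws can be illustrated with a toy sketch (all vectors and the mixing rule below are made up for illustration; real contextual models use deep attention, not averaging): a static model returns the same vector for a word regardless of its sentence, while a contextualized model does not.

```python
# Toy contrast between static and contextualized embeddings.
# The vectors and the "contextualization" rule are illustrative only.
static = {
    "bank": (1.0, 0.0), "river": (0.0, 1.0), "money": (0.9, 0.4),
    "the": (0.1, 0.1),
}

def static_embed(word, context):
    # Noncontextual model: the context is ignored entirely.
    return static[word]

def contextual_embed(word, context):
    # Toy contextualization: mix the word vector with the mean of the
    # other context vectors, so the output depends on the sentence.
    cx = [static[w] for w in context if w != word]
    mean = tuple(sum(v[i] for v in cx) / len(cx) for i in range(2))
    return tuple(0.5 * static[word][i] + 0.5 * mean[i] for i in range(2))

s1 = ["the", "river", "bank"]
s2 = ["the", "money", "bank"]
print(static_embed("bank", s1) == static_embed("bank", s2))          # True
print(contextual_embed("bank", s1) == contextual_embed("bank", s2))  # False
```

The static lookup collapses both senses of "bank" into one point, whereas the context-mixing version separates them — the property that makes contextual embeddings a nontrivial extension of earlier distributional models.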
no code implementations • WS 2019 • Timothee Mickus, Denis Paperno, Mathieu Constant
Defining words in a textual context is a useful task both for practical purposes and for gaining insight into distributed word representations.
no code implementations • WS 2019 • Lin Li, Kees Van Deemter, Denis Paperno, Jingyu Fan
Between 80% and 90% of all Chinese words have a long and a short form, such as 老虎/虎 (laohu/hu, 'tiger') (Duanmu, 2013).
1 code implementation • WS 2018 • Denis Paperno
Can recurrent neural nets, inspired by human sequential data processing, learn to understand language?
no code implementations • SEMEVAL 2018 • Alicia Krebs, Alessandro Lenci, Denis Paperno
This paper describes the SemEval 2018 Task 10 on Capturing Discriminative Attributes.
no code implementations • 15 Mar 2018 • Alexander Panchenko, Natalia Loukachevitch, Dmitry Ustalov, Denis Paperno, Christian Meyer, Natalia Konstantinova
The paper gives an overview of the Russian Semantic Similarity Evaluation (RUSSE) shared task held in conjunction with the Dialogue 2015 conference.
no code implementations • 31 Aug 2017 • Alexander Panchenko, Dmitry Ustalov, Nikolay Arefyev, Denis Paperno, Natalia Konstantinova, Natalia Loukachevitch, Chris Biemann
On the one hand, humans easily make judgments about semantic relatedness.
2 code implementations • ACL 2016 • Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task.
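The evaluation protocol behind LAMBADA can be sketched in a few lines: the model sees a passage minus its final word and must predict that exact word. The passages and the unigram baseline below are toy stand-ins, not the actual dataset or any model from the paper.

```python
from collections import Counter

def last_word_accuracy(passages, predict):
    """LAMBADA-style scoring: exact-match accuracy on the final word,
    given the rest of the passage as context."""
    correct = 0
    for text in passages:
        words = text.split()
        context, target = words[:-1], words[-1]
        if predict(context) == target:
            correct += 1
    return correct / len(passages)

def unigram_baseline(context):
    # Trivial stand-in model: predict the most frequent context word.
    return Counter(context).most_common(1)[0][0]

# Made-up passages for illustration.
passages = [
    "the dog chased the cat up the tree the cat hissed at the dog",
    "she opened the letter read it twice and put down the letter",
]
print(last_word_accuracy(passages, unigram_baseline))  # 0.0
```

The baseline's failure is the point of the benchmark: LAMBADA targets words that are guessable from broad discourse context but not from shallow local statistics.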
no code implementations • LREC 2016 • Daria Ryzhova, Maria Kyuseva, Denis Paperno
In this paper we present a novel application of compositional distributional semantic models (CDSMs): prediction of lexical typology.
no code implementations • TACL 2015 • Germán Kruszewski, Denis Paperno, Marco Baroni
Corpus-based distributional semantic models capture degrees of semantic relatedness among the words of very large vocabularies, but they struggle with logical phenomena such as entailment, which are instead elegantly handled by model-theoretic approaches; the latter, in turn, do not scale up.