no code implementations • INLG (ACL) 2021 • Jani Järnfors, Guanyi Chen, Kees Van Deemter, Rint Sybesma
Choosing the most suitable classifier in a linguistic context is a well-known problem in the production of Mandarin and many other languages.
no code implementations • SemEval (NAACL) 2022 • Timothee Mickus, Kees Van Deemter, Mathieu Constant, Denis Paperno
Word embeddings have advanced the state of the art in NLP across numerous tasks.
no code implementations • INLG (ACL) 2020 • Emiel van Miltenburg, Wei-Ting Lu, Emiel Krahmer, Albert Gatt, Guanyi Chen, Lin Li, Kees Van Deemter
Because our manipulated descriptions form minimal pairs with the reference descriptions, we are able to assess the impact of different kinds of errors on the perceived quality of the descriptions.
no code implementations • EMNLP (CODI) 2020 • Fahime Same, Kees Van Deemter
First, we discuss the most common linguistic perspectives on the concept of recency and propose a taxonomy of recency metrics employed in Machine Learning studies for choosing the form of referring expressions in discourse context.
no code implementations • ACL (NL4XAI, INLG) 2020 • Alexandra Mayn, Kees Van Deemter
While the problem of natural language generation from logical formulas has a long tradition, thus far little attention has been paid to ensuring that the generated explanations are optimally effective for the user.
no code implementations • CCL 2020 • Lin Li, Kees Van Deemter, Denis Paperno
This paper presents our work on the choice between long and short forms, an important question in lexical choice that plays a role in many Natural Language Understanding tasks.
1 code implementation • 7 Mar 2024 • Yuqi Liu, Guanyi Chen, Kees Van Deemter
In this paper, we focus on the omission of the plurality and definiteness markers in Chinese noun phrases (NPs) to investigate the predictability of their intended meaning given the contexts.
no code implementations • 12 Feb 2024 • Guanyi Chen, Fahime Same, Kees Van Deemter
Recently, a human evaluation study of Referring Expression Generation (REG) models reached an unexpected conclusion: on WebNLG, Referring Expressions (REs) generated by state-of-the-art neural models were indistinguishable not only from the REs in WebNLG but also from the REs generated by a simple rule-based system.
no code implementations • 17 Jan 2024 • Kittipitch Kuptavanich, Ehud Reiter, Kees Van Deemter, Advaith Siddharthan
We are developing techniques to generate summary descriptions of sets of objects.
no code implementations • 15 Jan 2024 • Kees Van Deemter
To substantiate this claim, I examine current classifications of hallucination and omission in data-to-text NLG, and I propose a logic-based synthesis of these classifications.
1 code implementation • 27 Jul 2023 • Fahime Same, Guanyi Chen, Kees Van Deemter
We conclude that GREC can no longer be regarded as offering a reliable assessment of models' ability to mimic human reference production, because the results are highly impacted by the choice of corpus and evaluation metrics.
no code implementations • 23 May 2023 • Bart Holterman, Kees Van Deemter
Theory of Mind (ToM) is the ability to understand human thinking and decision-making, an ability that plays a crucial role in social interaction between people, including linguistic communication.
no code implementations • 2 May 2023 • Anya Belz, Craig Thomson, Ehud Reiter, Gavin Abercrombie, Jose M. Alonso-Moral, Mohammad Arvan, Anouck Braggaar, Mark Cieliebak, Elizabeth Clark, Kees Van Deemter, Tanvi Dinkar, Ondřej Dušek, Steffen Eger, Qixiang Fang, Mingqi Gao, Albert Gatt, Dimitra Gkatzia, Javier González-Corbelle, Dirk Hovy, Manuela Hürlimann, Takumi Ito, John D. Kelleher, Filip Klubicka, Emiel Krahmer, Huiyuan Lai, Chris van der Lee, Yiru Li, Saad Mahamood, Margot Mieskes, Emiel van Miltenburg, Pablo Mosteiro, Malvina Nissim, Natalie Parde, Ondřej Plátek, Verena Rieser, Jie Ruan, Joel Tetreault, Antonio Toral, Xiaojun Wan, Leo Wanner, Lewis Watson, Diyi Yang
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible.
no code implementations • 28 Apr 2023 • Michele Cafagna, Lina M. Rojas-Barahona, Kees Van Deemter, Albert Gatt
When applied to Image-to-text models, interpretability methods often provide token-by-token explanations, i.e., they compute a visual explanation for each token of the generated sequence.
1 code implementation • 23 Feb 2023 • Michele Cafagna, Kees Van Deemter, Albert Gatt
We present the High-Level Dataset, a dataset extending 14,997 images from the COCO dataset, aligned with a new set of 134,973 human-annotated (high-level) captions collected along three axes: scenes, actions, and rationales.
no code implementations • 9 Nov 2022 • Michele Cafagna, Kees Van Deemter, Albert Gatt
Image captioning models tend to describe images in an object-centric way, emphasising visible objects.
no code implementations • 10 Oct 2022 • Guanyi Chen, Fahime Same, Kees Van Deemter
Previous work on Neural Referring Expression Generation (REG) has relied solely on WebNLG, an English dataset that has been shown to reflect a very limited range of referring expression (RE) use.
no code implementations • 24 Sep 2022 • Guanyi Chen, Kees Van Deemter
We introduce a corpus of short texts in Mandarin, in which quantified expressions figure prominently.
no code implementations • 13 Sep 2022 • Kees Van Deemter
A key aim of science is explanation, yet the idea of explaining language phenomena has taken a backseat in mainstream Natural Language Processing (NLP) and many other areas of Artificial Intelligence.
1 code implementation • 27 May 2022 • Timothee Mickus, Kees Van Deemter, Mathieu Constant, Denis Paperno
Word embeddings have advanced the state of the art in NLP across numerous tasks.
no code implementations • ACL 2022 • Fahime Same, Guanyi Chen, Kees Van Deemter
In recent years, neural models have often outperformed rule-based and classic Machine Learning approaches in NLG.
no code implementations • 15 Sep 2021 • Michele Cafagna, Kees Van Deemter, Albert Gatt
Images can be described in terms of the objects they contain, or in terms of the types of scene or place that they instantiate.
no code implementations • INLG (ACL) 2021 • Guanyi Chen, Fahime Same, Kees Van Deemter
Despite achieving encouraging results, neural Referring Expression Generation models are often thought to lack transparency.
no code implementations • COLING 2020 • Fahime Same, Kees Van Deemter
This paper reports on a structured evaluation of feature-based Machine Learning algorithms for selecting the form of a referring expression in discourse context.
no code implementations • INLG (ACL) 2020 • Guanyi Chen, Kees Van Deemter
In the present paper, we annotate this corpus, evaluate classic REG algorithms on it, and compare the outcomes with earlier results on the evaluation of REG for English referring expressions.
no code implementations • 16 May 2020 • Xiao Li, Kees Van Deemter, Chenghua Lin
Recent years have seen a number of proposals for performing Natural Language Generation (NLG) based in large part on statistical techniques.
no code implementations • 13 Nov 2019 • Timothee Mickus, Denis Paperno, Mathieu Constant, Kees Van Deemter
Contextualized word embeddings, i.e. vector representations for words in context, are naturally seen as an extension of previous noncontextual distributional semantic models.
no code implementations • WS 2019 • Lin Li, Kees Van Deemter, Denis Paperno, Jingyu Fan
Between 80% and 90% of all Chinese words have a long and a short form, such as 老虎/虎 (lao-hu/hu, tiger) (Duanmu, 2013).
1 code implementation • WS 2019 • Guanyi Chen, Kees Van Deemter, Silvia Pagliaro, Louk Smalbil, Chenghua Lin
To inform these algorithms, we conducted a series of elicitation experiments in which human speakers were asked to perform a linguistic task that invites the use of quantified expressions.
no code implementations • WS 2019 • Guanyi Chen, Kees Van Deemter, Chenghua Lin
Quantified expressions have always taken up a central position in formal theories of meaning and language use.
no code implementations • WS 2018 • Kittipitch Kuptavanich, Ehud Reiter, Kees Van Deemter, Advaith Siddharthan
We explored the task of creating a textual summary describing a large set of objects characterised by a small number of features using an e-commerce dataset.
no code implementations • WS 2018 • Xiao Li, Kees Van Deemter, Chenghua Lin
This paper argues that a new generic approach to statistical NLG can be made to perform Referring Expression Generation (REG) successfully.
1 code implementation • WS 2018 • Guanyi Chen, Kees Van Deemter, Chenghua Lin
We introduce SimpleNLG-ZH, a realisation engine for Mandarin that follows the software design paradigm of SimpleNLG (Gatt and Reiter, 2009).
no code implementations • WS 2018 • Guanyi Chen, Kees Van Deemter, Chenghua Lin
We extend the classic Referring Expression Generation task by considering zero pronouns in "pro-drop" languages such as Chinese, modelling their use by means of the Bayesian Rational Speech Acts model (Frank and Goodman, 2012).
no code implementations • WS 2018 • Alejandro Ramos-Soto, Ehud Reiter, Kees Van Deemter, Jose M. Alonso, Albert Gatt
We present a data resource which can be useful for research purposes on language grounding tasks in the context of geographical referring expression generation.
no code implementations • WS 2017 • Kees van Deemter, Le Sun, Rint Sybesma, Xiao Li, Bo Chen, Muyun Yang
East Asian languages are thought to handle reference differently from languages such as English, particularly in terms of the marking of definiteness and number.
no code implementations • 30 Mar 2017 • Alejandro Ramos-Soto, Jose M. Alonso, Ehud Reiter, Kees Van Deemter, Albert Gatt
We present a novel heuristic approach that defines fuzzy geographical descriptors using data gathered from a survey with human subjects.