Search Results for author: Nikolai Ilinykh

Found 12 papers, 3 papers with code

When an Image Tells a Story: The Role of Visual and Semantic Information for Generating Paragraph Descriptions

no code implementations INLG (ACL) 2020 Nikolai Ilinykh, Simon Dobnik

Generating multi-sentence image descriptions is a challenging task, which requires a good model to produce coherent and accurate paragraphs describing the salient objects in the image.

Image Paragraph Captioning · Sentence

In Search of Meaning and Its Representations for Computational Linguistics

no code implementations CLASP 2022 Simon Dobnik, Robin Cooper, Adam Ek, Bill Noble, Staffan Larsson, Nikolai Ilinykh, Vladislav Maraev, Vidya Somashekarappa

In this paper we examine different meaning representations that are commonly used in natural language applications today and discuss their limits, both in terms of the aspects of natural language meaning they model and in terms of the aspects of the application for which they are used.

Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer

1 code implementation Findings (ACL) 2022 Nikolai Ilinykh, Simon Dobnik

We explore how a multi-modal transformer trained for generation of longer image descriptions learns syntactic and semantic representations about entities and relations grounded in objects at the level of masked self-attention (text generation) and cross-modal attention (information fusion).

Text Generation · Visual Grounding
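The abstract above contrasts masked self-attention (text generation) with cross-modal attention (information fusion). The following is only a minimal PyTorch sketch of a decoder block combining the two attention types; the class name, dimensions, and layout are illustrative assumptions, not the paper's architecture (see the linked implementation for that).

```python
import torch
import torch.nn as nn

class GroundedDecoderBlock(nn.Module):
    """Hypothetical decoder block: masked self-attention over text,
    then cross-modal attention from text tokens to image regions."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, text, image_regions):
        # Causal mask: each token attends only to earlier tokens (text generation).
        T = text.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=text.device), 1)
        x, self_weights = self.self_attn(text, text, text, attn_mask=causal)
        # Cross-modal attention: text queries attend over image-region features
        # (information fusion); the weights show which regions ground which words.
        x, cross_weights = self.cross_attn(x, image_regions, image_regions)
        return x, self_weights, cross_weights

# Example: a batch of 2 captions (10 tokens) over 36 detected image regions.
block = GroundedDecoderBlock()
out, self_w, cross_w = block(torch.randn(2, 10, 512), torch.randn(2, 36, 512))
```

Inspecting `self_w` and `cross_w` is one plausible way to probe what the two attention levels learn, which is the kind of analysis the abstract describes.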

Describe me an Aucklet: Generating Grounded Perceptual Category Descriptions

1 code implementation 7 Mar 2023 Bill Noble, Nikolai Ilinykh

Human speakers can generate descriptions of perceptual concepts, abstracted away from the instance level.

NLG Evaluation · Representation Learning +1

We went to look for meaning and all we got were these lousy representations: aspects of meaning representation for computational semantics

no code implementations 10 Sep 2021 Simon Dobnik, Robin Cooper, Adam Ek, Bill Noble, Staffan Larsson, Nikolai Ilinykh, Vladislav Maraev, Vidya Somashekarappa

In this paper we examine different meaning representations that are commonly used in natural language applications today and discuss their limits, both in terms of the aspects of natural language meaning they model and in terms of the aspects of the application for which they are used.

Tell Me More: A Dataset of Visual Scene Description Sequences

no code implementations WS 2019 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

We present a dataset consisting of what we call image description sequences, which are multi-sentence descriptions of the contents of an image.

Sentence

MeetUp! A Corpus of Joint Activity Dialogues in a Visual Environment

no code implementations 11 Jul 2019 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

Building computer systems that can converse about their visual environment is one of the oldest concerns of research in Artificial Intelligence and Computational Linguistics (see, for example, Winograd's 1972 SHRDLU system).

The Task Matters: Comparing Image Captioning and Task-Based Dialogical Image Description

no code implementations WS 2018 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

Image captioning models are typically trained on data that is collected from people who are asked to describe an image, without being given any further task context.

Image Captioning · Text Generation
