Search Results for author: Sina Zarrieß

Found 22 papers, 1 paper with code

Humans Meet Models on Object Naming: A New Dataset and Analysis

1 code implementation COLING 2020 Carina Silberer, Sina Zarrieß, Matthijs Westera, Gemma Boleda

We also find that standard evaluations underestimate the actual effectiveness of the naming model: on the single-label names of the original dataset (Visual Genome), it obtains 27 accuracy points less than on MN v2, which includes all valid object names.

Object
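The gap above comes down to the evaluation protocol: scoring each predicted name against a single gold label versus against the full set of names annotators consider valid. A minimal illustrative sketch of the two protocols (all function names and data below are hypothetical, not taken from the ManyNames release):

```python
# Illustrative sketch: the same predictions scored against one gold label
# (single-label, Visual Genome style) vs. a set of valid names (MN v2 style).
# The example data are hypothetical, not drawn from the dataset.

def single_label_accuracy(predictions, gold_labels):
    """Correct only if the prediction matches the single gold name."""
    hits = sum(p == g for p, g in zip(predictions, gold_labels))
    return hits / len(predictions)

def any_valid_accuracy(predictions, valid_name_sets):
    """Correct if the prediction matches any annotator-valid name."""
    hits = sum(p in valid for p, valid in zip(predictions, valid_name_sets))
    return hits / len(predictions)

predictions = ["dog", "puppy", "cat"]
gold_labels = ["dog", "dog", "cat"]                          # one name per object
valid_sets = [{"dog"}, {"dog", "puppy"}, {"cat", "kitten"}]  # all valid names

print(single_label_accuracy(predictions, gold_labels))  # 0.666...
print(any_valid_accuracy(predictions, valid_sets))       # 1.0
```

The same predictions can look substantially worse under the single-label protocol, which is the sense in which standard evaluations underestimate the model.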

Knowledge Supports Visual Language Grounding: A Case Study on Colour Terms

no code implementations ACL 2020 Simeon Schüz, Sina Zarrieß

In human cognition, world knowledge supports the perception of object colours: knowing that trees are typically green helps to perceive their colour in certain contexts.

Object Visual Grounding +1

Object Naming in Language and Vision: A Survey and a New Dataset

no code implementations LREC 2020 Carina Silberer, Sina Zarrieß, Gemma Boleda

We highlight the challenges involved and provide a preliminary analysis of the ManyNames data, showing that there is a high level of agreement in naming, on average.

Object

Sketch Me if You Can: Towards Generating Detailed Descriptions of Object Shape by Grounding in Images and Drawings

no code implementations WS 2019 Ting Han, Sina Zarrieß

A lot of recent work in Language & Vision has looked at generating descriptions or referring expressions for objects in scenes of real-world images, though focusing mostly on relatively simple language like object names, color and location attributes (e.g., brown chair on the left).

Attribute Image Captioning +1

Tell Me More: A Dataset of Visual Scene Description Sequences

no code implementations WS 2019 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

We present a dataset consisting of what we call image description sequences, which are multi-sentence descriptions of the contents of an image.

Sentence

The Task Matters: Comparing Image Captioning and Task-Based Dialogical Image Description

no code implementations WS 2018 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

Image captioning models are typically trained on data that is collected from people who are asked to describe an image, without being given any further task context.

Image Captioning Text Generation

Deriving continuous grounded meaning representations from referentially structured multimodal contexts

no code implementations EMNLP 2017 Sina Zarrieß, David Schlangen

Corpora of referring expressions paired with their visual referents are a good source for learning word meanings directly grounded in visual representations.

Attribute Word Embeddings

The Code2Text Challenge: Text Generation in Source Libraries

no code implementations WS 2017 Kyle Richardson, Sina Zarrieß, Jonas Kuhn

We propose a new shared task for tactical data-to-text generation in the domain of source code libraries.

Data-to-Text Generation

Beyond On-hold Messages: Conversational Time-buying in Task-oriented Dialogue

no code implementations WS 2017 Soledad López Gambino, Sina Zarrieß, David Schlangen

A common convention in graphical user interfaces is to indicate a "wait state", for example while a program is preparing a response, through a changed cursor state or a progress bar.

Obtaining referential word meanings from visual and distributional information: Experiments on object naming

no code implementations ACL 2017 Sina Zarrieß, David Schlangen

We present a model that learns individual predictors for object names that link visual and distributional aspects of word meaning during training.

Object Object Recognition +4
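The per-name predictors described above can be pictured as one binary classifier per object name over visual features, in the spirit of words-as-classifiers models. The sketch below is a hedged illustration of that idea, not the paper's implementation: it uses scikit-learn and random stand-in features, whereas the paper additionally links in distributional word information.

```python
# Hedged sketch: one binary predictor per object name over visual features.
# Random features stand in for real image-region representations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
names = ["chair", "table"]

# Fake training data: 100 image regions with 512-dim visual features,
# each labeled with one of the two names.
X = rng.normal(size=(100, 512))
y = rng.choice(names, size=100)

# Train an independent predictor per name: "is this region a <name>?"
predictors = {
    name: LogisticRegression(max_iter=1000).fit(X, (y == name).astype(int))
    for name in names
}

# Name a new region with the predictor that assigns the highest probability.
region = rng.normal(size=(1, 512))
scores = {name: clf.predict_proba(region)[0, 1] for name, clf in predictors.items()}
print(max(scores, key=scores.get))
```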

A Corpus-based Study of the German Recipient Passive

no code implementations LREC 2012 Patrick Ziering, Sina Zarrieß, Jonas Kuhn

In this paper, we investigate the usage of a non-canonical German passive alternation for ditransitive verbs, the recipient passive, in naturally occurring corpus data.
