1 code implementation • COLING 2020 • Carina Silberer, Sina Zarrieß, Matthijs Westera, Gemma Boleda
We also find that standard evaluations underestimate the actual effectiveness of the naming model: on the single-label names of the original dataset (Visual Genome), it scores 27 accuracy points lower than on MN v2, which includes all valid object names.
no code implementations • ACL 2020 • Simeon Schüz, Sina Zarrieß
In human cognition, world knowledge supports the perception of object colours: knowing that trees are typically green helps to perceive their colour in certain contexts.
no code implementations • LREC 2020 • Carina Silberer, Sina Zarrieß, Gemma Boleda
We highlight the challenges involved and provide a preliminary analysis of the ManyNames data, showing that there is a high level of agreement in naming, on average.
no code implementations • WS 2019 • Ting Han, Sina Zarrieß
A lot of recent work in Language & Vision has looked at generating descriptions or referring expressions for objects in scenes of real-world images, though focusing mostly on relatively simple language like object names, color and location attributes (e.g., brown chair on the left).
no code implementations • WS 2019 • Nikolai Ilinykh, Sina Zarrieß, David Schlangen
We present a dataset consisting of what we call image description sequences, which are multi-sentence descriptions of the contents of an image.
no code implementations • WS 2018 • Sina Zarrieß, David Schlangen
Modeling traditional NLG tasks with data-driven techniques has been a major focus of research in NLG in the past decade.
no code implementations • WS 2018 • Nikolai Ilinykh, Sina Zarrieß, David Schlangen
Image captioning models are typically trained on data that is collected from people who are asked to describe an image, without being given any further task context.
no code implementations • WS 2018 • Sina Zarrieß, David Schlangen
In this work, we assess decoding strategies for referring expression generation with neural models.
no code implementations • WS 2017 • Sina Zarrieß, M. Soledad López Gambino, David Schlangen
Current referring expression generation systems mostly deliver their output as one-shot, written expressions.
no code implementations • EMNLP 2017 • Sina Zarrieß, David Schlangen
Corpora of referring expressions paired with their visual referents are a good source for learning word meanings directly grounded in visual representations.
no code implementations • WS 2017 • Kyle Richardson, Sina Zarrieß, Jonas Kuhn
We propose a new shared task for tactical data-to-text generation in the domain of source code libraries.
no code implementations • WS 2017 • Soledad López Gambino, Sina Zarrieß, David Schlangen
A common convention in graphical user interfaces is to indicate a "wait state", for example while a program is preparing a response, through a changed cursor state or a progress bar.
no code implementations • ACL 2017 • Sina Zarrieß, David Schlangen
We present a model that learns individual predictors for object names that link visual and distributional aspects of word meaning during training.
no code implementations • EACL 2017 • Sina Zarrieß, David Schlangen
There has recently been a lot of work trying to use images of referents of words for improving vector space meaning representations derived from text.
no code implementations • LREC 2016 • Sina Zarrieß, Julian Hough, Casey Kennington, Ramesh Manuvinakurike, David DeVault, Raquel Fernández, David Schlangen
PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings.
no code implementations • LREC 2012 • Patrick Ziering, Sina Zarrieß, Jonas Kuhn
In this paper, we investigate the usage of a non-canonical German passive alternation for ditransitive verbs, the recipient passive, in naturally occurring corpus data.