Search Results for author: David Schlangen

Found 64 papers, 6 papers with code

Annotating anaphoric phenomena in situated dialogue

no code implementations ACL (mmsr, IWCS) 2021 Sharid Loáiciga, Simon Dobnik, David Schlangen

With this paper, we intend to start a discussion on the annotation of referential phenomena in situated dialogue.

Coreference Resolution

Reference and coreference in situated dialogue

no code implementations NAACL (ALVR) 2021 Sharid Loáiciga, Simon Dobnik, David Schlangen

We argue that there is still significant room for corpora that increase the complexity of both visual and linguistic domains and which capture different varieties of perceptual and conversational contexts.

Incremental Unit Networks for Multimodal, Fine-grained Information State Representation

no code implementations ACL (mmsr, IWCS) 2021 Casey Kennington, David Schlangen

We offer a fine-grained information state annotation scheme that follows directly from the Incremental Unit abstract model of dialogue processing when used within a multimodal, co-located, interactive setting.

Norm Participation Grounds Language

no code implementations 6 Jun 2022 David Schlangen

The striking recent advances in eliciting seemingly meaningful language behaviour from language-only machine learning models have only made more apparent, through the surfacing of clear limitations, the need to go beyond the language-only mode and to ground these models "in the world".

Can Visual Dialogue Models Do Scorekeeping? Exploring How Dialogue Representations Incrementally Encode Shared Knowledge

no code implementations ACL 2022 Brielen Madureira, David Schlangen

Our conclusion is that the ability to make the distinction between shared and privately known statements along the dialogue is moderately present in the analysed models, but not always incrementally consistent, which may partially be due to the limited need for grounding interactions in the original task.

The slurk Interaction Server Framework: Better Data for Better Dialog Models

no code implementations 2 Feb 2022 Jana Götze, Maike Paetzel-Prüsmann, Wencke Liermann, Tim Diekmann, David Schlangen

This paper presents the slurk software, a lightweight interaction server for setting up dialog data collections and running experiments.

Space Efficient Context Encoding for Non-Task-Oriented Dialogue Generation with Graph Attention Transformer

1 code implementation ACL 2021 Fabian Galetzka, Jewgeni Rose, David Schlangen, Jens Lehmann

To improve the coherence and knowledge retrieval capabilities of non-task-oriented dialogue systems, recent Transformer-based models aim to integrate fixed background context.

Dialogue Generation · Graph Attention +2

Is Incoherence Surprising? Targeted Evaluation of Coherence Prediction from Language Models

1 code implementation NAACL 2021 Anne Beyer, Sharid Loáiciga, David Schlangen

Coherent discourse is distinguished from a mere collection of utterances by the satisfaction of a diverse set of constraints, for example choice of expression, logical relation between denoted events, and implicit compatibility with world-knowledge.

Coherence Evaluation · Language Modelling

Targeting the Benchmark: On Methodology in Current Natural Language Processing Research

no code implementations ACL 2021 David Schlangen

It has become a common pattern in our field: One group introduces a language task, exemplified by a dataset, which they argue is challenging enough to serve as a benchmark.

Natural Language Processing

A Corpus of Controlled Opinionated and Knowledgeable Movie Discussions for Training Neural Conversation Models

1 code implementation LREC 2020 Fabian Galetzka, Chukwuemeka U. Eneh, David Schlangen

Fully data driven Chatbots for non-goal oriented dialogues are known to suffer from inconsistent behaviour across their turns, stemming from a general difficulty in controlling parameters like their assumed background personality and knowledge of facts.

Can Neural Image Captioning be Controlled via Forced Attention?

no code implementations WS 2019 Philipp Sadler, Tatjana Scheffler, David Schlangen

Learned dynamic weighting of the conditioning signal (attention) has been shown to improve neural language generation in a variety of settings.

Image Captioning · Text Generation

Tell Me More: A Dataset of Visual Scene Description Sequences

no code implementations WS 2019 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

We present a dataset consisting of what we call image description sequences, which are multi-sentence descriptions of the contents of an image.

From Explainability to Explanation: Using a Dialogue Setting to Elicit Annotations with Justifications

no code implementations WS 2019 Nazia Attari, Martin Heckmann, David Schlangen

Despite recent attempts in the field of explainable AI to go beyond black-box prediction models, typically already the training data for supervised machine learning is collected in a manner that treats the annotator as a "black box", the internal workings of which remain unobserved.

Grounded Agreement Games: Emphasizing Conversational Grounding in Visual Dialogue Settings

no code implementations 29 Aug 2019 David Schlangen

Where early work on dialogue in Computational Linguistics put much emphasis on dialogue structure and its relation to the mental states of the dialogue participants (e.g., Allen 1979, Grosz & Sidner 1986), current work mostly reduces dialogue to the task of producing at any one time a next utterance; e.g., in neural chatbot or Visual Dialogue settings.

Chatbot · Visual Dialog

Language Tasks and Language Games: On Methodology in Current Natural Language Processing Research

no code implementations 28 Aug 2019 David Schlangen

"This paper introduces a new task and a new dataset", "we improve the state of the art in X by Y" -- it is rare to find a current natural language processing paper (or AI paper more generally) that does not contain such statements.

Natural Language Processing

MeetUp! A Corpus of Joint Activity Dialogues in a Visual Environment

no code implementations 11 Jul 2019 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

Building computer systems that can converse about their visual environment is one of the oldest concerns of research in Artificial Intelligence and Computational Linguistics (see, for example, Winograd's 1972 SHRDLU system).

Natural Language Processing

Natural Language Semantics With Pictures: Some Language & Vision Datasets and Potential Uses for Computational Semantics

no code implementations WS 2019 David Schlangen

Propelling, and propelled by, the "deep learning revolution", recent years have seen the introduction of ever larger corpora of images annotated with natural language expressions.

Natural Language Semantics With Pictures: Some Language & Vision Datasets and Potential Uses for Computational Semantics

no code implementations 15 Apr 2019 David Schlangen

Propelling, and propelled by, the "deep learning revolution", recent years have seen the introduction of ever larger corpora of images annotated with natural language expressions.

The Task Matters: Comparing Image Captioning and Task-Based Dialogical Image Description

no code implementations WS 2018 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

Image captioning models are typically trained on data that is collected from people who are asked to describe an image, without being given any further task context.

Image Captioning · Text Generation

Draw and Tell: Multimodal Descriptions Outperform Verbal- or Sketch-Only Descriptions in an Image Retrieval Task

no code implementations IJCNLP 2017 Ting Han, David Schlangen

While language conveys meaning largely symbolically, actual communication acts typically contain iconic elements as well: People gesture while they speak, or may even draw sketches while explaining something.

Image Retrieval

Deriving continous grounded meaning representations from referentially structured multimodal contexts

no code implementations EMNLP 2017 Sina Zarrieß, David Schlangen

Corpora of referring expressions paired with their visual referents are a good source for learning word meanings directly grounded in visual representations.

Word Embeddings

Beyond On-hold Messages: Conversational Time-buying in Task-oriented Dialogue

no code implementations WS 2017 Soledad López Gambino, Sina Zarrieß, David Schlangen

A common convention in graphical user interfaces is to indicate a "wait state", for example while a program is preparing a response, through a changed cursor state or a progress bar.

Joint, Incremental Disfluency Detection and Utterance Segmentation from Speech

no code implementations EACL 2017 Julian Hough, David Schlangen

We present the joint task of incremental disfluency detection and utterance segmentation and a simple deep learning system which performs it on transcripts and ASR results.

Speech Recognition

Grounding Language by Continuous Observation of Instruction Following

no code implementations EACL 2017 Ting Han, David Schlangen

Grounded semantics is typically learnt from utterance-level meaning representations (e.g., successful database retrievals, denoted objects in images, moves in a game).

DUEL: A Multi-lingual Multimodal Dialogue Corpus for Disfluency, Exclamations and Laughter

no code implementations LREC 2016 Julian Hough, Ye Tian, Laura de Ruiter, Simon Betz, Spyros Kousidis, David Schlangen, Jonathan Ginzburg

We present the DUEL corpus, consisting of 24 hours of natural, face-to-face, loosely task-directed dialogue in German, French and Mandarin Chinese.
