Search Results for author: David Schlangen

Found 83 papers, 19 papers with code

New or Old? Exploring How Pre-Trained Language Models Represent Discourse Entities

1 code implementation COLING 2022 Sharid Loáiciga, Anne Beyer, David Schlangen

Recent research shows that pre-trained language models, built to generate text conditioned on some context, learn to encode syntactic knowledge to a certain degree.

Binary Classification, Sentence

Reference and coreference in situated dialogue

no code implementations NAACL (ALVR) 2021 Sharid Loáiciga, Simon Dobnik, David Schlangen

We argue that there is still significant room for corpora that increase the complexity of both visual and linguistic domains and which capture different varieties of perceptual and conversational contexts.

Incremental Unit Networks for Multimodal, Fine-grained Information State Representation

no code implementations ACL (mmsr, IWCS) 2021 Casey Kennington, David Schlangen

We offer a fine-grained information state annotation scheme that follows directly from the Incremental Unit abstract model of dialogue processing when used within a multimodal, co-located, interactive setting.

Annotating anaphoric phenomena in situated dialogue

no code implementations ACL (mmsr, IWCS) 2021 Sharid Loáiciga, Simon Dobnik, David Schlangen

With this paper, we intend to start a discussion on the annotation of referential phenomena in situated dialogue.


Anaphoric Phenomena in Situated dialog: A First Round of Annotations

no code implementations COLING (CRAC) 2022 Sharid Loáiciga, Simon Dobnik, David Schlangen

We present a first release of 500 documents from the multimodal corpus Tell-me-more (Ilinykh et al., 2019) annotated with coreference information according to the ARRAU guidelines (Poesio et al., 2021).

Sharing the Cost of Success: A Game for Evaluating and Learning Collaborative Multi-Agent Instruction Giving and Following Policies

1 code implementation, 26 Mar 2024 Philipp Sadler, Sherzod Hakimov, David Schlangen

In collaborative goal-oriented settings, the participants are not only interested in achieving a successful outcome, but also implicitly negotiate the effort they put into the interaction (by adapting to each other).

When Only Time Will Tell: Interpreting How Transformers Process Local Ambiguities Through the Lens of Restart-Incrementality

1 code implementation, 20 Feb 2024 Brielen Madureira, Patrick Kahardipraja, David Schlangen

Incremental models that process sentences one token at a time will sometimes encounter points where more than one interpretation is possible.

Dependency Parsing

Learning Communication Policies for Different Follower Behaviors in a Collaborative Reference Game

no code implementations, 7 Feb 2024 Philipp Sadler, Sherzod Hakimov, David Schlangen

Albrecht and Stone (2018) state that modeling of changing behaviors remains an open problem "due to the essentially unconstrained nature of what other agents may do".

Taking Action Towards Graceful Interaction: The Effects of Performing Actions on Modelling Policies for Instruction Clarification Requests

1 code implementation, 30 Jan 2024 Brielen Madureira, David Schlangen

Clarification requests are a mechanism to help solve communication problems, e.g., due to ambiguity or underspecification, in instruction-following interactions.

Instruction Following

On General Language Understanding

no code implementations, 27 Oct 2023 David Schlangen

Natural Language Processing prides itself on being an empirically minded, if not outright empiricist, field, and yet lately it seems to get drawn into essentialist debates on issues of meaning and measurement ("Do Large Language Models Understand Language, And If So, How Much?").

Benchmarking, Ethics

Neural Conversation Models and How to Rein Them in: A Survey of Failures and Fixes

no code implementations, 11 Aug 2023 Fabian Galetzka, Anne Beyer, David Schlangen

In this survey, we interpret Grice's maxims of cooperative conversation from the perspective of this specific research area and systematize the literature under the aspect of what makes a contribution appropriate: A neural conversation model has to be fluent, informative, consistent, coherent, and follow social norms.

"Are you telling me to put glasses on the dog?" Content-Grounded Annotation of Instruction Clarification Requests in the CoDraw Dataset

no code implementations, 4 Jun 2023 Brielen Madureira, David Schlangen

Instruction Clarification Requests are a mechanism for solving communication problems, one that is especially useful in instruction-following interactions.

Instruction Following

Pento-DIARef: A Diagnostic Dataset for Learning the Incremental Algorithm for Referring Expression Generation from Examples

1 code implementation, 24 May 2023 Philipp Sadler, David Schlangen

NLP tasks are typically defined extensionally through datasets containing example instantiations (e.g., pairs of image i and text t), but motivated intensionally through capabilities invoked in verbal descriptions of the task (e.g., "t is a description of i, for which the content of i needs to be recognised and understood").

Referring Expression, Referring Expression Generation +1
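The "Incremental Algorithm" named in this title is Dale and Reiter's (1995) classic procedure for referring expression generation: attributes are considered in a fixed preference order, and each attribute that rules out remaining distractors is kept until the target is uniquely identified. A minimal sketch follows; the attribute names, dictionary representation, and toy scene are illustrative, not taken from the Pento-DIARef dataset itself.

```python
def incremental_algorithm(target, distractors, preference_order):
    """Dale & Reiter's Incremental Algorithm: select attributes in
    preference order, keeping each one that rules out at least one
    of the remaining distractors, until none are left."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:
            # This attribute is discriminating: include it and
            # drop the distractors it excludes.
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    # An empty `remaining` means the description is distinguishing.
    return description, remaining

# Toy scene: refer to a small red square among two other pieces.
target = {"colour": "red", "shape": "square", "size": "small"}
distractors = [
    {"colour": "blue", "shape": "square", "size": "small"},
    {"colour": "red", "shape": "circle", "size": "small"},
]
desc, left = incremental_algorithm(
    target, distractors, ["colour", "shape", "size"]
)
# desc → {"colour": "red", "shape": "square"}; "size" is never needed.
```

Note the algorithm's characteristic behaviour: it never backtracks to remove an attribute once added, which can yield slightly redundant but human-like descriptions.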

Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks

1 code implementation, 23 May 2023 Sherzod Hakimov, David Schlangen

Specifically, we investigate the performance of open-source, open-access language models against GPT-3 on five vision-language tasks when given textually-encoded visual information.

Few-Shot Learning, Language Modelling

Clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents

1 code implementation, 22 May 2023 Kranti Chalamalasetti, Jana Götze, Sherzod Hakimov, Brielen Madureira, Philipp Sadler, David Schlangen

Recent work has proposed a methodology for the systematic evaluation of "Situated Language Understanding Agents", agents that operate in rich linguistic and non-linguistic contexts, through testing them in carefully constructed interactive settings.

Yes, this Way! Learning to Ground Referring Expressions into Actions with Intra-episodic Feedback from Supportive Teachers

1 code implementation, 22 May 2023 Philipp Sadler, Sherzod Hakimov, David Schlangen

The ability to pick up on language signals in an ongoing interaction is crucial for future machine learning models to collaborate and interact with humans naturally.

Referring Expression

Dialogue Games for Benchmarking Language Understanding: Motivation, Taxonomy, Strategy

no code implementations, 14 Apr 2023 David Schlangen

I argue that such tests need to be complemented with tests of language use embedded in a practice, to arrive at a more comprehensive evaluation of "artificial language understanding".


Instruction Clarification Requests in Multimodal Collaborative Dialogue Games: Tasks, and an Analysis of the CoDraw Dataset

1 code implementation, 28 Feb 2023 Brielen Madureira, David Schlangen

In visual instruction-following dialogue games, players can engage in repair mechanisms in the face of an ambiguous or underspecified instruction that cannot be fully mapped to actions in the world.

Visual Instruction Following

What A Situated Language-Using Agent Must be Able to Do: A Top-Down Analysis

no code implementations, 16 Feb 2023 David Schlangen

Even in our increasingly text-intensive times, the primary site of language use is situated, co-present interaction.

Incremental Learning, Language Modelling

Norm Participation Grounds Language

no code implementations CLASP 2022 David Schlangen

The striking recent advances in eliciting seemingly meaningful language behaviour from language-only machine learning models have only made more apparent, through the surfacing of clear limitations, the need to go beyond the language-only mode and to ground these models "in the world".

Can Visual Dialogue Models Do Scorekeeping? Exploring How Dialogue Representations Incrementally Encode Shared Knowledge

1 code implementation ACL 2022 Brielen Madureira, David Schlangen

Our conclusion is that the ability to distinguish shared from privately known statements over the course of the dialogue is moderately present in the analysed models, but not always incrementally consistent, which may partially be due to the limited need for grounding interactions in the original task.

The slurk Interaction Server Framework: Better Data for Better Dialog Models

no code implementations LREC 2022 Jana Götze, Maike Paetzel-Prüsmann, Wencke Liermann, Tim Diekmann, David Schlangen

This paper presents the slurk software, a lightweight interaction server for setting up dialog data collections and running experiments.

Space Efficient Context Encoding for Non-Task-Oriented Dialogue Generation with Graph Attention Transformer

1 code implementation ACL 2021 Fabian Galetzka, Jewgeni Rose, David Schlangen, Jens Lehmann

To improve the coherence and knowledge retrieval capabilities of non-task-oriented dialogue systems, recent Transformer-based models aim to integrate fixed background context.

Dialogue Generation, Graph Attention +3

Is Incoherence Surprising? Targeted Evaluation of Coherence Prediction from Language Models

1 code implementation NAACL 2021 Anne Beyer, Sharid Loáiciga, David Schlangen

Coherent discourse is distinguished from a mere collection of utterances by the satisfaction of a diverse set of constraints, for example choice of expression, logical relation between denoted events, and implicit compatibility with world-knowledge.

Coherence Evaluation, Language Modelling +2

Targeting the Benchmark: On Methodology in Current Natural Language Processing Research

no code implementations ACL 2021 David Schlangen

It has become a common pattern in our field: One group introduces a language task, exemplified by a dataset, which they argue is challenging enough to serve as a benchmark.

A Corpus of Controlled Opinionated and Knowledgeable Movie Discussions for Training Neural Conversation Models

1 code implementation LREC 2020 Fabian Galetzka, Chukwuemeka U. Eneh, David Schlangen

Fully data-driven chatbots for non-goal-oriented dialogues are known to suffer from inconsistent behaviour across their turns, stemming from a general difficulty in controlling parameters like their assumed background personality and knowledge of facts.

Can Neural Image Captioning be Controlled via Forced Attention?

no code implementations WS 2019 Philipp Sadler, Tatjana Scheffler, David Schlangen

Learned dynamic weighting of the conditioning signal (attention) has been shown to improve neural language generation in a variety of settings.

Image Captioning, Text Generation

Tell Me More: A Dataset of Visual Scene Description Sequences

no code implementations WS 2019 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

We present a dataset consisting of what we call image description sequences, which are multi-sentence descriptions of the contents of an image.


From Explainability to Explanation: Using a Dialogue Setting to Elicit Annotations with Justifications

no code implementations WS 2019 Nazia Attari, Martin Heckmann, David Schlangen

Despite recent attempts in the field of explainable AI to go beyond black-box prediction models, the training data for supervised machine learning is typically already collected in a manner that treats the annotator as a "black box", the internal workings of which remain unobserved.

Grounded Agreement Games: Emphasizing Conversational Grounding in Visual Dialogue Settings

no code implementations, 29 Aug 2019 David Schlangen

Where early work on dialogue in Computational Linguistics put much emphasis on dialogue structure and its relation to the mental states of the dialogue participants (e.g., Allen 1979, Grosz & Sidner 1986), current work mostly reduces dialogue to the task of producing at any one time a next utterance, e.g., in neural chatbot or Visual Dialogue settings.

Chatbot, Visual Dialog

Language Tasks and Language Games: On Methodology in Current Natural Language Processing Research

no code implementations, 28 Aug 2019 David Schlangen

"This paper introduces a new task and a new dataset", "we improve the state of the art in X by Y" -- it is rare to find a current natural language processing paper (or AI paper more generally) that does not contain such statements.

MeetUp! A Corpus of Joint Activity Dialogues in a Visual Environment

no code implementations, 11 Jul 2019 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

Building computer systems that can converse about their visual environment is one of the oldest concerns of research in Artificial Intelligence and Computational Linguistics (see, for example, Winograd's 1972 SHRDLU system).

Natural Language Semantics With Pictures: Some Language & Vision Datasets and Potential Uses for Computational Semantics

no code implementations WS 2019 David Schlangen

Propelling, and propelled by, the "deep learning revolution", recent years have seen the introduction of ever larger corpora of images annotated with natural language expressions.



The Task Matters: Comparing Image Captioning and Task-Based Dialogical Image Description

no code implementations WS 2018 Nikolai Ilinykh, Sina Zarrieß, David Schlangen

Image captioning models are typically trained on data that is collected from people who are asked to describe an image, without being given any further task context.

Image Captioning, Text Generation

Draw and Tell: Multimodal Descriptions Outperform Verbal- or Sketch-Only Descriptions in an Image Retrieval Task

no code implementations IJCNLP 2017 Ting Han, David Schlangen

While language conveys meaning largely symbolically, actual communication acts typically contain iconic elements as well: People gesture while they speak, or may even draw sketches while explaining something.

Image Retrieval, Retrieval

Deriving continuous grounded meaning representations from referentially structured multimodal contexts

no code implementations EMNLP 2017 Sina Zarrieß, David Schlangen

Corpora of referring expressions paired with their visual referents are a good source for learning word meanings directly grounded in visual representations.

Attribute, Word Embeddings

Beyond On-hold Messages: Conversational Time-buying in Task-oriented Dialogue

no code implementations WS 2017 Soledad López Gambino, Sina Zarrieß, David Schlangen

A common convention in graphical user interfaces is to indicate a "wait state", for example while a program is preparing a response, through a changed cursor state or a progress bar.

Obtaining referential word meanings from visual and distributional information: Experiments on object naming

no code implementations ACL 2017 Sina Zarrieß, David Schlangen

We present a model that learns individual predictors for object names that link visual and distributional aspects of word meaning during training.

Object, Object Recognition +4

Joint, Incremental Disfluency Detection and Utterance Segmentation from Speech

no code implementations EACL 2017 Julian Hough, David Schlangen

We present the joint task of incremental disfluency detection and utterance segmentation and a simple deep learning system which performs it on transcripts and ASR results.

Speech Recognition

Grounding Language by Continuous Observation of Instruction Following

no code implementations EACL 2017 Ting Han, David Schlangen

Grounded semantics is typically learnt from utterance-level meaning representations (e.g., successful database retrievals, denoted objects in images, moves in a game).

Instruction Following

DUEL: A Multi-lingual Multimodal Dialogue Corpus for Disfluency, Exclamations and Laughter

no code implementations LREC 2016 Julian Hough, Ye Tian, Laura de Ruiter, Simon Betz, Spyros Kousidis, David Schlangen, Jonathan Ginzburg

We present the DUEL corpus, consisting of 24 hours of natural, face-to-face, loosely task-directed dialogue in German, French and Mandarin Chinese.
