Search Results for author: Norbert Braunschweiler

Found 5 papers, 0 papers with code

Combining Structured and Unstructured Knowledge in an Interactive Search Dialogue System

no code implementations · SIGDIAL (ACL) 2022 · Svetlana Stoyanchev, Suraj Pandey, Simon Keizer, Norbert Braunschweiler, Rama Sanand Doddipatla

Based on objective metrics and subjective user ratings, we demonstrate the feasibility of using an unsupervised, low-latency approach to extend a schema-driven search dialogue system to handle unconstrained user preferences.

Semantic Similarity · Semantic Textual Similarity
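
The task tags above point to semantic textual similarity as the matching mechanism. As a rough illustration (not the paper's actual pipeline), unconstrained preferences can be matched against unstructured review text with off-the-shelf sentence embeddings; the model name, example data, and ranking strategy below are assumptions:

# Hypothetical sketch: matching an unconstrained user preference against
# unstructured review snippets via semantic textual similarity. The model
# choice and matching strategy are illustrative assumptions, not the
# system described in the paper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

preference = "a quiet place with a romantic atmosphere"
review_snippets = {
    "Venue A": "Lovely candle-lit dinner, very peaceful in the evenings.",
    "Venue B": "Great for big groups, loud music and a lively crowd.",
    "Venue C": "Cosy corner tables, soft lighting, ideal for couples.",
}

# Embed the preference and all snippets, then rank venues by cosine similarity.
pref_emb = model.encode(preference, convert_to_tensor=True)
snippet_embs = model.encode(list(review_snippets.values()), convert_to_tensor=True)
scores = util.cos_sim(pref_emb, snippet_embs)[0]

for venue, score in sorted(zip(review_snippets, scores.tolist()),
                           key=lambda x: x[1], reverse=True):
    print(f"{venue}: {score:.3f}")

In a setup like this, latency can stay low because venue snippets can be embedded once offline, leaving only the user preference to encode at query time.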

Evaluating Large Language Models for Document-grounded Response Generation in Information-Seeking Dialogues

no code implementations · 21 Sep 2023 · Norbert Braunschweiler, Rama Doddipatla, Simon Keizer, Svetlana Stoyanchev

Observing that document-grounded response generation via LLMs cannot be adequately assessed by automatic evaluation metrics, since the LLM outputs are significantly more verbose, we perform a human evaluation in which annotators rate the output of the shared-task-winning system, the outputs of the two ChatGPT variants, and human responses.

Response Generation

Adversarial learning of neural user simulators for dialogue policy optimisation

no code implementations · 1 Jun 2023 · Simon Keizer, Caroline Dockes, Norbert Braunschweiler, Svetlana Stoyanchev, Rama Doddipatla

Reinforcement learning-based dialogue policies are typically trained in interaction with a user simulator.
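
A minimal sketch of that setup follows; the toy state and action spaces, the hand-written rule-based simulator, and tabular Q-learning are all illustrative assumptions, and the sketch does not implement the adversarially learned neural simulator that is the paper's contribution:

# Toy illustration: a dialogue policy trained by reinforcement learning
# in interaction with a user simulator. All names and reward values here
# are hypothetical.
import random
from collections import defaultdict

ACTIONS = ["request_area", "request_food", "offer_venue"]

class ToyUserSimulator:
    """Rule-based stand-in for a user simulator: answers slot requests and
    ends the dialogue successfully once both slots are filled and a venue
    is offered."""
    def respond(self, state, action):
        if action == "request_area" and not state["area"]:
            return {**state, "area": True}, 0.0, False
        if action == "request_food" and not state["food"]:
            return {**state, "food": True}, 0.0, False
        if action == "offer_venue" and state["area"] and state["food"]:
            return state, 1.0, True    # successful dialogue
        return state, -0.1, False      # redundant or premature action

def train(episodes=2000, eps=0.1, alpha=0.2, gamma=0.95):
    sim = ToyUserSimulator()
    q = defaultdict(float)             # Q[(state, action)] for tabular Q-learning
    for _ in range(episodes):
        state = {"area": False, "food": False}
        done, turns = False, 0
        while not done and turns < 20:
            turns += 1
            key = (state["area"], state["food"])
            # Epsilon-greedy action selection over the current Q-values.
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(key, a)])
            next_state, reward, done = sim.respond(state, action)
            next_key = (next_state["area"], next_state["food"])
            best_next = max(q[(next_key, a)] for a in ACTIONS)
            q[(key, action)] += alpha * (reward + gamma * best_next - q[(key, action)])
            state = next_state
    return q

if __name__ == "__main__":
    q = train()
    # Print the greedy action learned for each reachable state.
    for key in [(False, False), (True, False), (True, True)]:
        print(key, "->", max(ACTIONS, key=lambda a: q[(key, a)]))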

Dialogue Strategy Adaptation to New Action Sets Using Multi-dimensional Modelling

no code implementations · 14 Apr 2022 · Simon Keizer, Norbert Braunschweiler, Svetlana Stoyanchev, Rama Doddipatla

A major bottleneck for building statistical spoken dialogue systems for new domains and applications is the need for large amounts of training data.

Dialogue Management · Management · +2

A study on cross-corpus speech emotion recognition and data augmentation

no code implementations · 10 Jan 2022 · Norbert Braunschweiler, Rama Doddipatla, Simon Keizer, Svetlana Stoyanchev

Models trained on mixed corpora can be more stable in mismatched contexts, with performance reductions ranging from 1 to 8% compared with single-corpus models in matched conditions.

Cross-corpus · Data Augmentation · +1
