Search Results for author: Oliver Lemon

Found 66 papers, 8 papers with code

A Visually-Aware Conversational Robot Receptionist

no code implementations SIGDIAL (ACL) 2022 Nancie Gunson, Daniel Hernandez Garcia, Weronika Sieińska, Angus Addlesee, Christian Dondrup, Oliver Lemon, Jose L. Part, Yanchao Yu

Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts including healthcare, but most existing systems have very limited interactive capabilities.

Question Answering

Conversational Agents for Intelligent Buildings

no code implementations SIGDIAL (ACL) 2020 Weronika Sieińska, Christian Dondrup, Nancie Gunson, Oliver Lemon

We will demonstrate a deployed conversational AI system that acts as the host of a smart building on a university campus.

The Spoon Is in the Sink: Assisting Visually Impaired People in the Kitchen

1 code implementation ReInAct 2021 Katie Baker, Amit Parekh, Adrien Fabre, Angus Addlesee, Ruben Kruiper, Oliver Lemon

Questions about the spatial relations between these objects are particularly helpful to visually impaired people, and our system outputs more usable answers than other state-of-the-art end-to-end VQA systems.

Question Answering, Visual Question Answering

Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers

1 code implementation 21 Apr 2024 Georgios Pantazopoulos, Alessandro Suglia, Oliver Lemon, Arash Eshghi

In this paper, we use diagnostic classifiers to measure the extent to which the visual prompt produced by the resampler encodes spatial information.
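A diagnostic (probing) classifier of this kind is a simple supervised model trained on frozen features. The sketch below uses synthetic stand-ins, not the paper's resampler outputs: the "features" are random vectors with a spatial-relation label planted in one dimension, and a logistic-regression probe checks whether that information is linearly recoverable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen features (e.g. visual prompt vectors), each labelled with a
# binary spatial relation such as left-of vs right-of. The label signal is
# deliberately planted in dimension 0 so the probe has something to find.
n, d = 200, 64
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)

def train_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression diagnostic classifier, trained by batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)        # gradient of the logistic loss
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_probe(X, y)
acc = float(np.mean(((X @ w + b) > 0) == (y == 1)))
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy is taken as evidence that the frozen representation encodes the property; near-chance accuracy suggests it does not.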

NLP Verification: Towards a General Methodology for Certifying Robustness

no code implementations 15 Mar 2024 Marco Casadio, Tanvi Dinkar, Ekaterina Komendantskaya, Luca Arnaboldi, Omri Isac, Matthew L. Daggitt, Guy Katz, Verena Rieser, Oliver Lemon

We propose a number of practical NLP methods that can help to identify the effects of the embedding gap; in particular, we propose falsifiability of semantic subspaces as another fundamental metric to be reported as part of the NLP verification pipeline.

Visually Grounded Language Learning: a review of language games, datasets, tasks, and models

no code implementations 5 Dec 2023 Alessandro Suglia, Ioannis Konstas, Oliver Lemon

Our analysis of the literature provides evidence that future work should focus on interactive games where communication in Natural Language is important for resolving ambiguities about object referents and action plans, and that physical embodiment is essential for understanding the semantics of situations and events.

Grounded language learning, Language Modelling +1

Multitask Multimodal Prompted Training for Interactive Embodied Task Completion

no code implementations 7 Nov 2023 Georgios Pantazopoulos, Malvina Nikandrou, Amit Parekh, Bhathiya Hemanthage, Arash Eshghi, Ioannis Konstas, Verena Rieser, Oliver Lemon, Alessandro Suglia

Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models, including 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation.

Text Generation

Detecting Agreement in Multi-party Conversational AI

no code implementations 6 Nov 2023 Laura Schauer, Jason Sweeney, Charlie Lyttle, Zein Said, Aron Szeles, Cale Clark, Katie McAskill, Xander Wickham, Tom Byars, Daniel Hernández Garcia, Nancie Gunson, Angus Addlesee, Oliver Lemon

Today, conversational systems are expected to handle conversations in multi-party settings, especially within Socially Assistive Robots (SARs).

Speaker Recognition

FurChat: An Embodied Conversational Agent using LLMs, Combining Open and Closed-Domain Dialogue with Facial Expressions

no code implementations 29 Aug 2023 Neeraj Cherakara, Finny Varghese, Sheena Shabana, Nivan Nelson, Abhiram Karukayil, Rohith Kulothungan, Mohammed Afil Farhan, Birthe Nesset, Meriam Moujahid, Tanvi Dinkar, Verena Rieser, Oliver Lemon

We demonstrate an embodied conversational agent that can function as a receptionist and generate a mixture of open and closed-domain dialogue along with facial expressions, by using a large language model (LLM) to develop an engaging conversation.

Language Modelling, Large Language Model +1

Encoding Syntactic Constituency Paths for Frame-Semantic Parsing with Graph Convolutional Networks

no code implementations 26 Nov 2020 Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon

We study the problem of integrating syntactic information from constituency trees into a neural model in Frame-semantic parsing sub-tasks, namely Target Identification (TI), Frame Identification (FI), and Semantic Role Labeling (SRL).

Semantic Parsing, Semantic Role Labeling +1

Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games

no code implementations COLING 2020 Alessandro Suglia, Antonio Vergari, Ioannis Konstas, Yonatan Bisk, Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon

However, as shown by Suglia et al. (2020), existing models fail to learn truly multi-modal representations, relying instead on gold category labels for objects in the scene both at training and inference time.

Object

CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning

no code implementations ACL 2020 Alessandro Suglia, Ioannis Konstas, Andrea Vanzo, Emanuele Bastianelli, Desmond Elliott, Stella Frank, Oliver Lemon

To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation.

Attribute, Grounded language learning

Neural Response Ranking for Social Conversation: A Data-Efficient Approach

1 code implementation WS 2018 Igor Shalyminov, Ondřej Dušek, Oliver Lemon

Using a dataset of real conversations collected in the 2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good' system responses to user utterances, i.e. responses which are likely to lead to long and engaging conversations.
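The core of any response ranker is scoring candidate responses against the dialogue context and picking the best. The sketch below is a deliberately minimal bag-of-words cosine scorer over hypothetical utterances, standing in for the paper's trained neural encoder:

```python
import numpy as np

def build_vocab(texts):
    """Map each token in the corpus to a fixed index."""
    vocab = {}
    for t in texts:
        for tok in t.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def embed(text, vocab):
    """Normalised bag-of-words vector (a stand-in for a trained neural encoder)."""
    v = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            v[vocab[tok]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def rank_responses(context, candidates, vocab):
    """Score each candidate response against the context; best-scoring first."""
    c = embed(context, vocab)
    return sorted(((float(embed(r, vocab) @ c), r) for r in candidates), reverse=True)

context = "do you like science fiction movies"
candidates = [
    "yes i love science fiction movies",
    "the weather is nice today",
]
vocab = build_vocab([context] + candidates)
score, best = rank_responses(context, candidates, vocab)[0]
print(best)
```

In the actual system a learned ranker replaces the cosine score, trained so that responses leading to long, engaging conversations score higher.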

Multi-Task Learning for Domain-General Spoken Disfluency Detection in Dialogue Systems

no code implementations 8 Oct 2018 Igor Shalyminov, Arash Eshghi, Oliver Lemon

To test the model's generalisation potential, we evaluate the same model on the bAbI+ dataset, without any additional training.

Multi-Task Learning

Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings

no code implementations WS 2017 Yanchao Yu, Arash Eshghi, Oliver Lemon

We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data.

Reinforcement Learning (RL)

Training an adaptive dialogue policy for interactive learning of visually grounded word meanings

no code implementations WS 2016 Yanchao Yu, Arash Eshghi, Oliver Lemon

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor.

Semantic Parsing

Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena

1 code implementation 22 Sep 2017 Igor Shalyminov, Arash Eshghi, Oliver Lemon

Results show that the semantic accuracy of the MemN2N model drops drastically; and that although it is in principle able to learn to process the constructions in bAbI+, it needs an impractical amount of training data to do so.

Retrieval, Sentence

VOILA: An Optimised Dialogue System for Interactively Learning Visually-Grounded Word Meanings (Demonstration System)

no code implementations WS 2017 Yanchao Yu, Arash Eshghi, Oliver Lemon

We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user.

Active Learning

Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction

no code implementations WS 2017 Jekaterina Novikova, Christian Dondrup, Ioannis Papaioannou, Oliver Lemon

We find that happiness in the user's recognised facial expression strongly correlates with likeability of a robot, while dialogue-related features (such as number of human turns or number of sentences per robot utterance) correlate with perceiving a robot as intelligent.
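Correlations of the kind reported above are typically Pearson coefficients between a per-interaction feature and a user rating. A small sketch on synthetic data (the scores below are invented, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-interaction measurements: a recognised happiness score for
# the user's facial expression and a post-hoc likeability rating for the robot,
# constructed so the two are strongly (but not perfectly) related.
happiness = rng.random(50)
likeability = 2.0 * happiness + 0.3 * rng.normal(size=50)

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

r = pearson(happiness, likeability)
print(f"r = {r:.2f}")
```

A coefficient near +1 indicates a strong positive relationship; in practice one would also report a significance test alongside r.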

Bootstrapping incremental dialogue systems: using linguistic knowledge to learn from minimal data

no code implementations 1 Dec 2016 Dimitrios Kalatzis, Arash Eshghi, Oliver Lemon

We present a method for inducing new dialogue systems from very small amounts of unannotated dialogue data, showing how word-level exploration using Reinforcement Learning (RL), combined with an incremental and semantic grammar - Dynamic Syntax (DS) - allows systems to discover, generate, and understand many new dialogue variants.

Dialogue Management, Management +2

Crowd-sourcing NLG Data: Pictures Elicit Better Data

no code implementations 1 Aug 2016 Jekaterina Novikova, Oliver Lemon, Verena Rieser

Recent advances in corpus-based Natural Language Generation (NLG) hold the promise of being easily portable across domains, but require costly training data, consisting of meaning representations (MRs) paired with Natural Language (NL) utterances.

Text Generation

Natural Language Generation as Planning under Uncertainty Using Reinforcement Learning

no code implementations 15 Jun 2016 Verena Rieser, Oliver Lemon

We present and evaluate a new model for Natural Language Generation (NLG) in Spoken Dialogue Systems, based on statistical planning, given noisy feedback from the current generation context (e.g. a user and a surface realiser).

reinforcement-learning, Reinforcement Learning (RL) +2

Strategic Dialogue Management via Deep Reinforcement Learning

1 code implementation 25 Nov 2015 Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon

This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting.

Dialogue Management, Management +2

On the Linear Belief Compression of POMDPs: A re-examination of current methods

no code implementations 5 Aug 2015 Zhuoran Wang, Paul A. Crook, Wenshuo Tang, Oliver Lemon

Belief compression improves the tractability of large-scale partially observable Markov decision processes (POMDPs) by finding projections from high-dimensional belief space onto low-dimensional approximations, where solving to obtain action selection policies requires fewer computations.
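A concrete instance of the linear belief compression described above is PCA: project belief vectors onto their top-k principal directions and plan in that low-dimensional space. The sketch below uses synthetic beliefs (not the paper's POMDP domains), constructed as convex combinations of a few "corner" distributions so that they genuinely occupy a low-dimensional subspace of the belief simplex:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: beliefs over S states that by construction are convex combinations
# of k corner distributions, hence lie in a low-dimensional affine subspace.
n, S, k = 300, 100, 5
corners = rng.random((k, S))
corners /= corners.sum(axis=1, keepdims=True)    # each corner is a distribution
weights = rng.random((n, k))
weights /= weights.sum(axis=1, keepdims=True)    # convex-combination weights
beliefs = weights @ corners                      # each row sums to 1

def linear_compression(B, k):
    """PCA-style linear belief compression: project onto the top-k principal
    directions, then map back to belief space to reconstruct."""
    mean = B.mean(axis=0)
    centred = B - mean
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    proj = Vt[:k]                    # k x S projection matrix
    low = centred @ proj.T           # compressed beliefs (n x k)
    recon = low @ proj + mean        # lifted back to the full belief space
    return low, recon

low, recon = linear_compression(beliefs, k)
err = float(np.abs(beliefs - recon).max())
print(low.shape, f"max reconstruction error: {err:.2e}")
```

Because the synthetic beliefs are exactly low-rank, the reconstruction is exact up to floating-point error; on real POMDP beliefs the projection is lossy, which is precisely the trade-off the paper re-examines.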
