Search Results for author: Alex Lascarides

Found 16 papers, 5 papers with code

Symbol Grounding and Task Learning from Imperfect Corrections

no code implementations • ACL (SpLU-RoboNLP) 2021 • Mattias Appelgren, Alex Lascarides

This paper describes a method for learning from a teacher’s potentially unreliable corrective feedback in an interactive task learning setting.

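A minimal sketch of the underlying idea, assuming a Bayesian learner: candidate task rules are reweighted by a teacher correction that is only reliable with some fixed probability. All rule names, utterances, and the accuracy value below are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: Bayesian belief update over candidate task rules when
# the teacher's corrections are only reliable with probability
# TEACHER_ACCURACY. Everything here is illustrative.

TEACHER_ACCURACY = 0.9

# Hypothetical candidate rules, each mapped to the corrections it entails.
RULES = {
    "red_blocks_on_left": {"no, red ones go on the left"},
    "blue_blocks_on_left": {"no, blue ones go on the left"},
}

def update_beliefs(beliefs, correction):
    """One Bayesian update step from a possibly unreliable correction."""
    posterior = {}
    for rule, prior in beliefs.items():
        # A rule consistent with the correction explains it when the teacher
        # is accurate; an inconsistent rule only when the teacher errs.
        consistent = correction in RULES[rule]
        likelihood = TEACHER_ACCURACY if consistent else 1 - TEACHER_ACCURACY
        posterior[rule] = prior * likelihood
    total = sum(posterior.values())
    return {rule: weight / total for rule, weight in posterior.items()}

beliefs = {rule: 1 / len(RULES) for rule in RULES}
beliefs = update_beliefs(beliefs, "no, red ones go on the left")
print(beliefs)  # mass shifts toward "red_blocks_on_left", but not to 1.0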

Interactive Symbol Grounding with Complex Referential Expressions

1 code implementation • NAACL 2022 • Rimvydas Rubavicius, Alex Lascarides

We present a procedure for learning to ground symbols from a sequence of stimuli consisting of an arbitrarily complex noun phrase (e.g. “all but one green square above both red circles”) and its designation in the visual scene.

Few-Shot Learning • Negation
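The referential expressions in question compose attribute and spatial constraints. As a toy illustration only (the paper learns the grounding from stimuli; here the predicates are hand-coded and the scene is invented), evaluating a simpler fragment of the example phrase against a symbolic scene might look like:

```python
# Hedged sketch: denotation of "green squares above both red circles"
# in a hand-built symbolic scene. Object layout is invented.

SCENE = [
    {"id": 0, "color": "green", "shape": "square", "x": 1, "y": 3},
    {"id": 1, "color": "green", "shape": "square", "x": 2, "y": 3},
    {"id": 2, "color": "red",   "shape": "circle", "x": 1, "y": 1},
    {"id": 3, "color": "red",   "shape": "circle", "x": 2, "y": 1},
]

def matches(obj, color, shape):
    return obj["color"] == color and obj["shape"] == shape

def above_all(obj, others):
    # "above both red circles": strictly higher y than every reference object.
    return all(obj["y"] > other["y"] for other in others)

red_circles = [o for o in SCENE if matches(o, "red", "circle")]
denotation = [o for o in SCENE
              if matches(o, "green", "square") and above_all(o, red_circles)]
print([o["id"] for o in denotation])  # [0, 1]
```

The full example ("all but one ...") additionally needs quantifier semantics on top of this kind of compositional filter.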

Dynamic Planning with a LLM

1 code implementation • 11 Aug 2023 • Gautier Dagan, Frank Keller, Alex Lascarides

While Large Language Models (LLMs) can solve many NLP tasks in zero-shot settings, applications involving embodied agents remain problematic.
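One way to picture the embodied setting is a propose/validate/execute loop in which the LLM suggests actions and the environment grounds them. The sketch below is an assumption-laden stand-in, not the paper's implementation: `query_llm` is a stub for any completion backend, and the affordance check is hypothetical.

```python
# Hedged sketch of an LLM-in-the-loop planner for an embodied agent.

def query_llm(prompt: str) -> str:
    """Stub for any chat-completion backend; returns one action string."""
    return "pick_up(apple)"  # placeholder response

def is_valid(action: str, state: dict) -> bool:
    # Grounded affordance check: reject actions the environment cannot execute.
    return action in state["afforded_actions"]

def plan_step(goal: str, state: dict, history: list) -> str:
    prompt = f"Goal: {goal}\nState: {state}\nSo far: {history}\nNext action:"
    action = query_llm(prompt)
    if not is_valid(action, state):
        # Feed the failure back so the model can repair its plan.
        prompt += f"\n{action} is invalid here. Try another action:"
        action = query_llm(prompt)
    return action

state = {"afforded_actions": {"pick_up(apple)", "open(fridge)"}}
print(plan_step("put the apple in the fridge", state, history=[]))
```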

Learning Manner of Execution from Partial Corrections

no code implementations • 7 Feb 2023 • Mattias Appelgren, Alex Lascarides

Some actions must be executed in different ways depending on the context.

Learning the Effects of Physical Actions in a Multi-modal Environment

1 code implementation • 27 Jan 2023 • Gautier Dagan, Frank Keller, Alex Lascarides

Predicting the effects of an action before it is executed is crucial in planning, where coherent sequences of actions are often needed to achieve a goal.

Physical Commonsense Reasoning
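To see why effect prediction matters, consider simulating a candidate plan with a transition model before acting. In the sketch below the effects model is hand-coded rather than learned, and the domain is invented:

```python
# Hedged sketch: roll a transition model forward to check that a
# candidate action sequence reaches the goal before executing anything.

def predict_effect(state: frozenset, action: str) -> frozenset:
    """Stand-in for a learned effects model over symbolic state fluents."""
    if action == "toggle_switch":
        return state ^ {"light_on"}          # flips the light
    if action == "open_door" and "light_on" in state:
        return state | {"door_open"}         # only works with the light on
    return state                             # no effect otherwise

def reaches_goal(state, plan, goal):
    for action in plan:
        state = predict_effect(state, action)
    return goal <= state

start = frozenset()
goal = {"door_open"}
print(reaches_goal(start, ["toggle_switch", "open_door"], goal))  # True
print(reaches_goal(start, ["open_door"], goal))                   # False
```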

Learning Factored Markov Decision Processes with Unawareness

no code implementations • 27 Feb 2019 • Craig Innes, Alex Lascarides

Methods for learning and planning in sequential decision problems often assume the learner is aware of all possible states and actions in advance.
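A cartoon of the unawareness setting, not the paper's algorithm: the agent maintains a model over the variables and actions it currently knows, and expands it when an observation mentions something outside that vocabulary. All names below are invented.

```python
# Hedged sketch: a model that grows when experience reveals state
# variables the learner was previously unaware of.

class UnawareModel:
    def __init__(self, known_vars, known_actions):
        self.known_vars = set(known_vars)
        self.known_actions = set(known_actions)
        self.transitions = {}  # (state, action) -> observed next state

    def observe(self, state, action, next_state):
        # Any variable in the observation we did not model triggers expansion.
        unseen = set(next_state) - self.known_vars
        if unseen:
            print(f"expanding model with new variables: {unseen}")
            self.known_vars |= unseen
        self.known_actions.add(action)
        self.transitions[(frozenset(state), action)] = next_state

model = UnawareModel(known_vars={"door_open"}, known_actions={"open_door"})
model.observe({"door_open"}, "press_button", {"door_open", "alarm_on"})
# -> expands the model with the previously unforeseen variable "alarm_on"
```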

Interpretable Latent Spaces for Learning from Demonstration

no code implementations • 17 Jul 2018 • Yordan Hristov, Alex Lascarides, Subramanian Ramamoorthy

Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to be able to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world.
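One simple way to picture grounding a concept in high-dimensional input, offered as a hedged sketch rather than the paper's method: estimate a linear "concept axis" from a few labelled demonstrations and score new sensory vectors along it. The data below is synthetic.

```python
# Hedged sketch: name a latent direction by separating labelled
# demonstrations, then use it to score unseen sensory input.

import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Hypothetical sensory features for demos labelled "heavy" vs "light";
# the "heavy" cluster is shifted along one underlying feature.
heavy = rng.normal(0.0, 1.0, size=(20, dim)) + 2.0 * np.eye(dim)[0]
light = rng.normal(0.0, 1.0, size=(20, dim))

# The concept axis is the direction separating the two labelled clusters,
# which gives that latent dimension an interpretable name.
axis = heavy.mean(axis=0) - light.mean(axis=0)
axis /= np.linalg.norm(axis)

def heaviness(x):
    """Score a new sensory vector along the grounded 'heavy' axis."""
    return float(x @ axis)

print(heaviness(heavy[0]), heaviness(light[0]))  # heavy scores higher
```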

Reasoning about Unforeseen Possibilities During Policy Learning

no code implementations • 10 Jan 2018 • Craig Innes, Alex Lascarides, Stefano V. Albrecht, Subramanian Ramamoorthy, Benjamin Rosman

Methods for learning optimal policies in autonomous agents often assume that the way the domain is conceptualised (its possible states and actions and their causal structure) is known in advance and does not change during learning.
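A small illustration of one ingredient this setting needs, with invented numbers and names: monitoring how surprising experience is under the current domain model, so that a spike can trigger revising the conceptualisation itself rather than just the policy.

```python
# Hedged sketch: flag observations the current model cannot explain.

import math

def surprise(observed_next, predicted_dist):
    """Negative log-probability of what actually happened."""
    p = predicted_dist.get(observed_next, 1e-9)  # unmodelled outcomes get ~0 mass
    return -math.log(p)

# Current model: pressing the lever either pays out or does nothing.
predicted = {"payout": 0.3, "nothing": 0.7}

print(surprise("nothing", predicted))   # low: fits the model
print(surprise("trapdoor", predicted))  # huge: an unforeseen possibility
```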

Grounding Symbols in Multi-Modal Instructions

no code implementations • WS 2017 • Yordan Hristov, Svetlin Penkov, Alex Lascarides, Subramanian Ramamoorthy

As robots begin to cohabit with humans in semi-structured environments, the need arises to understand instructions involving rich variability, for instance learning to ground symbols in the physical world.
