Search Results for author: Jae Hee Lee

Found 17 papers, 7 papers with code

Causal State Distillation for Explainable Reinforcement Learning

1 code implementation · 30 Dec 2023 · Wenhao Lu, Xufeng Zhao, Thilo Fryen, Jae Hee Lee, Mengdi Li, Sven Magg, Stefan Wermter

This lack of transparency in RL models has been a long-standing problem, making it difficult for users to grasp the reasons behind an agent's behaviour.

reinforcement-learning · Reinforcement Learning (RL)

Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models

no code implementations · 13 Dec 2023 · Kyra Ahrens, Hans Hergen Lehmann, Jae Hee Lee, Stefan Wermter

We address the Continual Learning (CL) problem, wherein a model must learn a sequence of tasks from non-stationary distributions while preserving prior knowledge upon encountering new experiences.

Class Incremental Learning · Incremental Learning

Visually Grounded Continual Language Learning with Selective Specialization

1 code implementation · 24 Oct 2023 · Kyra Ahrens, Lennart Bengtson, Jae Hee Lee, Stefan Wermter

Selective specialization, i.e., a careful selection of model components to specialize in each task, is a strategy to provide control over this trade-off.

Continual Learning

From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks

no code implementations · 18 Oct 2023 · Jae Hee Lee, Sergio Lanza, Stefan Wermter

In this paper, we review recent approaches for explaining concepts in neural networks.

Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic

1 code implementation · 23 Sep 2023 · Xufeng Zhao, Mengdi Li, Wenhao Lu, Cornelius Weber, Jae Hee Lee, Kun Chu, Stefan Wermter

Recent advancements in large language models have showcased their remarkable generalizability across various domains.

Causal Inference

Internally Rewarded Reinforcement Learning

1 code implementation · 1 Feb 2023 · Mengdi Li, Xufeng Zhao, Jae Hee Lee, Cornelius Weber, Stefan Wermter

We study a class of reinforcement learning problems where the reward signals for policy learning are generated by an internal reward model that is dependent on and jointly optimized with the policy.

reinforcement-learning · Reinforcement Learning (RL)
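The abstract above describes a setting in which the policy's training signal comes from an internal reward model that is optimized jointly with the policy. A minimal, hypothetical sketch of that setup (a toy two-armed bandit with REINFORCE; the names, learning rates, and environment are invented for illustration and are not the paper's method):

```python
import math, random

random.seed(0)
TRUE_SUCCESS = [0.2, 0.8]          # environment's hidden success rates
theta = [0.0, 0.0]                 # policy logits
reward_model = [0.5, 0.5]          # internal reward estimates, learned jointly
LR_POLICY, LR_REWARD = 0.1, 0.05

def softmax(logits):
    m = max(logits)
    e = [math.exp(l - m) for l in logits]
    s = sum(e)
    return [x / s for x in e]

for step in range(5000):
    probs = softmax(theta)
    a = 0 if random.random() < probs[0] else 1
    outcome = 1.0 if random.random() < TRUE_SUCCESS[a] else 0.0
    # the internal reward model is updated on observed outcomes ...
    reward_model[a] += LR_REWARD * (outcome - reward_model[a])
    # ... while the policy is trained on the *internal* reward, not the raw outcome
    r = reward_model[a]
    baseline = sum(p * m for p, m in zip(probs, reward_model))
    for i in range(2):
        grad = (1.0 - probs[i]) if i == a else -probs[i]
        theta[i] += LR_POLICY * (r - baseline) * grad

print(softmax(theta))  # policy should come to prefer the better arm (index 1)
```

The point of the sketch is the coupling: noise or bias in the reward model feeds directly into the policy gradient, which is the source of the instability such work studies.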

Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations

1 code implementation · 8 Dec 2022 · Björn Plüster, Jakob Ambsdorf, Lukas Braach, Jae Hee Lee, Stefan Wermter

Natural language explanations promise to offer intuitively understandable explanations of a neural network's decision process in complex vision-language tasks, as pursued in recent VL-NLE models.

Explanation Generation · Visual Entailment · +1

Learning Flexible Translation between Robot Actions and Language Descriptions

no code implementations · 15 Jul 2022 · Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter

In this work, we propose the paired gated autoencoders (PGAE) for flexible translation between robot actions and language descriptions in a tabletop object manipulation scenario.

Language Modelling · Multi-Task Learning · +1

Knowing Earlier what Right Means to You: A Comprehensive VQA Dataset for Grounding Relative Directions via Multi-Task Learning

1 code implementation · 6 Jul 2022 · Kyra Ahrens, Matthias Kerzel, Jae Hee Lee, Cornelius Weber, Stefan Wermter

Spatial reasoning poses a particular challenge for intelligent agents and is at the same time a prerequisite for their successful interaction and communication in the physical world.

Multi-Task Learning · Question Answering · +1

What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning

1 code implementation · 5 May 2022 · Jae Hee Lee, Matthias Kerzel, Kyra Ahrens, Cornelius Weber, Stefan Wermter

Grounding relative directions is more difficult than grounding absolute directions: a model must not only detect objects in the image and identify spatial relations from that information, but also recognize the orientation of objects and integrate it into the reasoning process.

Multi-Task Learning · Question Answering · +1
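The abstract above notes that relative directions depend on the reference object's orientation, not just on positions. A small geometric illustration of that point (function name, frame convention, and direction labels are invented for this sketch, not taken from the dataset):

```python
import math

def relative_direction(ref_xy, ref_heading, obj_xy):
    """Direction of obj as seen from ref, whose facing angle is ref_heading (radians)."""
    dx, dy = obj_xy[0] - ref_xy[0], obj_xy[1] - ref_xy[1]
    # rotate the offset by -ref_heading to express it in the reference object's frame
    x = dx * math.cos(-ref_heading) - dy * math.sin(-ref_heading)
    y = dx * math.sin(-ref_heading) + dy * math.cos(-ref_heading)
    if abs(x) >= abs(y):
        return "in front of" if x > 0 else "behind"
    return "left of" if y > 0 else "right of"

# Same absolute positions, different reference orientation, different answer:
print(relative_direction((0, 0), 0.0, (0, 1)))      # reference faces +x: "left of"
print(relative_direction((0, 0), math.pi, (0, 1)))  # reference faces -x: "right of"
```

Absolute directions ("above", "north of") need only the positional offset; the extra rotation step is precisely what a model grounding relative directions must learn implicitly.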

Language Model-Based Paired Variational Autoencoders for Robotic Language Learning

no code implementations · 17 Jan 2022 · Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter

Human infants learn language while interacting with their environment in which their caregivers may describe the objects and actions they perform.

Language Modelling

Generalization in Multimodal Language Learning from Simulation

no code implementations · 3 Aug 2021 · Aaron Eisermann, Jae Hee Lee, Cornelius Weber, Stefan Wermter

Neural networks can be powerful function approximators, which are able to model high-dimensional feature distributions from a subset of examples drawn from the target distribution.

Multiagent Simple Temporal Problem: The Arc-Consistency Approach

no code implementations · 22 Nov 2017 · Shufeng Kong, Jae Hee Lee, Sanjiang Li

The Simple Temporal Problem (STP) is a fundamental temporal reasoning problem and has recently been extended to the Multiagent Simple Temporal Problem (MaSTP).
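As background to the abstract above: an STP instance is a set of constraints of the form l ≤ x_j − x_i ≤ u over time points, and it is consistent exactly when its distance graph has no negative cycle. A standard single-agent consistency check via Floyd-Warshall (this classic check is given for context only; the paper itself concerns an arc-consistency approach to the multiagent extension):

```python
INF = float("inf")

def stp_consistent(n, constraints):
    """constraints: list of (i, j, low, up) meaning low <= x_j - x_i <= up."""
    # distance-graph encoding: edge i->j weighted up, edge j->i weighted -low
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, low, up in constraints:
        d[i][j] = min(d[i][j], up)    # x_j - x_i <= up
        d[j][i] = min(d[j][i], -low)  # x_i - x_j <= -low
    # Floyd-Warshall all-pairs shortest paths
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # a negative-weight cycle (d[i][i] < 0) means the constraints conflict
    return all(d[i][i] >= 0 for i in range(n))

# x1 is 10-20 after x0; x2 is 30-40 after x1; x2 is at most 45 (resp. 35) after x0:
print(stp_consistent(3, [(0, 1, 10, 20), (1, 2, 30, 40), (0, 2, 0, 45)]))  # True
print(stp_consistent(3, [(0, 1, 10, 20), (1, 2, 30, 40), (0, 2, 0, 35)]))  # False
```

The second instance fails because the chained constraints force x2 − x0 ≥ 40, contradicting the 35 bound.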
