Search Results for author: Mark Neerincx

Found 7 papers, 3 papers with code

Modelling prospective memory and resilient situated communications via Wizard of Oz

no code implementations · 9 Nov 2023 · Yanzhe Li, Frank Broz, Mark Neerincx

This abstract presents a scenario for human-robot action in a home setting involving an older adult and a robot.

Designing for Meaningful Human Control in Military Human-Machine Teams

no code implementations · 12 May 2023 · Jurriaan van Diggelen, Karel van den Bosch, Mark Neerincx, Marc Steen

We propose methods for analysis, design, and evaluation of Meaningful Human Control (MHC) for defense technologies from the perspective of military human-machine teaming (HMT).

A Machine with Short-Term, Episodic, and Semantic Memory Systems

1 code implementation · 5 Dec 2022 · Taewoon Kim, Michael Cochez, Vincent François-Lavet, Mark Neerincx, Piek Vossen

Inspired by the cognitive science theory of explicit human memory systems, we have modeled an agent with short-term, episodic, and semantic memory systems, each of which is modeled with a knowledge graph.

Q-Learning · RoomEnv-v1
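The abstract above describes an agent whose short-term, episodic, and semantic memories are each modeled as a knowledge graph. A minimal toy sketch of that idea (not the authors' actual RoomEnv implementation; `MemoryAgent`, `observe`, `answer`, and `promote_after` are illustrative names) stores timestamped triples episodically and promotes repeated observations into timeless semantic facts:

```python
from collections import Counter

class MemoryAgent:
    """Toy sketch: episodic memory holds timestamped (head, relation, tail)
    triples; semantic memory holds generalized, timeless facts."""

    def __init__(self, capacity=4, promote_after=2):
        self.episodic = []      # [(head, relation, tail, timestamp)]
        self.semantic = set()   # {(head, relation, tail)}
        self.counts = Counter()
        self.capacity = capacity
        self.promote_after = promote_after

    def observe(self, head, relation, tail, t):
        self.episodic.append((head, relation, tail, t))
        self.counts[(head, relation, tail)] += 1
        # promote frequently repeated observations into semantic memory
        if self.counts[(head, relation, tail)] >= self.promote_after:
            self.semantic.add((head, relation, tail))
        # forget the oldest episodic trace when over capacity
        if len(self.episodic) > self.capacity:
            self.episodic.pop(0)

    def answer(self, head, relation):
        # prefer the most recent episodic trace, fall back to semantic facts
        for h, r, tail, _ in reversed(self.episodic):
            if h == head and r == relation:
                return tail
        for h, r, tail in self.semantic:
            if h == head and r == relation:
                return tail
        return None

agent = MemoryAgent()
for t in range(3):
    agent.observe("laptop", "at_location", "desk", t)
agent.observe("keys", "at_location", "drawer", 3)
print(agent.answer("keys", "at_location"))    # answered from episodic memory
print(agent.answer("laptop", "at_location"))  # also promoted to semantic memory
```

The split mirrors the paper's premise: episodic entries decay under a capacity limit, while facts seen often enough survive as semantic knowledge the agent can still query afterwards.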

A Machine With Human-Like Memory Systems

1 code implementation · 4 Apr 2022 · Taewoon Kim, Michael Cochez, Vincent François-Lavet, Mark Neerincx, Piek Vossen

Inspired by the cognitive science theory, we explicitly model an agent with both semantic and episodic memory systems, and show that it is better than having just one of the two memory systems.

RoomEnv-v0

A Blast From the Past: Personalizing Predictions of Video-Induced Emotions using Personal Memories as Context

no code implementations · 27 Aug 2020 · Bernd Dudzik, Joost Broekens, Mark Neerincx, Hayley Hung

A key challenge in the accurate prediction of viewers' emotional responses to video stimuli in real-world applications is accounting for person- and situation-specific variation.

Contrastive Explanations with Local Foil Trees

2 code implementations · 19 Jun 2018 · Jasper van der Waa, Marcel Robeer, Jurriaan van Diggelen, Matthieu Brinkhuis, Mark Neerincx

Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks.

Explainable Artificial Intelligence (XAI) · General Classification · +1
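A contrastive explanation answers "why class A and not foil class B?" by pointing at what would have to change. The sketch below is a heavily simplified stand-in for the paper's local foil-tree method: instead of fitting a decision tree, it perturbs the instance locally, labels each perturbation fact-vs-foil with the black box, and reports the single-feature threshold rule that best separates the two (`blackbox` and `contrastive_explanation` are illustrative names, not the authors' API):

```python
import random

def blackbox(x):
    # stand-in black-box classifier: "A" below the line 2*x0 + x1 = 10, else "B"
    return "A" if 2 * x[0] + x[1] < 10 else "B"

def contrastive_explanation(x, foil, predict, n_samples=500, scale=3.0, seed=0):
    """Perturb x locally, label each sample by whether the black box
    assigns the foil class, then find the one-feature threshold rule
    with the fewest disagreements."""
    rng = random.Random(seed)
    samples = [[xi + rng.uniform(-scale, scale) for xi in x]
               for _ in range(n_samples)]
    labels = [predict(z) == foil for z in samples]
    best = None  # (error, feature, threshold, direction)
    for f in range(len(x)):
        for t in sorted({z[f] for z in samples}):
            for gt in (True, False):  # direction: "> t" or "<= t" implies foil
                err = sum(((z[f] > t) == gt) != y
                          for z, y in zip(samples, labels))
                if best is None or err < best[0]:
                    best = (err, f, t, gt)
    return best

x = [2.0, 3.0]                                  # blackbox(x) == "A" (the fact)
err, f, t, gt = contrastive_explanation(x, foil="B", predict=blackbox)
op = ">" if gt else "<="
print(f"Why 'A' and not 'B'? Locally, feature {f} {op} {t:.1f} flips the prediction")
```

The output is a contrastive statement in the spirit of the paper: the decision depends locally on feature 0 (the heavily weighted one), and pushing it past the learned threshold yields the foil class. The actual method generalizes this by reading such rules off the path of a locally trained decision tree.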
