Search Results for author: Jakob Suchan

Found 15 papers, 0 papers with code

Assessing Drivers' Situation Awareness in Semi-Autonomous Vehicles: ASP based Characterisations of Driving Dynamics for Modelling Scene Interpretation and Projection

no code implementations • 30 Aug 2023 • Jakob Suchan, Jan-Patrick Osterloh

Furthermore, we present the ASP approach for interpretation and projection of the driver's situation awareness, and its integration within the overall system in the context of a real-world use case, in simulated as well as real driving.

Autonomous Driving

Commonsense Visual Sensemaking for Autonomous Driving: On Generalised Neurosymbolic Online Abduction Integrating Vision and Semantics

no code implementations • 28 Dec 2020 • Jakob Suchan, Mehul Bhatt, Srikrishna Varadarajan

The method integrates the state of the art in visual computing and is developed as a modular framework that is generally usable within hybrid architectures for real-time perception and control.

Autonomous Driving · Question Answering

Towards a Human-Centred Cognitive Model of Visuospatial Complexity in Everyday Driving

no code implementations • 29 May 2020 • Vasiliki Kondyli, Mehul Bhatt, Jakob Suchan

We develop a human-centred, cognitive model of visuospatial complexity in everyday, naturalistic driving conditions.

Benchmarking

Out of Sight But Not Out of Mind: An Answer Set Programming Based Online Abduction Framework for Visual Sensemaking in Autonomous Driving

no code implementations • 31 May 2019 • Jakob Suchan, Mehul Bhatt, Srikrishna Varadarajan

We demonstrate the need and potential of systematically integrated vision and semantics solutions for visual sensemaking (in the backdrop of autonomous driving).

Autonomous Driving · Question Answering

Semantic Analysis of (Reflectional) Visual Symmetry: A Human-Centred Computational Model for Declarative Explainability

no code implementations • 31 May 2018 • Jakob Suchan, Mehul Bhatt, Srikrishna Varadarajan, Seyed Ali Amirshahi, Stella Yu

Key features include a human-centred representation, and a declarative, explainable interpretation model supporting deep semantic question-answering founded on an integration of methods in knowledge representation and deep learning based computer vision.

Question Answering

Answer Set Programming Modulo `Space-Time'

no code implementations • 17 May 2018 • Carl Schultz, Mehul Bhatt, Jakob Suchan, Przemysław Wałęga

We present ASP Modulo `Space-Time', a declarative representational and computational framework to perform commonsense reasoning about regions with both spatial and temporal components.
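The `Space-Time' entities described above — regions with both spatial and temporal components — can be loosely illustrated in plain Python. This is an illustrative sketch only, not the paper's ASP framework: the function names (`allen_relation`, `rcc_relation`, `spacetime_relation`) and the coarse relation vocabularies are assumptions made for the example.

```python
# Illustrative sketch only: a toy Python analogue of qualitative
# "space-time" reasoning over regions; NOT the authors' ASP encoding.

def allen_relation(a, b):
    """Coarse Allen-style relation between intervals a=(start, end), b=(start, end)."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if b2 < a1:
        return "after"
    if (a1, a2) == (b1, b2):
        return "equal"
    if a1 >= b1 and a2 <= b2:
        return "during"
    if b1 >= a1 and b2 <= a2:
        return "contains"
    return "overlaps"

def rcc_relation(r, s):
    """Coarse RCC-style relation between axis-aligned boxes (x1, y1, x2, y2)."""
    rx1, ry1, rx2, ry2 = r
    sx1, sy1, sx2, sy2 = s
    if rx2 < sx1 or sx2 < rx1 or ry2 < sy1 or sy2 < ry1:
        return "disconnected"
    if rx1 >= sx1 and ry1 >= sy1 and rx2 <= sx2 and ry2 <= sy2:
        return "inside"
    return "overlapping"

def spacetime_relation(region_a, region_b):
    """Combine temporal and spatial qualitative relations for two
    regions, each given as (time_interval, bounding_box)."""
    (ta, ba), (tb, bb) = region_a, region_b
    return (allen_relation(ta, tb), rcc_relation(ba, bb))
```

In the actual framework such relations are first-class objects of a declarative ASP program, so they can be jointly constrained and reasoned about, rather than merely computed as here.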

Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning about Moving Objects

no code implementations • 3 Dec 2017 • Jakob Suchan, Mehul Bhatt, Przemysław Wałęga, Carl Schultz

We present the formal framework, its general implementation as a (declarative) method in answer set programming, and an example application and evaluation based on two diverse video datasets: the MOTChallenge benchmark developed by the vision community, and a recently developed Movie Dataset.

Object Tracking
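The high-level abduction idea above — positing hypotheses (such as occlusion) that explain gaps in observed object tracks — can be caricatured in a few lines of Python. This is a toy analogue only: the paper's actual method is a declarative answer-set-programming encoding, and `explain_gap`, its boundary thresholds, and the hypothesis labels are invented for illustration.

```python
# Toy sketch only: abduction-by-enumeration for a 1D track,
# not the paper's ASP-based implementation.

def explain_gap(track, scene_width):
    """Return plausible hypotheses for frames where a tracked object vanished.

    track: list of x-positions, with None where the object is missing.
    scene_width: width of the image in pixels (hypothetical parameter).
    """
    hypotheses = []
    last_seen = None
    for frame, obs in enumerate(track):
        if obs is not None:
            last_seen = obs
        elif last_seen is not None:
            # Hypothesis 1: the object left through a frame boundary.
            if last_seen < 0.1 * scene_width or last_seen > 0.9 * scene_width:
                hypotheses.append((frame, "exited_scene"))
            # Hypothesis 2: the object is occluded inside the scene.
            else:
                hypotheses.append((frame, "occluded"))
    return hypotheses
```

For example, an object last seen near the image centre yields an occlusion hypothesis for each missing frame, whereas one last seen at the border is abduced to have left the scene.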

Deep Semantic Abstractions of Everyday Human Activities: On Commonsense Representations of Human Interactions

no code implementations • 10 Oct 2017 • Jakob Suchan, Mehul Bhatt

We propose a deep semantic characterization of space and motion categorically from the viewpoint of grounding embodied human-object interactions.

Human-Object Interaction Detection · Object · +1

Commonsense Scene Semantics for Cognitive Robotics: Towards Grounding Embodied Visuo-Locomotive Interactions

no code implementations • 15 Sep 2017 • Jakob Suchan, Mehul Bhatt

We present a commonsense, qualitative model for the semantic grounding of embodied visuo-spatial and locomotive interactions.

Deeply Semantic Inductive Spatio-Temporal Learning

no code implementations • 9 Aug 2016 • Jakob Suchan, Mehul Bhatt, Carl Schultz

We present an inductive spatio-temporal learning framework rooted in inductive logic programming.

Inductive logic programming
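The inductive flavour of the framework above — generalising qualitative spatio-temporal hypotheses from labelled examples — can be loosely illustrated with a toy generate-and-test learner. The paper's framework is rooted in inductive logic programming, not in this enumeration; the predicate names and the interval representation here are assumptions made for the example.

```python
# Toy sketch only: a generate-and-test learner over qualitative
# interval predicates, loosely ILP-inspired; not the paper's framework.

def left_of(a, b):
    return a[1] < b[0]          # a ends before b begins

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def inside(a, b):
    return a[0] >= b[0] and a[1] <= b[1]

PREDICATES = [left_of, overlaps, inside]

def induce(positives, negatives):
    """Return names of predicates covering all positive example pairs
    and none of the negative ones. Examples are pairs of (lo, hi) intervals."""
    return [p.__name__ for p in PREDICATES
            if all(p(a, b) for a, b in positives)
            and not any(p(a, b) for a, b in negatives)]
```

A real ILP system searches a structured clause space with background knowledge rather than a fixed predicate list, but the covering criterion — consistent with all positives, none of the negatives — is the same in spirit.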

Grounding Dynamic Spatial Relations for Embodied (Robot) Interaction

no code implementations • 26 Jul 2016 • Michael Spranger, Jakob Suchan, Mehul Bhatt, Manfred Eppe

This paper presents a computational model of the processing of dynamic spatial relations occurring in an embodied robotic interaction setup.

Robust Natural Language Processing - Combining Reasoning, Cognitive Semantics and Construction Grammar for Spatial Language

no code implementations • 20 Jul 2016 • Michael Spranger, Jakob Suchan, Mehul Bhatt

We present a system for the generation and understanding of dynamic and static spatial relations in robotic interaction setups.

Talking about the Moving Image: A Declarative Model for Image Schema Based Embodied Perception Grounding and Language Generation

no code implementations • 13 Aug 2015 • Jakob Suchan, Mehul Bhatt, Harshita Jhavar

We present a general theory and corresponding declarative model for the embodied grounding and natural language based analytical summarisation of dynamic visuo-spatial imagery.

Text Generation

Cognitive Interpretation of Everyday Activities: Toward Perceptual Narrative Based Visuo-Spatial Scene Interpretation

no code implementations • 22 Jun 2013 • Mehul Bhatt, Jakob Suchan, Carl Schultz

We position a narrative-centred computational model for high-level knowledge representation and reasoning in the context of a range of assistive technologies concerned with "visuo-spatial perception and cognition" tasks.

Position · Scene Understanding

ROTUNDE - A Smart Meeting Cinematography Initiative: Tools, Datasets, and Benchmarks for Cognitive Interpretation and Control

no code implementations • 5 Jun 2013 • Mehul Bhatt, Jakob Suchan, Christian Freksa

ROTUNDE, an instance of the smart meeting cinematography concept, aims to: (a) develop functionality-driven benchmarks with respect to the interpretation and control capabilities of human cinematographers, real-time video editors, surveillance personnel, and typical human performance in everyday situations; and (b) develop general tools for the commonsense cognitive interpretation of dynamic scenes from the viewpoint of visuo-spatial cognition centred perceptual narrativisation.
