Search Results for author: Masood Dehghan

Found 5 papers, 4 papers with code

A Geometric Perspective on Visual Imitation Learning

no code implementations • 5 Mar 2020 • Jun Jin, Laura Petrich, Masood Dehghan, Martin Jagersand

We consider the problem of visual imitation learning without human supervision (e.g., kinesthetic teaching or teleoperation) and without access to an interactive reinforcement learning (RL) training environment.

Imitation Learning

Understanding Contexts Inside Robot and Human Manipulation Tasks through a Vision-Language Model and Ontology System in a Video Stream

1 code implementation • 2 Mar 2020 • Chen Jiang, Masood Dehghan, Martin Jagersand

In this paper, to model the intended concepts of manipulation, we present a vision dataset under a strictly constrained knowledge domain covering both robot and human manipulations, in which manipulation concepts and relations are stored taxonomically by an ontology system.

Language Modelling
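The abstract above describes storing manipulation concepts and relations taxonomically in an ontology system. A minimal sketch of such a taxonomic store is below; the concept names and the single is-a relation are illustrative assumptions, not the paper's actual knowledge domain or implementation.

```python
# Minimal sketch of a taxonomic ontology for manipulation concepts.
# Concept names ("Action", "Grasp", "PinchGrasp") are hypothetical
# examples, not taken from the paper's dataset.

class Ontology:
    def __init__(self):
        self.parents = {}  # concept -> parent concept (is-a relation)

    def add(self, concept, parent=None):
        self.parents[concept] = parent

    def ancestors(self, concept):
        """Walk the is-a chain from a concept up to the root."""
        chain = []
        node = self.parents.get(concept)
        while node is not None:
            chain.append(node)
            node = self.parents.get(node)
        return chain

    def is_a(self, concept, category):
        """True if `category` appears anywhere above `concept` in the taxonomy."""
        return category in self.ancestors(concept)

onto = Ontology()
onto.add("Action")
onto.add("Grasp", parent="Action")
onto.add("PinchGrasp", parent="Grasp")

print(onto.ancestors("PinchGrasp"))  # ['Grasp', 'Action']
print(onto.is_a("PinchGrasp", "Action"))  # True
```

A single-parent taxonomy like this supports the kind of category queries (e.g., "is this observed concept a manipulation action?") that a vision-language pipeline can issue against a constrained knowledge domain.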

Robot eye-hand coordination learning by watching human demonstrations: a task function approximation approach

1 code implementation • 29 Sep 2018 • Jun Jin, Laura Petrich, Masood Dehghan, Zichen Zhang, Martin Jagersand

Our proposed method can directly learn from raw videos, which removes the need for hand-engineered task specification.


Real-Time Salient Closed Boundary Tracking via Line Segments Perceptual Grouping

2 code implementations • 30 Apr 2017 • Xuebin Qin, Shida He, Camilo Perez Quintero, Abhineet Singh, Masood Dehghan, Martin Jagersand

The tracking scheme is coherently integrated into a perceptual grouping framework: the visual tracking problem is tackled by identifying a subset of the detected line segments and connecting them sequentially to form a closed boundary with the largest saliency and a certain similarity to the previous boundary.

Line Detection • Visual Tracking
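The abstract above groups line segments by connecting them sequentially into a closed boundary scored by saliency. The toy sketch below illustrates only the chaining idea: segments are ordered greedily by nearest-endpoint matching and the chain is scored by a crude saliency proxy (total boundary length). The paper's actual method optimizes over segment subsets with a richer saliency and a similarity term to the previous boundary; everything here is an illustrative assumption.

```python
import math

def dist(p, q):
    """Euclidean distance between 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def chain_segments(segments):
    """Greedily order segments so each starts near the previous one's end.

    This nearest-endpoint chaining is a hypothetical stand-in for the
    paper's subset selection and sequential connection step.
    """
    remaining = list(segments)
    path = [remaining.pop(0)]
    while remaining:
        tail = path[-1][1]
        # Pick the segment whose nearer endpoint is closest to the chain's tail.
        best = min(remaining,
                   key=lambda s: min(dist(tail, s[0]), dist(tail, s[1])))
        remaining.remove(best)
        if dist(tail, best[1]) < dist(tail, best[0]):
            best = (best[1], best[0])  # flip so it starts at the near end
        path.append(best)
    return path

def saliency(path):
    """Crude saliency proxy: total length of the chained segments."""
    return sum(dist(a, b) for a, b in path)

# Four segments roughly tracing a unit square, given out of order.
segs = [((0, 0), (1, 0)), ((1, 1), (0, 1)), ((1, 0), (1, 1)), ((0, 1), (0, 0))]
boundary = chain_segments(segs)
print(round(saliency(boundary), 2))  # 4.0 (perimeter of the unit square)
```

In a tracker, a chain like this would be re-scored each frame against the previous boundary so the selected contour stays both salient and temporally consistent.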
