Complementary Visual Neuronal Systems Model for Collision Sensing

11 Jun 2020  ·  Qinbing Fu, Shigang Yue

Inspired by insects' visual brains, this paper presents an original complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion-sensitive neurons have been studied intensively: the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies. The LGMDs are specifically selective to objects approaching in depth that threaten collision, whilst the LPTCs are only sensitive to objects translating in horizontal and vertical directions. Although each has been modelled and applied in various visual scenes, including robot scenarios, little work has investigated their complementary functionality and selectivity when operating together. To fill this gap, we introduce a hybrid model combining two LGMDs (LGMD-1 and LGMD-2) with horizontally sensitive LPTCs (rightward-tuned LPTC-R and leftward-tuned LPTC-L), specialising in fast collision perception. Through coordination and competition between differently activated neurons, the proximity feature evoked by frontal approaching stimuli is sharpened by suppressing responses to translating and receding motion. The proposed method has been implemented on ground micro-mobile robots as embedded systems. Multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-neuron-type computation methods against translating interference.
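The competition stage described above can be illustrated with a minimal sketch. Note that the function name, the arbitration rule (translation cue subtracted from the looming cue), and all parameter values below are illustrative assumptions for exposition, not the authors' implementation:

```python
# Hypothetical sketch of the coordination/competition stage: two
# looming-sensitive LGMD outputs compete with two horizontally tuned
# LPTC outputs, so that translating motion suppresses the collision
# signal. All names, values, and the arbitration rule are assumptions.

def collision_alert(lgmd1, lgmd2, lptc_r, lptc_l,
                    threshold=0.5, suppression=1.0):
    """Combine neuron activations into a binary collision decision.

    lgmd1, lgmd2   -- looming-sensitive responses in [0, 1]
    lptc_r, lptc_l -- rightward/leftward translation responses in [0, 1]
    """
    looming = max(lgmd1, lgmd2)          # strongest approach cue
    translating = abs(lptc_r - lptc_l)   # net horizontal translation cue
    # competition: translation suppresses the looming signal
    score = looming - suppression * translating
    return score > threshold

# Frontal approach: both LGMDs high, LPTCs balanced -> alert
print(collision_alert(0.9, 0.8, 0.2, 0.2))  # True
# Pure rightward translation: LPTC-R dominates -> suppressed
print(collision_alert(0.6, 0.5, 0.9, 0.1))  # False
```

The design choice here mirrors the abstract's claim: a frontal approach excites both LGMDs while leaving the two directional LPTCs roughly balanced, whereas a translating object drives one LPTC strongly and thereby vetoes a false collision alert.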
