A Unified Quantitative Model of Vision and Audition

23 Jun 2014 · Peilei Liu, Ting Wang

We put forward a unified quantitative framework of vision and audition, based on existing data and theories. According to this model, the retina is a feedforward network that self-adapts to its inputs during a specific developmental period. Once fully grown, its cells become specialized detectors shaped by the statistics of their stimulus history. The model provides explanations for the perceptual mechanisms of colour, shape, depth and motion. On this basis, we further put forward a bold conjecture that a single ear can detect sound direction. This conjecture is complementary to existing theories and provides better explanations for sound localization.
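
The abstract describes the retina as a feedforward network that adapts to input statistics during a developmental window and then behaves as a bank of fixed, specialized detectors. The paper does not state an explicit learning rule in the abstract, so the following is only a minimal illustrative sketch: it uses an Oja-style Hebbian update as a stand-in for "self-adaptation to stimulus statistics", and all names, parameters, and the toy stimulus ensemble are assumptions.

```python
import numpy as np

# Minimal sketch (assumptions throughout): a feedforward layer whose units adapt
# to input statistics during a fixed "developmental" period and then freeze,
# acting afterwards as specialized detectors. The Oja-style rule and parameters
# are illustrative; the paper does not specify this update equation.

rng = np.random.default_rng(0)

n_inputs, n_units = 16, 4            # toy retinal patch -> detector units (assumed sizes)
W = rng.normal(scale=0.1, size=(n_units, n_inputs))
eta = 0.01                           # learning rate (assumed)
adaptation_steps = 5000              # length of the adaptive period (assumed)

def stimulus():
    """Toy stimulus ensemble: noisy mixtures of a few fixed patterns."""
    basis = np.stack([np.sin(np.linspace(0, (k + 1) * np.pi, n_inputs))
                      for k in range(n_units)])
    coeffs = rng.normal(size=n_units)
    return coeffs @ basis + 0.05 * rng.normal(size=n_inputs)

for t in range(adaptation_steps):
    x = stimulus()
    y = W @ x                        # feedforward response
    # Oja-style Hebbian update: each unit drifts toward a dominant
    # statistical regularity of the inputs it has seen.
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)

# After the adaptive period the weights are frozen: each unit now acts as a
# fixed detector tuned to the statistics of its stimulus history.
x_test = stimulus()
print("detector responses:", W @ x_test)
```

The Hebbian rule is chosen here only because it is the simplest way to make unit tuning depend on stimulus statistics; any rule with that property would serve the same illustrative purpose.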
