Search Results for author: Junpei Zhong

Found 9 papers, 3 papers with code

Anti-aliasing Predictive Coding Network for Future Video Frame Prediction

1 code implementation · 13 Jan 2023 · Chaofan Ling, Weihua Li, Junpei Zhong

Inspired by the predictive coding hypothesis and related works, the whole model is updated through a combination of bottom-up and top-down information flows, which enhances the interaction between different network levels.
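As a loose illustration of that sentence (a toy sketch with invented sizes and update rules, not the released implementation linked to this paper), a predictive-coding-style stack combines a top-down prediction at each level with a bottom-up error that corrects the level above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: sensory input (8), level-1 state (16), level-2 state (32).
d_x, d_1, d_2 = 8, 16, 32
W1 = rng.normal(scale=0.1, size=(d_x, d_1))   # level 1 predicts the input
W2 = rng.normal(scale=0.1, size=(d_1, d_2))   # level 2 predicts level 1
s1, s2 = np.zeros(d_1), np.zeros(d_2)

def pc_step(x, s1, s2, lr=0.1):
    """One update mixing bottom-up error signals with top-down predictions."""
    e1 = x - np.tanh(W1 @ s1)                  # bottom-up error at the input level
    e2 = s1 - np.tanh(W2 @ s2)                 # bottom-up error at level 1
    s1 = s1 + lr * (W1.T @ e1 - e2)            # corrected from below and from above
    s2 = s2 + lr * (W2.T @ e2)                 # corrected from below
    return s1, s2, e1, e2

x = rng.normal(size=d_x)                       # a single "sensory" frame
for _ in range(50):                            # let the two levels interact
    s1, s2, e1, e2 = pc_step(x, s1, s2)
print(float(np.abs(e1).mean()))                # residual prediction error at the input
```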

Pyramidal Predictive Network: A Model for Visual-frame Prediction Based on Predictive Coding Theory

1 code implementation · 15 Aug 2022 · Chaofan Ling, Junpei Zhong, Weihua Li

The update frequency of the neural units at each layer decreases as the network level increases, so that higher-level neurons can capture information over longer time spans.
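To make that scheduling idea concrete (a toy sketch only; the level count, periods, and sizes are assumptions, not the repository's settings), one simple way to realise "higher levels update less often" is to step level k only every 2**k time steps:

```python
import numpy as np

rng = np.random.default_rng(0)

n_levels = 3
dims = [16, 16, 16]
# Hypothetical rule: level k updates every 2**k time steps, so higher levels
# are refreshed less often and therefore summarise longer stretches of input.
periods = [2 ** k for k in range(n_levels)]
states = [np.zeros(d) for d in dims]
W_in = [rng.normal(scale=0.1, size=(d, 16)) for d in dims]
W_rec = [rng.normal(scale=0.1, size=(d, d)) for d in dims]

def step(t, frame):
    bottom_up = frame
    for k in range(n_levels):
        if t % periods[k] == 0:                 # slower clock at higher levels
            states[k] = np.tanh(W_rec[k] @ states[k] + W_in[k] @ bottom_up)
        bottom_up = states[k]                   # feed this level's state upward
    return states

for t in range(8):
    frame = rng.normal(size=16)                 # stand-in for an input frame
    step(t, frame)
    print(t, [t % p == 0 for p in periods])     # level 2 updates only at t=0 and t=4
```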

Towards a self-organizing pre-symbolic neural model representing sensorimotor primitives

no code implementations · 20 Jun 2020 · Junpei Zhong, Angelo Cangelosi, Stefan Wermter

During the learning process of observing sensorimotor primitives, i.e. a set of arm-movement trajectories and the features of the objects they are directed at, the pre-symbolic representation self-organizes in the parametric units.

Encoding Longer-term Contextual Multi-modal Information in a Predictive Coding Model

no code implementations · 17 Apr 2018 · Junpei Zhong, Tetsuya OGATA, Angelo Cangelosi

On the other hand, the incoming sensory information corrects such higher-level predictions of events through novel or surprising signals.

AFA-PredNet: The action modulation within predictive coding

no code implementations · 11 Apr 2018 · Junpei Zhong, Angelo Cangelosi, Xinzheng Zhang, Tetsuya OGATA

The predictive processing (PP) hypothesis states that the predictive inference of our sensorimotor system is encoded implicitly in the regularities between perception and action (a toy sketch of such action-modulated prediction follows below).

Causal Inference
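For illustration only (the gating scheme, names, and shapes below are assumptions, not AFA-PredNet's actual architecture), action modulation can be read as the motor command gating the top-down sensory prediction, so the same internal state yields different predictions for different actions:

```python
import numpy as np

rng = np.random.default_rng(0)

d_state, d_action, d_obs = 32, 4, 16
W_pred = rng.normal(scale=0.1, size=(d_obs, d_state))   # top-down prediction weights
W_gate = rng.normal(scale=0.1, size=(d_obs, d_action))  # action-dependent gate weights

def predict(state, action):
    """Sensory prediction modulated (gated) by the current motor command."""
    gate = 1.0 / (1.0 + np.exp(-W_gate @ action))        # sigmoid gate in [0, 1]
    return gate * np.tanh(W_pred @ state)                # same state, action-specific output

state = rng.normal(size=d_state)
reach = np.array([1.0, 0.0, 0.0, 0.0])                   # hypothetical action codes
grasp = np.array([0.0, 1.0, 0.0, 0.0])
# Mean difference between the two action-conditioned predictions:
print(float(np.abs(predict(state, reach) - predict(state, grasp)).mean()))
```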

Toward Abstraction from Multi-modal Data: Empirical Studies on Multiple Time-scale Recurrent Models

no code implementations · 7 Feb 2017 · Junpei Zhong, Angelo Cangelosi, Tetsuya OGATA

This was done by conducting two studies, one on a smaller dataset (two-dimensional time sequences from non-linear functions) and one on a relatively large dataset (43-dimensional time sequences from iCub manipulation tasks with multi-modal data); a sketch of the underlying multiple time-scale recurrence follows below.

Robot Manipulation · Text Generation
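The multiple time-scale ingredient these studies build on can be sketched with leaky-integrator units whose time constants differ per group (the constants, sizes, and toy input below are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups of recurrent units with different time constants: "fast" units
# track quick changes, "slow" units integrate over longer stretches of input.
taus = np.array([2.0] * 20 + [30.0] * 10)        # per-unit time constants
n = taus.size
W = rng.normal(scale=0.1, size=(n, n))           # recurrent weights
W_in = rng.normal(scale=0.1, size=(n, 2))        # two-dimensional input sequence
u = np.zeros(n)                                   # internal (pre-activation) state

def mtrnn_step(u, x):
    """Leaky-integrator update: u_next = u + (-u + W h + W_in x) / tau."""
    h = np.tanh(u)
    return u + (-u + W @ h + W_in @ x) / taus

for t in range(100):
    x = np.array([np.sin(0.3 * t), np.cos(0.3 * t)])   # toy 2-D input signal
    u = mtrnn_step(u, x)
print(np.tanh(u)[:3], np.tanh(u)[-3:])            # fast vs. slow unit activations
```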

Sensorimotor Input as a Language Generalisation Tool: A Neurorobotics Model for Generation and Generalisation of Noun-Verb Combinations with Sensorimotor Inputs

no code implementations · 11 May 2016 · Junpei Zhong, Martin Peniak, Jun Tani, Tetsuya OGATA, Angelo Cangelosi

The paper presents a neurorobotics cognitive model to explain the understanding and generalisation of noun-verb combinations when a vocal command consisting of a verb-noun sentence is given to a humanoid robot.

Language Acquisition · Sentence

A Hierarchical Emotion Regulated Sensorimotor Model: Case Studies

no code implementations · 11 May 2016 · Junpei Zhong, Rony Novianto, Mingjun Dai, Xinzheng Zhang, Angelo Cangelosi

Inspired by the hierarchical cognitive architecture and the perception-action model (PAM), we propose that the internal status acts as a kind of common-coding representation which affects, mediates and even regulates the sensorimotor behaviours.
