Search Results for author: Yinfeng Yu

Found 4 papers, 1 paper with code

MFHCA: Enhancing Speech Emotion Recognition Via Multi-Spatial Fusion and Hierarchical Cooperative Attention

no code implementations 21 Apr 2024 Xinxin Jiao, Liejun Wang, Yinfeng Yu

This paper introduces MFHCA, a novel method for Speech Emotion Recognition using Multi-Spatial Fusion and Hierarchical Cooperative Attention on spectrograms and raw audio.
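No implementation is listed for this entry, so the following is only a rough illustrative sketch, not the authors' code: it encodes a spectrogram stream and a raw-audio stream separately and lets each attend to the other with cross-attention, loosely echoing the "multi-spatial fusion" and "cooperative attention" ideas in the abstract. All module names, feature sizes, and the four-class emotion head are assumptions made for the example.

```python
# Hypothetical sketch (not the authors' MFHCA code): fuse spectrogram and
# raw-audio features with bidirectional cross-attention and classify emotion.
# All names, dimensions, and the 4-class head are illustrative assumptions.
import torch
import torch.nn as nn


class ToyCooperativeFusion(nn.Module):
    def __init__(self, dim=128, heads=4, num_emotions=4):
        super().__init__()
        # Separate encoders for the two input "spaces".
        self.spec_encoder = nn.Sequential(nn.Linear(64, dim), nn.ReLU())   # mel-spectrogram frames
        self.wave_encoder = nn.Sequential(nn.Linear(400, dim), nn.ReLU())  # raw-audio chunks
        # Each stream attends to the other ("cooperative" cross-attention).
        self.spec_to_wave = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.wave_to_spec = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_emotions)

    def forward(self, spec, wave):
        # spec: (B, T1, 64) spectrogram frames; wave: (B, T2, 400) raw-audio chunks
        s, w = self.spec_encoder(spec), self.wave_encoder(wave)
        s_att, _ = self.spec_to_wave(s, w, w)   # spectrogram queries raw-audio keys/values
        w_att, _ = self.wave_to_spec(w, s, s)   # and vice versa
        fused = torch.cat([s_att.mean(1), w_att.mean(1)], dim=-1)  # pool over time
        return self.classifier(fused)


if __name__ == "__main__":
    model = ToyCooperativeFusion()
    logits = model(torch.randn(2, 100, 64), torch.randn(2, 40, 400))
    print(logits.shape)  # torch.Size([2, 4])
```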

Pay Self-Attention to Audio-Visual Navigation

no code implementations 4 Oct 2022 Yinfeng Yu, Lele Cao, Fuchun Sun, Xiaohong Liu, Liejun Wang

Audio-visual embodied navigation, a hot research topic, aims to train a robot to reach an audio target using egocentric visual input (from the sensors mounted on the robot) and audio input (emitted from the target).

Visual Navigation
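This entry also lists no code, so the sketch below is only a hypothetical illustration of the setup the abstract describes: an egocentric RGB frame and a binaural audio spectrogram are encoded separately, a self-attention layer mixes the two modality tokens (echoing the title), and the result is mapped to action logits. The input shapes, CNN encoders, and four-action space are assumptions for the example, not the paper's architecture.

```python
# Hypothetical sketch (not the authors' code): an audio-visual navigation
# policy that encodes an egocentric RGB frame and a binaural spectrogram,
# applies self-attention over the two modality tokens, and outputs action
# logits. Shapes, encoders, and the 4-action space are illustrative assumptions.
import torch
import torch.nn as nn


class ToyAudioVisualPolicy(nn.Module):
    def __init__(self, dim=128, num_actions=4):
        super().__init__()
        self.visual = nn.Sequential(                      # egocentric RGB encoder
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(dim),
        )
        self.audio = nn.Sequential(                       # binaural spectrogram encoder
            nn.Conv2d(2, 16, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(dim),
        )
        self.self_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.policy = nn.Linear(dim, num_actions)         # e.g. forward/left/right/stop

    def forward(self, rgb, spec):
        # rgb: (B, 3, 84, 84) camera frame; spec: (B, 2, 65, 26) audio spectrogram
        tokens = torch.stack([self.visual(rgb), self.audio(spec)], dim=1)  # (B, 2, dim)
        fused, _ = self.self_attn(tokens, tokens, tokens)                  # modalities attend to each other
        return self.policy(fused.mean(dim=1))                              # action logits


if __name__ == "__main__":
    policy = ToyAudioVisualPolicy()
    logits = policy(torch.randn(1, 3, 84, 84), torch.randn(1, 2, 65, 26))
    print(logits.shape)  # torch.Size([1, 4])
```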

Sound Adversarial Audio-Visual Navigation

1 code implementation ICLR 2022 Yinfeng Yu, Wenbing Huang, Fuchun Sun, Changan Chen, Yikai Wang, Xiaohong Liu

In this work, we design an acoustically complex environment in which, besides the target sound, there exists a sound attacker playing a zero-sum game with the agent.

Navigate, Visual Navigation
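For the Sound Adversarial Audio-Visual Navigation entry above, the released ICLR 2022 implementation should be consulted for the actual method; the toy loop below only illustrates the zero-sum wiring the abstract describes, with a made-up stateless payoff and REINFORCE-style updates. The key point is the sign flip: whatever reward the navigating agent earns, the sound attacker receives its negation.

```python
# Hypothetical sketch (not the released code): a toy zero-sum game between a
# navigator and a sound attacker. The payoff function, action sets, and
# REINFORCE-style updates are illustrative assumptions only.
import torch
import torch.nn as nn

torch.manual_seed(0)

navigator = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 3))  # 3 nav actions
attacker = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 3))   # 3 volume levels
opt_nav = torch.optim.Adam(navigator.parameters(), lr=1e-2)
opt_att = torch.optim.Adam(attacker.parameters(), lr=1e-2)


def toy_payoff(nav_action, att_action):
    # Navigator succeeds with action 0 unless the attacker raises the volume (action 2).
    return 1.0 if (nav_action == 0 and att_action != 2) else -1.0


for step in range(200):
    obs = torch.randn(1, 4)                                  # toy observation
    nav_dist = torch.distributions.Categorical(logits=navigator(obs))
    att_dist = torch.distributions.Categorical(logits=attacker(obs))
    a_nav, a_att = nav_dist.sample(), att_dist.sample()
    r = toy_payoff(a_nav.item(), a_att.item())

    # Zero-sum: the attacker's reward is the negative of the navigator's.
    loss_nav = -(nav_dist.log_prob(a_nav) * r).mean()
    loss_att = -(att_dist.log_prob(a_att) * (-r)).mean()
    opt_nav.zero_grad()
    loss_nav.backward()
    opt_nav.step()
    opt_att.zero_grad()
    loss_att.backward()
    opt_att.step()
```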
