no code implementations • 22 May 2023 • J. Li, Z. Duan, S. Li, X. Yu, G. Yang
In this paper, an Enhanced Self-Attention (ESA) mechanism is proposed for robust feature extraction. The ESA integrates recursive gated convolution with the self-attention mechanism: the former captures multi-order feature interactions, while the latter performs global feature extraction. In addition, the question of where in the network the ESA block is best inserted is explored. Here, the ESA is embedded into the encoder layer of a Transformer network for automatic speech recognition (ASR), and the resulting model is named GNCformer.
Automatic Speech Recognition (ASR) +1
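A minimal sketch of the encoder-layer idea described above: a recursive gated convolution (gnConv-style, for multi-order local interactions) composed with standard multi-head self-attention (for global context) inside one Transformer encoder layer. The module names, the interaction order of 3, the depthwise kernel size, the pre-norm layout, and the composition order are illustrative assumptions, not the authors' exact GNCformer configuration.

```python
import torch
import torch.nn as nn


class RecursiveGatedConv1d(nn.Module):
    """gnConv-style recursive gated convolution over the time axis."""

    def __init__(self, dim: int, order: int = 3, kernel_size: int = 7):
        super().__init__()
        self.order = order
        self.dims = [dim // 2 ** i for i in range(order)]  # e.g. [d, d/2, d/4]
        self.dims.reverse()                                # lowest order first
        self.proj_in = nn.Conv1d(dim, 2 * dim, 1)
        self.dwconv = nn.Conv1d(sum(self.dims), sum(self.dims), kernel_size,
                                padding=kernel_size // 2, groups=sum(self.dims))
        self.pws = nn.ModuleList(
            nn.Conv1d(self.dims[i], self.dims[i + 1], 1)
            for i in range(order - 1)
        )
        self.proj_out = nn.Conv1d(dim, dim, 1)

    def forward(self, x):                        # x: (batch, dim, time)
        gate, feats = torch.split(self.proj_in(x),
                                  (self.dims[0], sum(self.dims)), dim=1)
        feats = torch.split(self.dwconv(feats), self.dims, dim=1)
        y = gate * feats[0]                      # first-order interaction
        for i in range(self.order - 1):          # raise the interaction order
            y = self.pws[i](y) * feats[i + 1]
        return self.proj_out(y)


class ESAEncoderLayer(nn.Module):
    """Transformer encoder layer with an ESA-style block: recursive gated
    convolution (multi-order, local) followed by self-attention (global)."""

    def __init__(self, dim: int = 256, heads: int = 4, ffn_mult: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.gnconv = RecursiveGatedConv1d(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, ffn_mult * dim), nn.GELU(),
                                 nn.Linear(ffn_mult * dim, dim))

    def forward(self, x):                        # x: (batch, time, dim)
        h = self.norm1(x).transpose(1, 2)        # conv expects (B, C, T)
        x = x + self.gnconv(h).transpose(1, 2)   # multi-order interactions
        h = self.norm2(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # global context
        return x + self.ffn(self.norm3(x))


# Quick shape check on a dummy feature sequence.
if __name__ == "__main__":
    layer = ESAEncoderLayer(dim=256)
    out = layer(torch.randn(2, 100, 256))        # (batch, frames, features)
    print(out.shape)                             # torch.Size([2, 100, 256])
```

Placing the gated convolution before self-attention reflects the abstract's framing (local multi-order interactions feeding a global extractor); whether the two are run sequentially or fused differently inside the actual GNCformer encoder is not specified here.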
no code implementations • 10 Jul 2021 • Q. Huang, C. Wu, S. Hou, H. Sun, K. Yao, J. Law, M. Yang, A. L. R. Vellaisamy, X. Yu, H. Y. Chan, L. Lao, Y. Sun, W. J. Li
In data from 17 volunteers, auricular region-specific AESR changes after cycling exercise were observed in 98% of the tests and validated via machine learning techniques.