Large Model Empowered Streaming Speech Semantic Communications

10 Jan 2025  ·  Zhenzi Weng, Zhijin Qin, Geoffrey Ye Li

In this paper, we introduce a large model-empowered streaming semantic communication system for speech transmission across various languages, named LSSC-ST. Specifically, we devise an edge-device collaborative semantic communication architecture that offloads the intricate semantic extraction and channel coding modules to edge servers, thereby reducing the computational burden on local devices. To support multilingual speech transmission, pre-trained large speech models are utilized to learn unified semantic features from speech in different languages, removing the constraint of a single input language and enhancing the practicality of the LSSC-ST. Moreover, the input speech is sequentially streamed into the developed system as short speech segments, which enables low transmission latency without degrading the quality of the produced speech. A novel dynamic speech segmentation algorithm is proposed to further reduce the transmission latency by adaptively adjusting the duration of speech segments. According to simulation results, the LSSC-ST provides more accurate speech transmission and operates in a streaming manner with lower latency than existing non-streaming semantic communication systems.
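The abstract does not detail the dynamic speech segmentation algorithm, but the idea of adaptively adjusting segment duration can be illustrated with a minimal sketch. The version below is an assumption for illustration only: it cuts a waveform at low-energy frames once a minimum duration has elapsed, falling back to a hard maximum duration, so that segment boundaries tend to land in pauses rather than mid-word. All parameter names and thresholds are hypothetical, not taken from the paper.

```python
import numpy as np

def dynamic_segments(wave, sr, min_s=0.2, max_s=1.0,
                     energy_thresh=0.01, frame_s=0.02):
    """Illustrative adaptive segmentation (hypothetical parameters;
    not the paper's actual algorithm): cut at a quiet frame after
    min_s seconds, or unconditionally at max_s seconds."""
    frame = int(frame_s * sr)                     # samples per analysis frame
    min_f, max_f = int(min_s / frame_s), int(max_s / frame_s)
    n_frames = len(wave) // frame
    # mean-squared energy of each frame
    energies = [np.mean(wave[i * frame:(i + 1) * frame] ** 2)
                for i in range(n_frames)]
    segments, start = [], 0
    for i in range(n_frames):
        length = i - start + 1
        # cut when a quiet frame appears after the minimum duration,
        # or when the maximum duration is reached
        if (length >= min_f and energies[i] < energy_thresh) or length >= max_f:
            segments.append(wave[start * frame:(i + 1) * frame])
            start = i + 1
    if start * frame < len(wave):                 # flush any trailing samples
        segments.append(wave[start * frame:])
    return segments
```

Under this sketch, a long pause in the input yields a shorter segment that can be transmitted immediately, while continuous speech is bounded by `max_s`, which is one plausible way segment duration could trade off latency against speech quality.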
