1 code implementation • 1 May 2024 • Yu Cui, Feng Liu, Pengbo Wang, Bohao Wang, Heng Tang, Yi Wan, Jun Wang, Jiawei Chen
Owing to their powerful semantic reasoning capabilities, Large Language Models (LLMs) have been effectively utilized as recommenders, achieving impressive performance.
no code implementations • 13 Dec 2022 • Lizhao You, Zhirong Tang, Pengbo Wang, Zhaorui Wang, Haipeng Dai, Liqun Fu
Trace-driven simulation results show that the symbol demodulation algorithm outperforms the state-of-the-art MPR decoder by 5.3$\times$ in terms of physical-layer throughput, and the soft decoder is more robust to the unavoidable adverse phase misalignment and estimation error encountered in practice.
no code implementations • 29 Jul 2022 • Hongjiu Yu, Qiancheng Sun, Jin Hu, Xingyuan Xue, Jixiang Luo, Dailan He, Yilong Li, Pengbo Wang, Yuanyuan Wang, Yaxu Dai, Yan Wang, Hongwei Qin
On CPU, the latency of our implementation is comparable with JPEG XL.