no code implementations • 3 Jan 2025 • Heqiang Wang, Xiaoxiong Zhong, Kang Liu, Fangming Liu, Weizhe Zhang
As the computational capabilities of edge devices such as intelligent sensors in the Industrial Internet of Things continue to improve, these sensors are no longer limited to data collection and can increasingly perform complex computational tasks.
no code implementations • 18 Dec 2024 • Yifan Lu, Yigeng Zhou, Jing Li, Yequan Wang, Xuebo Liu, Daojing He, Fangming Liu, Min Zhang
Multi-hop question answering (MHQA) poses a significant challenge for large language models (LLMs) due to the extensive knowledge demands involved.
no code implementations • 2 Dec 2024 • Xiang Li, Yucheng Zhou, Laiping Zhao, Jing Li, Fangming Liu
Moreover, we propose a detection framework tailored to this problem, which employs context augmentation modeling and multi-round iterative training.
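As a rough illustration only (not the paper's implementation), the sketch below shows one way multi-round iterative training with context augmentation could be organized; the helper names and the scikit-learn components are assumptions.

```python
# Hypothetical sketch: multi-round iterative training with context augmentation.
# augment_with_context, the context_pool heuristic, and the TF-IDF classifier
# are illustrative assumptions, not the authors' actual framework.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def augment_with_context(samples, context_pool):
    # Append surrounding context to each sample (assumed augmentation step).
    return [f"{text} [CTX] {context_pool.get(i, '')}" for i, text in enumerate(samples)]

def iterative_train(samples, labels, context_pool, rounds=3):
    model = None
    for _ in range(rounds):
        augmented = augment_with_context(samples, context_pool)
        vectorizer = TfidfVectorizer()
        features = vectorizer.fit_transform(augmented)
        model = LogisticRegression(max_iter=1000).fit(features, labels)
        # Use current-round predictions to refresh the context pool (assumed heuristic).
        preds = model.predict(features)
        context_pool = {i: f"pred={p}" for i, p in enumerate(preds)}
    return model
```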
1 code implementation • 16 Oct 2024 • Yaxi Lu, Shenzhi Yang, Cheng Qian, Guirong Chen, Qinyu Luo, Yesai Wu, Huadong Wang, Xin Cong, Zhong Zhang, Yankai Lin, Weiwen Liu, Yasheng Wang, Zhiyuan Liu, Fangming Liu, Maosong Sun
The labeled data is used to train a reward model that simulates human judgment and serves as an automatic evaluator of the proactiveness of LLM agents.
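A minimal sketch of this idea, assuming a simple TF-IDF plus logistic-regression reward model rather than the authors' actual architecture; the trace format and feature choices are illustrative.

```python
# Hypothetical sketch: fit a reward model on human-labeled proactiveness
# judgments, then use it to score new agent actions automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_reward_model(agent_traces, human_labels):
    """agent_traces: list of (context, action) text pairs;
    human_labels: 1 = judged appropriately proactive, 0 = not."""
    texts = [f"{ctx} [SEP] {act}" for ctx, act in agent_traces]
    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(texts)
    reward_model = LogisticRegression(max_iter=1000).fit(features, human_labels)
    return vectorizer, reward_model

def score_proactiveness(vectorizer, reward_model, context, action):
    # Probability that a human annotator would judge the action proactive.
    feats = vectorizer.transform([f"{context} [SEP] {action}"])
    return reward_model.predict_proba(feats)[0, 1]
```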
1 code implementation • 24 Sep 2024 • Zhenyan Lu, Xiang Li, Dongqi Cai, Rongjie Yi, Fangming Liu, Xiwen Zhang, Nicholas D. Lane, Mengwei Xu
Small language models (SLMs), despite their widespread adoption in modern smart devices, have received significantly less academic attention than their large language model (LLM) counterparts, which are predominantly deployed in data centers and cloud environments.
no code implementations • 20 Aug 2024 • Junming Wang, Xiuxian Guan, Zekai Sun, Tianxiang Shen, Dong Huang, Fangming Liu, Heming Cui
These blocks efficiently extract semantic and geometric features in 3D environments with linear complexity, enabling the network to capture long-range dependencies and improve prediction accuracy.
no code implementations • 7 May 2024 • Guanqiao Qu, Zheng Lin, Fangming Liu, Xianhao Chen, Kaibin Huang
To this end, we formulate a parameter-sharing model placement problem to maximize the cache hit ratio in multi-edge wireless networks by balancing the fundamental tradeoff between storage efficiency and service latency.
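For illustration, a possible greedy heuristic for parameter-sharing placement at a single edge node is sketched below; the unit-size parameter blocks, request rates, and capacity model are assumptions, not the paper's formulation.

```python
# Hypothetical sketch: greedy parameter-sharing model placement.
# Models that share parameter blocks incur less marginal storage when
# cached together; blocks are assumed to be unit-size for simplicity.

def greedy_placement(models, edge_capacity):
    """models: dict name -> {"blocks": set of shared parameter-block ids,
    "requests": expected request rate}.
    Returns the list of models to cache at one edge node."""
    cached_blocks, placed, used = set(), [], 0
    remaining = dict(models)
    while remaining:
        # Marginal storage cost = blocks not already cached via sharing.
        def gain(name):
            extra = len(remaining[name]["blocks"] - cached_blocks)
            return remaining[name]["requests"] / max(extra, 1)
        best = max(remaining, key=gain)
        extra_blocks = remaining[best]["blocks"] - cached_blocks
        if used + len(extra_blocks) > edge_capacity:
            del remaining[best]   # does not fit; try the next candidate
            continue
        cached_blocks |= extra_blocks
        used += len(extra_blocks)
        placed.append(best)
        del remaining[best]
    return placed
```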
1 code implementation • 16 Dec 2023 • Aodong Chen, Fei Xu, Li Han, Yuan Dong, Li Chen, Zhi Zhou, Fangming Liu
GPUs have become the de facto hardware for accelerating Deep Neural Network (DNN) inference workloads.
no code implementations • 6 Jul 2021 • Zimu Zheng, Qiong Chen, Chuang Hu, Dan Wang, Fangming Liu
We then show that task allocation with task importance for MTL (TATIM) is a variant of the NP-complete Knapsack problem, whose computationally expensive solution must be recomputed repeatedly under varying contexts.
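A plain 0/1 knapsack dynamic program captures the combinatorial structure referred to here (task importance as value, resource demand as weight); it is only a sketch of the underlying problem, not the TATIM solution itself.

```python
# Hypothetical sketch: 0/1 knapsack dynamic program as a stand-in for
# importance-aware task allocation under a resource budget. It illustrates
# why re-solving under every new context is costly.

def allocate_tasks(importances, demands, budget):
    """Select tasks maximizing total importance subject to a resource budget."""
    n = len(importances)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]  # skip task i-1
            if demands[i - 1] <= b:      # or take it if it fits
                best[i][b] = max(best[i][b],
                                 best[i - 1][b - demands[i - 1]] + importances[i - 1])
    return best[n][budget]

# Example: three tasks with importances 6/10/12, demands 1/2/3, budget 5 units.
print(allocate_tasks([6, 10, 12], [1, 2, 3], 5))  # -> 22
```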