Search Results for author: Fangming Liu

Found 9 papers, 3 papers with code

Denoising and Adaptive Online Vertical Federated Learning for Sequential Multi-Sensor Data in Industrial Internet of Things

no code implementations • 3 Jan 2025 • Heqiang Wang, Xiaoxiong Zhong, Kang Liu, Fangming Liu, Weizhe Zhang

With the continuous improvement in the computational capabilities of edge devices such as intelligent sensors in the Industrial Internet of Things, these sensors are no longer limited to mere data collection but are increasingly capable of performing complex computational tasks.

Denoising · Vertical Federated Learning

Knowledge Editing with Dynamic Knowledge Graphs for Multi-Hop Question Answering

no code implementations • 18 Dec 2024 • Yifan Lu, Yigeng Zhou, Jing Li, Yequan Wang, Xuebo Liu, Daojing He, Fangming Liu, Min Zhang

Multi-hop question answering (MHQA) poses a significant challenge for large language models (LLMs) due to the extensive knowledge demands involved.

Graph Construction · Knowledge Editing +4

Impromptu Cybercrime Euphemism Detection

no code implementations • 2 Dec 2024 • Xiang Li, Yucheng Zhou, Laiping Zhao, Jing Li, Fangming Liu

Moreover, we propose a detection framework tailored to this problem, which employs context augmentation modeling and multi-round iterative training.
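The snippet names two components, context augmentation and multi-round iterative training. Below is a minimal sketch of how such a loop could be wired; all identifiers, the classifier choice, and the pseudo-labeling rule are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: multi-round iterative training with context
# augmentation for euphemism detection. Names and thresholds are
# illustrative, not the paper's actual framework.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def augment_with_context(sentence, context):
    # Context augmentation: prepend surrounding discourse so the model
    # sees how a candidate euphemism is actually used.
    return f"{context} [SEP] {sentence}"

# (sentence, context, label): 1 = euphemistic use, 0 = literal use
labeled = [("buy some ice", "late-night street deal", 1),
           ("buy some ice", "grocery shopping list", 0)]
unlabeled = [("move the snow tonight", "encrypted chat thread")]

vec = TfidfVectorizer()
for round_idx in range(3):  # multi-round iterative training
    texts = [augment_with_context(s, c) for s, c, _ in labeled]
    X, y = vec.fit_transform(texts), [lab for _, _, lab in labeled]
    clf = LogisticRegression().fit(X, y)
    # Pseudo-label confident unlabeled examples and fold them back in.
    for s, c in list(unlabeled):
        p = clf.predict_proba(vec.transform([augment_with_context(s, c)]))[0, 1]
        if p > 0.9 or p < 0.1:
            labeled.append((s, c, int(p > 0.5)))
            unlabeled.remove((s, c))
```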

Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance

1 code implementation • 16 Oct 2024 • Yaxi Lu, Shenzhi Yang, Cheng Qian, Guirong Chen, Qinyu Luo, Yesai Wu, Huadong Wang, Xin Cong, Zhong Zhang, Yankai Lin, Weiwen Liu, Yasheng Wang, Zhiyuan Liu, Fangming Liu, Maosong Sun

The labeled data is used to train a reward model that simulates human judgment and serves as an automatic evaluator of the proactiveness of LLM agents.
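A reward model trained on human labels and then reused as an automatic judge is a standard pattern; the sketch below illustrates it under stated assumptions. The encoder, label scheme, and proposal examples are placeholders, not the paper's setup.

```python
# Hedged sketch: fit a reward model on human-labeled agent proposals so it
# can score proactiveness automatically. The featurizer and data are toys.
import torch
import torch.nn as nn

def encode(context: str, proposal: str) -> torch.Tensor:
    # Stand-in featurizer; in practice an LM would embed the pair.
    g = torch.Generator().manual_seed(hash((context, proposal)) % (2**31))
    return torch.randn(16, generator=g)

# Human annotations: 1 = proposal judged helpful, 0 = judged intrusive.
data = [("user drafts a report", "offer to fetch cited sources", 1),
        ("user is idle", "interrupt with an unrelated tip", 0)]

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):  # train the model to simulate human judgment
    x = torch.stack([encode(c, p) for c, p, _ in data])
    y = torch.tensor([[float(l)] for _, _, l in data])
    opt.zero_grad()
    loss = loss_fn(reward_model(x), y)
    loss.backward()
    opt.step()

# The trained model now scores new agent proposals as an automatic evaluator.
score = torch.sigmoid(reward_model(encode("user is debugging",
                                          "offer to run the failing test")))
```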

Small Language Models: Survey, Measurements, and Insights

1 code implementation • 24 Sep 2024 • Zhenyan Lu, Xiang Li, Dongqi Cai, Rongjie Yi, Fangming Liu, Xiwen Zhang, Nicholas D. Lane, Mengwei Xu

Small language models (SLMs), despite their widespread adoption in modern smart devices, have received significantly less academic attention compared to their large language model (LLM) counterparts, which are predominantly deployed in data centers and cloud environments.

Benchmarking · Decoder +5

OMEGA: Efficient Occlusion-Aware Navigation for Air-Ground Robot in Dynamic Environments via State Space Model

no code implementations • 20 Aug 2024 • Junming Wang, Xiuxian Guan, Zekai Sun, Tianxiang Shen, Dong Huang, Fangming Liu, Heming Cui

These blocks efficiently extract semantic and geometric features in 3D environments with linear complexity, ensuring that the network can learn long-distance dependencies to improve prediction accuracy.

Disaster Response · Mamba
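The linear complexity claimed for these blocks comes from the sequential scan of a state space model. The sketch below shows a generic diagonal SSM recurrence to make that cost visible; it is not OMEGA's actual block design, and all shapes and constants are illustrative.

```python
# Minimal diagonal state-space scan, illustrating the linear-in-length
# recurrence behind Mamba-style blocks (generic sketch, not OMEGA itself).
import numpy as np

def ssm_scan(u, A, B, C):
    """h_t = A*h_{t-1} + B*u_t ; y_t = C.h_t  (elementwise diagonal A)."""
    h = np.zeros_like(A)
    ys = []
    for u_t in u:                 # one pass over the sequence: O(L * d)
        h = A * h + B * u_t       # state update carries long-range context
        ys.append(C @ h)          # readout mixes the hidden state
    return np.array(ys)

L, d = 512, 8                     # sequence length, state size
u = np.random.randn(L)            # e.g., a stream of 3D feature tokens
A = np.full(d, 0.98)              # slow decay -> long-distance dependencies
B, C = np.ones(d), np.random.randn(d)
y = ssm_scan(u, A, B, C)          # cost grows linearly with L, not L^2
```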

TrimCaching: Parameter-sharing AI Model Caching in Wireless Edge Networks

no code implementations • 7 May 2024 • Guanqiao Qu, Zheng Lin, Fangming Liu, Xianhao Chen, Kaibin Huang

To this end, we formulate a parameter-sharing model placement problem to maximize the cache hit ratio in multi-edge wireless networks by balancing the fundamental tradeoff between storage efficiency and service latency.
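To see why parameter sharing changes the placement problem, note that a shared block is stored once, so caching a model whose blocks are already present is nearly free. The greedy heuristic below is only an illustration of that tradeoff under invented numbers, not the paper's formulation or algorithm.

```python
# Illustrative greedy placement under a storage budget, where shared
# parameter blocks are counted once. All models, sizes, and rates are made up.
models = {  # model -> (request rate, set of parameter-block ids)
    "llm-base": (0.5, {"backbone"}),
    "llm-chat": (0.3, {"backbone", "chat-adapter"}),
    "detector": (0.2, {"vision-backbone"}),
}
block_size = {"backbone": 6.0, "chat-adapter": 0.5, "vision-backbone": 4.0}
budget_gb = 7.0

cached_blocks, hit_ratio = set(), 0.0
remaining = dict(models)
while remaining:
    def extra_gb(m):  # storage actually added, given blocks already cached
        return sum(block_size[b] for b in remaining[m][1] - cached_blocks)
    feasible = [m for m in remaining if extra_gb(m) <= budget_gb]
    if not feasible:
        break
    # Greedy rule: best hit-rate gain per extra GB; reuse of cached blocks
    # costs nothing, which is exactly the parameter-sharing advantage.
    m = max(feasible, key=lambda m: remaining[m][0] / (extra_gb(m) or 1e-9))
    budget_gb -= extra_gb(m)
    cached_blocks |= remaining[m][1]
    hit_ratio += remaining[m][0]
    del remaining[m]
print(f"hit ratio {hit_ratio:.2f}, blocks cached: {sorted(cached_blocks)}")
```

With these numbers the greedy picks llm-base first, then llm-chat almost for free (only the 0.5 GB adapter is new), reaching a 0.80 hit ratio within 7 GB.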

Opara: Exploiting Operator Parallelism for Expediting DNN Inference on GPUs

1 code implementation • 16 Dec 2023 • Aodong Chen, Fei Xu, Li Han, Yuan Dong, Li Chen, Zhi Zhou, Fangming Liu

GPUs have become the de facto hardware devices for accelerating Deep Neural Network (DNN) inference workloads.

Scheduling

On-edge Multi-task Transfer Learning: Model and Practice with Data-driven Task Allocation

no code implementations • 6 Jul 2021 • Zimu Zheng, Qiong Chen, Chuang Hu, Dan Wang, Fangming Liu

We then show that task allocation with task importance for MTL (TATIM) is a variant of the NP-complete Knapsack problem, whose costly solution must be recomputed repeatedly as the operating context varies.

Computational Efficiency · Transfer Learning
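Since the snippet casts TATIM as a Knapsack variant, the standard 0/1 knapsack dynamic program illustrates the core allocation step: select tasks maximizing total importance under a resource budget. The weights and values below are illustrative; TATIM itself must re-solve such instances as contexts change.

```python
# Standard 0/1 knapsack DP as an illustration of importance-aware task
# allocation under a resource budget (toy numbers, not the paper's data).
def knapsack(importance, cost, budget):
    """Max total importance with total cost <= budget."""
    best = [0] * (budget + 1)            # best[b] = max importance at budget b
    for i in range(len(importance)):
        for b in range(budget, cost[i] - 1, -1):  # sweep budget downward
            best[b] = max(best[b], best[b - cost[i]] + importance[i])
    return best[budget]

importance = [6, 10, 12]   # task importance scores
cost = [1, 2, 3]           # edge-resource costs per task
print(knapsack(importance, cost, budget=5))  # -> 22 (takes tasks 2 and 3)
```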
