Search Results for author: Shangguang Wang

Found 20 papers, 8 papers with code

Variational Multi-Modal Hypergraph Attention Network for Multi-Modal Relation Extraction

no code implementations · 18 Apr 2024 · Qian Li, Cheng Ji, Shu Guo, Yong Zhao, Qianren Mao, Shangguang Wang, Yuntao Wei, JianXin Li

Existing methods neglect that multiple entity pairs in one sentence share very similar contextual information (i.e., the same text and image), which increases the difficulty of the MMRE task.

Relation Extraction +1

Towards Effective Next POI Prediction: Spatial and Semantic Augmentation with Remote Sensing Data

no code implementations · 22 Mar 2024 · Nan Jiang, Haitao Yuan, Jianing Si, Minxiao Chen, Shangguang Wang

Next point-of-interest (POI) prediction is a significant task in location-based services, yet its complexity arises from the need to consolidate spatial and semantic intent.

Context-based Fast Recommendation Strategy for Long User Behavior Sequence in Meituan Waimai

no code implementations · 19 Mar 2024 · Zhichao Feng, Junjie Xie, Kaiyuan Li, Yu Qin, Pengfei Wang, Qianzhong Li, Bin Yin, Xiang Li, Wei Lin, Shangguang Wang

We first identify contexts that share similar user preferences with the target context and then locate the corresponding PoIs based on these identified contexts.

Sequential Recommendation
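
The snippet above describes a two-step retrieval: find stored contexts that resemble the target context, then gather the PoIs observed under them. A minimal sketch of that idea, assuming cosine similarity over context embeddings and a hypothetical ctx_to_pois lookup (neither is taken from the paper):

```python
import numpy as np

def retrieve_candidate_pois(target_ctx, ctx_embeddings, ctx_to_pois, top_k=3):
    """target_ctx: (d,) embedding; ctx_embeddings: (n, d); ctx_to_pois: list of PoI-id lists."""
    # Cosine similarity between the target context and all stored contexts.
    sims = ctx_embeddings @ target_ctx
    sims /= (np.linalg.norm(ctx_embeddings, axis=1) * np.linalg.norm(target_ctx) + 1e-8)
    nearest = np.argsort(-sims)[:top_k]        # indices of the most similar contexts
    candidates = set()
    for idx in nearest:                        # union of PoIs seen under similar contexts
        candidates.update(ctx_to_pois[idx])
    return candidates

rng = np.random.default_rng(0)
ctxs = rng.normal(size=(100, 16))
pois = [list(rng.integers(0, 50, size=5)) for _ in range(100)]
print(retrieve_candidate_pois(ctxs[0], ctxs, pois))
```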

FedRDMA: Communication-Efficient Cross-Silo Federated LLM via Chunked RDMA Transmission

no code implementations · 1 Mar 2024 · Zeling Zhang, Dongqi Cai, Yiran Zhang, Mengwei Xu, Shangguang Wang, Ao Zhou

Communication overhead is a significant bottleneck in federated learning (FL), and it has been exacerbated by the increasing size of AI models.

Federated Learning
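
The title's chunked-transmission idea can be pictured with a toy sketch: a serialized model update is split into fixed-size chunks for transfer and reassembled on the receiver. The 1 MiB chunk size and the framing are assumptions for illustration, not the paper's protocol:

```python
CHUNK_BYTES = 1 << 20  # assumed 1 MiB chunks

def chunk_update(update: bytes, chunk_bytes: int = CHUNK_BYTES):
    """Yield (sequence_number, payload) pairs covering the whole update."""
    for seq, offset in enumerate(range(0, len(update), chunk_bytes)):
        yield seq, update[offset:offset + chunk_bytes]

def reassemble(chunks):
    """Receiver side: order chunks by sequence number and concatenate."""
    return b"".join(payload for _, payload in sorted(chunks))

update = bytes(5 * CHUNK_BYTES + 123)          # fake 5 MiB+ model delta
chunks = list(chunk_update(update))
assert reassemble(chunks) == update
```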

A Survey of Resource-efficient LLM and Multimodal Foundation Models

1 code implementation · 16 Jan 2024 · Mengwei Xu, Wangsong Yin, Dongqi Cai, Rongjie Yi, Daliang Xu, QiPeng Wang, Bingyang Wu, Yihao Zhao, Chen Yang, Shihe Wang, Qiyang Zhang, Zhenyan Lu, Li Zhang, Shangguang Wang, Yuanchun Li, Yunxin Liu, Xin Jin, Xuanzhe Liu

Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion, and LLM-based multimodal models, are revolutionizing the entire machine learning lifecycle, from training to deployment.

Mobile Foundation Model as Firmware

1 code implementation · 28 Aug 2023 · Jinliang Yuan, Chen Yang, Dongqi Cai, Shihe Wang, Xin Yuan, Zeling Zhang, Xiang Li, Dingge Zhang, Hanzi Mei, Xianqing Jia, Shangguang Wang, Mengwei Xu

Concurrently, each app contributes a concise, offline fine-tuned "adapter" tailored to distinct downstream tasks.
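
A minimal sketch of the "shared foundation model as firmware, per-app adapters" idea, assuming a frozen backbone and residual bottleneck adapters (module names and sizes are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small bottleneck module an app could fine-tune offline for its own task."""
    def __init__(self, dim=256, bottleneck=16):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)
    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))   # residual adapter

backbone = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False                    # the shared "firmware" stays frozen

adapters = {"keyboard_app": Adapter(), "photo_app": Adapter()}  # one per app

def run(app_name, x):
    return adapters[app_name](backbone(x))     # shared model + the app's own adapter

print(run("keyboard_app", torch.randn(1, 256)).shape)
```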

EdgeMoE: Fast On-Device Inference of MoE-based Large Language Models

no code implementations · 28 Aug 2023 · Rongjie Yi, Liwei Guo, Shiyun Wei, Ao Zhou, Shangguang Wang, Mengwei Xu

Large Language Models (LLMs) such as GPT and LLaMA have ushered in a revolution in machine intelligence, owing to their exceptional capabilities in a wide range of machine learning tasks.

Computational Efficiency

FwdLLM: Efficient FedLLM using Forward Gradient

1 code implementation · 26 Aug 2023 · Mengwei Xu, Dongqi Cai, Yaozong Wu, Xiang Li, Shangguang Wang

Federated Learning (FL), a method to preserve user data privacy, is often employed to fine-tune LLMs for downstream mobile tasks, an approach known as FedLLM.

Federated Learning
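
FwdLLM's title points to forward (rather than backpropagated) gradients. A toy sketch of a forward-gradient step in the style of Baydin et al., using torch.func.jvp on an assumed scalar loss; this illustrates the estimator, not the paper's training system:

```python
import torch
from torch.func import jvp

def loss_fn(w):
    return ((w ** 2).sum() - 1.0) ** 2         # toy scalar loss

w = torch.randn(8)
for step in range(100):
    v = torch.randn_like(w)                    # random tangent direction
    # Forward-mode AD yields the directional derivative <grad, v> in a single
    # forward pass, with no backpropagation.
    loss, dir_deriv = jvp(loss_fn, (w,), (v,))
    grad_est = dir_deriv * v                   # unbiased gradient estimate
    w = w - 0.01 * grad_est
print(float(loss_fn(w)))
```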

Seed Feature Maps-based CNN Models for LEO Satellite Remote Sensing Services

no code implementations · 12 Aug 2023 · Zhichao Lu, Chuntao Ding, Shangguang Wang, Ran Cheng, Felix Juefei-Xu, Vishnu Naresh Boddeti

However, the limited resources available on LEO satellites contrast with the demands of resource-intensive CNN models, necessitating the adoption of ground-station server assistance for training and updating these models.

Semantic Segmentation

Mitigating Task Interference in Multi-Task Learning via Explicit Task Routing with Non-Learnable Primitives

no code implementations · CVPR 2023 · Chuntao Ding, Zhichao Lu, Shangguang Wang, Ran Cheng, Vishnu Naresh Boddeti

Our key idea is to employ non-learnable primitives to extract a diverse set of task-agnostic features and recombine them into a shared branch common to all tasks and explicit task-specific branches reserved for each task.

Multi-Task Learning
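
A rough sketch of the routing idea in the snippet, assuming frozen random convolutions as the non-learnable primitives (the layer choices and sizes are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ETR(nn.Module):
    def __init__(self, num_tasks=2, channels=16):
        super().__init__()
        self.primitives = nn.Conv2d(3, channels, 3, padding=1)
        for p in self.primitives.parameters():
            p.requires_grad = False            # non-learnable primitives
        self.shared = nn.Conv2d(channels, channels, 3, padding=1)
        self.task_branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_tasks)])
    def forward(self, x):
        feats = torch.relu(self.primitives(x))  # task-agnostic features
        shared = self.shared(feats)             # branch common to all tasks
        # each task recombines the features via its own explicit branch
        return [shared + branch(feats) for branch in self.task_branches]

outs = ETR()(torch.randn(1, 3, 32, 32))
print([o.shape for o in outs])
```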

TFormer: A Transmission-Friendly ViT Model for IoT Devices

no code implementations · 15 Feb 2023 · Zhichao Lu, Chuntao Ding, Felix Juefei-Xu, Vishnu Naresh Boddeti, Shangguang Wang, Yun Yang

TFormer's high performance and small number of model parameters and FLOPs are attributed to the proposed hybrid layer and partially connected feed-forward network (PCS-FFN).

Image Classification Object Detection +2
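
A partially connected feed-forward layer can be sketched as a block-diagonal (grouped) linear map that mixes channels only within each group; the group count here is an assumption, and this is not the paper's PCS-FFN implementation:

```python
import torch
import torch.nn as nn

class PartiallyConnectedFFN(nn.Module):
    def __init__(self, dim=64, groups=4):
        super().__init__()
        assert dim % groups == 0
        self.groups, g = groups, dim // groups
        self.fc = nn.ModuleList([nn.Linear(g, g) for _ in range(groups)])
    def forward(self, x):                      # x: (batch, dim)
        parts = x.chunk(self.groups, dim=-1)   # split channels into groups
        return torch.cat([f(p) for f, p in zip(self.fc, parts)], dim=-1)

dense = nn.Linear(64, 64)
pcffn = PartiallyConnectedFFN()
print(sum(p.numel() for p in dense.parameters()),
      sum(p.numel() for p in pcffn.parameters()))  # ~4x fewer parameters
```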

Federated Few-Shot Learning for Mobile NLP

1 code implementation · 12 Dec 2022 · Dongqi Cai, Shangguang Wang, Yaozong Wu, Felix Xiaozhu Lin, Mengwei Xu

Such an inadequacy of data labels is known as the few-shot scenario, and it is the key blocker for mobile NLP applications.

Few-Shot Learning Privacy Preserving

Towards Practical Few-shot Federated NLP

no code implementations · 1 Dec 2022 · Dongqi Cai, Yaozong Wu, Haitao Yuan, Shangguang Wang, Felix Xiaozhu Lin, Mengwei Xu

To address these challenges, we first introduce a data generator for federated few-shot learning tasks, which encompasses the quantity and skewness of scarce labeled data in a realistic setting.

Data Augmentation Federated Learning +1
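
A minimal sketch of such a generator, controlling quantity via a per-client shot budget and skewness via a Dirichlet class mixture (a common recipe, assumed here rather than taken from the paper):

```python
import numpy as np

def partition_labels(labels, num_clients, shots_per_client, alpha=0.5, seed=0):
    """Return per-client index lists of scarce labeled examples, skewed by alpha."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    out = []
    for _ in range(num_clients):
        # Smaller alpha => more skewed class mixture on each client.
        mix = rng.dirichlet(alpha * np.ones(len(classes)))
        picks = []
        for _ in range(shots_per_client):      # "quantity": few shots per client
            c = rng.choice(classes, p=mix)
            picks.append(rng.choice(np.flatnonzero(labels == c)))
        out.append(picks)
    return out

labels = np.random.default_rng(1).integers(0, 5, size=1000)
print(partition_labels(labels, num_clients=3, shots_per_client=8))
```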

FedAdapter: Efficient Federated Learning for Modern NLP

1 code implementation · 20 May 2022 · Dongqi Cai, Yaozong Wu, Shangguang Wang, Felix Xiaozhu Lin, Mengwei Xu

A key challenge is to properly configure the depth and width of adapters, to which training speed and efficiency are highly sensitive.

Federated Learning
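
The depth/width knobs can be pictured as "how many layers receive adapters" and "how wide the adapter bottleneck is". A toy sketch under those assumptions (not FedAdapter's actual configurator):

```python
import torch
import torch.nn as nn

def add_adapters(layers, depth, width, dim=128):
    """Attach a bottleneck adapter to the top `depth` layers, bottleneck = `width`."""
    adapted = []
    for i, layer in enumerate(layers):
        if i >= len(layers) - depth:           # only the top `depth` layers
            adapter = nn.Sequential(nn.Linear(dim, width), nn.ReLU(),
                                    nn.Linear(width, dim))
            adapted.append(nn.Sequential(layer, adapter))
        else:
            adapted.append(layer)
    return nn.Sequential(*adapted)

layers = [nn.Linear(128, 128) for _ in range(6)]
model = add_adapters(layers, depth=2, width=16)  # a shallow/narrow configuration
print(model(torch.randn(1, 128)).shape)
```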

Towards Sustainable Satellite Edge Computing

no code implementations · 10 Mar 2022 · Qing Li, Shangguang Wang, Xiao Ma, Ao Zhou, Fangchun Yang

Recently, Low Earth Orbit (LEO) satellites have developed rapidly, and satellite edge computing has emerged to address the limitations of the bent-pipe architecture in existing satellite systems.

Earth Observation Edge Computing +1

Hierarchical Federated Learning through LAN-WAN Orchestration

no code implementations · 22 Oct 2020 · Jinliang Yuan, Mengwei Xu, Xiao Ma, Ao Zhou, Xuanzhe Liu, Shangguang Wang

Our proposed FL accelerates the learning process and reduces monetary cost through frequent local aggregation within the same LAN and infrequent global aggregation on a cloud across the WAN.

Federated Learning
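
A toy sketch of this LAN-WAN orchestration pattern: frequent averaging within each LAN, infrequent averaging across the WAN. The topology and aggregation rates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w):
    return w - 0.1 * rng.normal(size=w.shape)  # stand-in for one client's training

num_lans, clients_per_lan, wan_every = 2, 3, 5  # assumed topology
lan_w = [np.zeros(4) for _ in range(num_lans)]  # one running model per LAN
global_w = np.zeros(4)
for rnd in range(1, 11):
    for i in range(num_lans):                  # frequent aggregation inside each LAN
        lan_w[i] = np.mean([local_update(lan_w[i])
                            for _ in range(clients_per_lan)], axis=0)
    if rnd % wan_every == 0:                   # infrequent aggregation across the WAN
        global_w = np.mean(lan_w, axis=0)
        lan_w = [global_w.copy() for _ in range(num_lans)]
print(global_w)
```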

Federated Neural Architecture Search

no code implementations · 15 Feb 2020 · Jinliang Yuan, Mengwei Xu, Yuxin Zhao, Kaigui Bian, Gang Huang, Xuanzhe Liu, Shangguang Wang

To preserve user privacy while enabling mobile intelligence, techniques have been proposed to train deep neural networks on decentralized data.

Neural Architecture Search
