no code implementations • 18 Apr 2024 • Qian Li, Cheng Ji, Shu Guo, Yong Zhao, Qianren Mao, Shangguang Wang, Yuntao Wei, JianXin Li
Existing methods neglect that multiple entity pairs in one sentence share very similar contextual information (i.e., the same text and image), which increases the difficulty of the MMRE task.
1 code implementation • 25 Mar 2024 • Alireza Furutanpey, Qiyang Zhang, Philipp Raith, Tobias Pfandzelter, Shangguang Wang, Schahram Dustdar
Further, it embeds context and leverages inter-tile dependencies to lower transfer costs with negligible overhead.
no code implementations • 22 Mar 2024 • Nan Jiang, Haitao Yuan, Jianing Si, Minxiao Chen, Shangguang Wang
Next point-of-interest (POI) prediction is a significant task in location-based services, yet its complexity arises from the need to consolidate spatial and semantic intent.
no code implementations • 19 Mar 2024 • Zhichao Feng, Junjie Xie, Kaiyuan Li, Yu Qin, Pengfei Wang, Qianzhong Li, Bin Yin, Xiang Li, Wei Lin, Shangguang Wang
We first identify contexts that share similar user preferences with the target context and then locate the corresponding PoIs based on these identified contexts.
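A minimal sketch of that two-step idea, assuming a toy representation in which each context carries a preference vector and a set of observed PoIs; all names and data here are illustrative, not from the paper.

```python
# Step 1: find contexts whose user preferences resemble the target context's.
# Step 2: collect the PoIs observed under those similar contexts.
# context_prefs, context_pois, and the threshold are made-up toy data.

context_prefs = {                      # context -> preference vector (toy)
    "weekday_morning": [1.0, 0.0],
    "weekend_morning": [0.9, 0.1],
    "weekday_night":   [0.0, 1.0],
}
context_pois = {                       # context -> PoIs seen in that context
    "weekday_morning": ["cafe"],
    "weekend_morning": ["bakery"],
    "weekday_night":   ["bar"],
}

def similarity(a, b):
    # Dot product as a stand-in for a learned preference similarity.
    return sum(x * y for x, y in zip(a, b))

def candidate_pois(target, threshold=0.5):
    # Step 1: contexts sharing similar user preferences with the target.
    similar = [c for c, p in context_prefs.items()
               if similarity(p, context_prefs[target]) >= threshold]
    # Step 2: locate the corresponding PoIs under those contexts.
    pois = []
    for c in similar:
        pois.extend(context_pois[c])
    return pois

print(candidate_pois("weekday_morning"))
```

Under this toy data, the weekday-morning query pulls in the weekend-morning context (similar preferences) and so surfaces its PoIs too.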
no code implementations • 1 Mar 2024 • Zeling Zhang, Dongqi Cai, Yiran Zhang, Mengwei Xu, Shangguang Wang, Ao Zhou
Communication overhead is a significant bottleneck in federated learning (FL), and it has been exacerbated by the increasing size of AI models.
1 code implementation • 16 Jan 2024 • Mengwei Xu, Wangsong Yin, Dongqi Cai, Rongjie Yi, Daliang Xu, QiPeng Wang, Bingyang Wu, Yihao Zhao, Chen Yang, Shihe Wang, Qiyang Zhang, Zhenyan Lu, Li Zhang, Shangguang Wang, Yuanchun Li, Yunxin Liu, Xin Jin, Xuanzhe Liu
Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion, and LLM-based multimodal models, are revolutionizing the entire machine learning lifecycle, from training to deployment.
no code implementations • 28 Aug 2023 • Rongjie Yi, Liwei Guo, Shiyun Wei, Ao Zhou, Shangguang Wang, Mengwei Xu
Large Language Models (LLMs) such as GPTs and LLaMa have ushered in a revolution in machine intelligence, owing to their exceptional capabilities in a wide range of machine learning tasks.
1 code implementation • 28 Aug 2023 • Jinliang Yuan, Chen Yang, Dongqi Cai, Shihe Wang, Xin Yuan, Zeling Zhang, Xiang Li, Dingge Zhang, Hanzi Mei, Xianqing Jia, Shangguang Wang, Mengwei Xu
Concurrently, each app contributes a concise, offline fine-tuned "adapter" tailored to distinct downstream tasks.
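The parameter-isolation idea behind per-app adapters can be sketched as follows; the classes, weights, and app names are assumptions for illustration (the real system fine-tunes adapter modules on an LLM backbone, not toy linear maps).

```python
# Illustrative sketch: one frozen shared backbone, plus a concise adapter
# contributed per app. Backbone, Adapter, and the app names are hypothetical.

class Backbone:
    """Frozen shared model; a fixed scalar map stands in for the LLM."""
    def __init__(self, weight):
        self.weight = weight          # never updated by any app

    def forward(self, x):
        return [self.weight * v for v in x]

class Adapter:
    """Concise, offline fine-tuned per-app module stacked on the backbone."""
    def __init__(self, scale, shift):
        self.scale = scale
        self.shift = shift

    def forward(self, h):
        return [self.scale * v + self.shift for v in h]

backbone = Backbone(weight=2.0)
# Each app ships only its adapter, tailored to its downstream task.
adapters = {"keyboard": Adapter(1.0, 0.5), "translator": Adapter(0.1, 0.0)}

def infer(app, x):
    # Shared backbone pass, then the requesting app's adapter.
    return adapters[app].forward(backbone.forward(x))

print(infer("keyboard", [1.0, 2.0]))
```

The design point is that the large backbone is stored once and frozen, while each app's contribution stays small enough to fine-tune offline and swap in at inference time.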
1 code implementation • 26 Aug 2023 • Mengwei Xu, Dongqi Cai, Yaozong Wu, Xiang Li, Shangguang Wang
Federated Learning (FL), a method to preserve user data privacy, is often employed in fine-tuning LLMs to downstream mobile tasks, an approach known as FedLLM.
no code implementations • 12 Aug 2023 • Zhichao Lu, Chuntao Ding, Shangguang Wang, Ran Cheng, Felix Juefei-Xu, Vishnu Naresh Boddeti
However, the limited resources available on LEO satellites contrast with the demands of resource-intensive CNN models, necessitating the adoption of ground-station server assistance for training and updating these models.
no code implementations • CVPR 2023 • Chuntao Ding, Zhichao Lu, Shangguang Wang, Ran Cheng, Vishnu Naresh Boddeti
Our key idea is to employ non-learnable primitives to extract a diverse set of task-agnostic features and recombine them into a shared branch common to all tasks and explicit task-specific branches reserved for each task.
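A toy rendering of that structure, assuming parameter-free functions as the non-learnable primitives and simple weighted sums as the branches; every function and constant here is illustrative, not the paper's architecture.

```python
# Non-learnable primitives: fixed, parameter-free feature extractors.
primitives = [
    lambda x: [abs(v) for v in x],     # e.g., a fixed nonlinearity
    lambda x: [v * v for v in x],      # e.g., a fixed polynomial feature
]

def extract(x):
    # Apply every primitive and concatenate the task-agnostic features.
    feats = []
    for p in primitives:
        feats.extend(p(x))
    return feats

def shared_branch(feats):
    # Branch common to all tasks: here, a fixed mean pooling.
    return sum(feats) / len(feats)

def task_branch(feats, weights):
    # Lightweight branch reserved for one specific task.
    return sum(w * f for w, f in zip(weights, feats))

x = [1.0, -2.0]
feats = extract(x)
shared = shared_branch(feats)
out_task_a = shared + task_branch(feats, [0.1, 0.1, 0.0, 0.0])
```

The point of the split is that only the small task-specific weights differ between tasks, while the primitives and the shared branch carry no task-specific parameters at all.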
no code implementations • 15 Feb 2023 • Zhichao Lu, Chuntao Ding, Felix Juefei-Xu, Vishnu Naresh Boddeti, Shangguang Wang, Yun Yang
TFormer's high performance with a small number of model parameters and FLOPs is attributed to the proposed hybrid layer and the proposed partially connected feed-forward network (PCS-FFN).
1 code implementation • 12 Dec 2022 • Dongqi Cai, Shangguang Wang, Yaozong Wu, Felix Xiaozhu Lin, Mengwei Xu
Such an inadequacy of data labels is known as a few-shot scenario; it becomes the key blocker for mobile NLP applications.
no code implementations • 1 Dec 2022 • Dongqi Cai, Yaozong Wu, Haitao Yuan, Shangguang Wang, Felix Xiaozhu Lin, Mengwei Xu
To address these challenges, we first introduce a data generator for federated few-shot learning tasks, which encompasses the quantity and skewness of scarce labeled data in a realistic setting.
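One way such a generator could work is sketched below; the sampling scheme, parameter names, and the meaning of "skew" are assumptions, chosen only to show how label quantity and skewness might be controlled together.

```python
# Hypothetical few-shot partition generator: give each client a small number
# of labeled examples, biased toward one favored class when skew > 0.
import random

def generate_fewshot_partitions(labels, n_clients, shots_per_client,
                                skew, seed=0):
    """Return a list of index lists, one per client."""
    rng = random.Random(seed)          # seeded for reproducible splits
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    classes = sorted(by_class)
    clients = []
    for c in range(n_clients):
        favored = classes[c % len(classes)]    # each client leans to a class
        picks = []
        for _ in range(shots_per_client):
            if rng.random() < skew:
                cls = favored                  # skewed draw
            else:
                cls = rng.choice(classes)      # uniform draw
            picks.append(rng.choice(by_class[cls]))
        clients.append(picks)
    return clients

parts = generate_fewshot_partitions([0] * 5 + [1] * 5,
                                    n_clients=2, shots_per_client=3, skew=0.8)
```

Raising `shots_per_client` controls quantity, raising `skew` controls label imbalance, which matches the two axes the data generator is said to cover.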
1 code implementation • 15 Jun 2022 • Rongjie Yi, Ting Cao, Ao Zhou, Xiao Ma, Shangguang Wang, Mengwei Xu
DNNs are ubiquitous on edge devices nowadays.
1 code implementation • 20 May 2022 • Dongqi Cai, Yaozong Wu, Shangguang Wang, Felix Xiaozhu Lin, Mengwei Xu
A key challenge is to properly configure the depth and width of adapters, to which the training speed and efficiency are highly sensitive.
no code implementations • 10 Mar 2022 • Qing Li, Shangguang Wang, Xiao Ma, Ao Zhou, Fangchun Yang
Recently, Low Earth Orbit (LEO) satellites have undergone rapid development, and satellite edge computing has emerged to address the limitations of the bent-pipe architecture in existing satellite systems.
1 code implementation • 14 Feb 2022 • Qiyang Zhang, Xiang Li, Xiangying Che, Xiao Ma, Ao Zhou, Mengwei Xu, Shangguang Wang, Yun Ma, Xuanzhe Liu
Deploying deep learning (DL) on mobile devices has been a notable trend in recent years.
no code implementations • 22 Oct 2020 • Jinliang Yuan, Mengwei Xu, Xiao Ma, Ao Zhou, Xuanzhe Liu, Shangguang Wang
Our proposed FL can accelerate the learning process and reduce the monetary cost with frequent local aggregation in the same LAN and infrequent global aggregation on a cloud across WAN.
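The LAN/WAN hierarchy above can be sketched as nested FedAvg-style averaging; the round counts, group structure, and scalar "models" below are toy assumptions, not the paper's configuration.

```python
# Illustrative hierarchical FL: frequent aggregation inside each LAN,
# infrequent global aggregation across the WAN. Models are single floats.

def average(models):
    return sum(models) / len(models)

def train_round(model, client_delta):
    # Stand-in for local SGD: each client shifts the model by its delta.
    return model + client_delta

def hierarchical_fl(lans, global_model, wan_rounds=2, lan_rounds=3):
    for _ in range(wan_rounds):              # infrequent, costly WAN sync
        lan_models = []
        for deltas in lans:                  # one group of clients per LAN
            model = global_model
            for _ in range(lan_rounds):      # frequent, cheap LAN sync
                model = average([train_round(model, d) for d in deltas])
            lan_models.append(model)
        global_model = average(lan_models)   # cloud aggregation across WAN
    return global_model

final = hierarchical_fl(lans=[[0.1, 0.3], [0.2, 0.2]], global_model=0.0)
```

Because each WAN round amortizes several LAN rounds of local progress, the expensive cross-WAN exchange happens far less often, which is the source of the claimed speed and monetary savings.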
no code implementations • 15 Feb 2020 • Jinliang Yuan, Mengwei Xu, Yuxin Zhao, Kaigui Bian, Gang Huang, Xuanzhe Liu, Shangguang Wang
To preserve user privacy while enabling mobile intelligence, techniques have been proposed to train deep neural networks on decentralized data.