Search Results for author: Rui Kong

Found 7 papers, 3 papers with code

LoRA-Switch: Boosting the Efficiency of Dynamic LLM Adapters via System-Algorithm Co-design

no code implementations • 28 May 2024 • Rui Kong, Qiyang Li, Xinyu Fang, Qingtian Feng, Qingfeng He, Yazhu Dong, Weijun Wang, Yuanchun Li, Linghe Kong, Yunxin Liu

Recent literature has found that an effective method to customize or further improve large language models (LLMs) is to add dynamic adapters, such as low-rank adapters (LoRA) with Mixture-of-Experts (MoE) structures.
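As a rough illustration of the dynamic-adapter pattern described above, the sketch below wraps a frozen linear layer with a few LoRA experts selected by a small router. The class name, sizes, and top-1 routing rule are assumptions for illustration, not the LoRA-Switch design.

```python
import torch
import torch.nn as nn

class MoELoRALinear(nn.Module):
    """A frozen linear layer augmented with several LoRA experts and a router.

    Illustrative only: the class name, sizes, and top-1 routing rule are
    assumptions, not the LoRA-Switch design.
    """
    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # keep pretrained weights frozen
        d_in, d_out = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(num_experts, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, rank, d_out))
        self.router = nn.Linear(d_in, num_experts)        # picks which LoRA expert to apply

    def forward(self, x):                                 # x: (batch, d_in)
        gates = torch.softmax(self.router(x), dim=-1)     # (batch, num_experts)
        idx = gates.argmax(dim=-1)                        # top-1 expert per sample
        weight = gates.gather(1, idx.unsqueeze(1))        # gate value of the chosen expert
        delta = torch.einsum("bi,bir,bro->bo", x, self.A[idx], self.B[idx])
        return self.base(x) + weight * delta              # frozen path + routed low-rank update
```

In this sketch, something like `MoELoRALinear(nn.Linear(768, 768))` would drop in where the frozen projection sat, with only the A/B matrices and the router trainable.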

A Gray-Box Stability Analysis Mechanism for Power Electronic Converters

no code implementations • 15 Apr 2024 • Rui Kong, Subham Sahoo, Yubo Song, Frede Blaabjerg

This paper proposes a gray-box stability analysis mechanism based on data-driven dynamic mode decomposition (DMD) for commercial grid-tied power electronic converters with limited information on their control parameters and topology.
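For readers unfamiliar with DMD, below is a minimal textbook implementation of exact DMD on a snapshot matrix; the rank, variable names, and the stability reading via discrete-time eigenvalues are generic assumptions, not the paper's full gray-box mechanism.

```python
import numpy as np

def exact_dmd(X, r=10):
    """Exact dynamic mode decomposition of a snapshot matrix X (states x time).

    Minimal textbook DMD. For sampled converter measurements, discrete-time
    eigenvalues with |lambda| > 1 indicate growing (unstable) dynamics.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                          # successive snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = min(r, len(s))
    U, s, Vh = U[:, :r], s[:r], Vh[:r]                    # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)                   # eigenvalues of the reduced operator
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W       # exact DMD modes
    return eigvals, modes
```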

Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security

2 code implementations • 10 Jan 2024 • Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, Yunxin Liu

Next, we discuss several key challenges in achieving intelligent, efficient, and secure Personal LLM Agents, followed by a comprehensive survey of representative solutions to address these challenges.

ACT: Empowering Decision Transformer with Dynamic Programming via Advantage Conditioning

1 code implementation • 12 Sep 2023 • Chen-Xiao Gao, Chenyang Wu, Mingjun Cao, Rui Kong, Zongzhang Zhang, Yang Yu

Third, we train an Advantage-Conditioned Transformer (ACT) to generate actions conditioned on the estimated advantages.

Action Generation
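A toy sketch of the advantage-conditioning idea above: an MLP stand-in (not the paper's transformer) embeds a scalar advantage estimate and adds it to the state embedding before predicting an action. All names and sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AdvantageConditionedPolicy(nn.Module):
    """Predict an action from a state and an estimated advantage.

    A toy MLP stand-in for ACT's transformer; how the advantage is injected
    and all sizes/names here are assumptions for illustration.
    """
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.state_emb = nn.Linear(state_dim, hidden)
        self.adv_emb = nn.Linear(1, hidden)               # scalar advantage -> embedding
        self.head = nn.Sequential(
            nn.ReLU(), nn.Linear(hidden, hidden),
            nn.ReLU(), nn.Linear(hidden, action_dim),
        )

    def forward(self, state, advantage):
        # advantage: (batch, 1), e.g. estimated by a separately trained critic
        h = self.state_emb(state) + self.adv_emb(advantage)
        return self.head(h)
```

At evaluation time, feeding a high target advantage asks the model for near-optimal actions, analogous to return conditioning in Decision Transformer.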

SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with Tunable Memory Budget

no code implementations • 29 Aug 2023 • Rui Kong, Yuanchun Li, Qingtian Feng, Weijun Wang, Xiaozhou Ye, Ye Ouyang, Linghe Kong, Yunxin Liu

Mixture of experts (MoE) is a popular technique to improve the capacity of Large Language Models (LLMs) with conditionally activated parallel experts.

Object Detection +1
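To make the tunable memory budget concrete, here is a hypothetical sketch that keeps only a fixed number of experts resident on the accelerator, tracks a running gate-mass importance score, and swaps experts between host and device memory. The importance statistic and swap policy are assumptions for illustration, not SwapMoE's actual algorithm.

```python
import torch

class BudgetedExpertPool:
    """Serve an MoE layer while keeping only `budget` experts on the accelerator.

    Hypothetical sketch; experts are assumed to be d -> d feed-forward modules.
    """
    def __init__(self, experts: list, budget: int, device: str = "cuda"):
        self.experts = experts                            # full expert set lives in host memory
        self.budget = budget
        self.device = device
        self.importance = torch.zeros(len(experts))
        self.resident = list(range(budget))               # start with the first `budget` experts
        for i in self.resident:
            self.experts[i].to(device)

    def update(self, gate_probs):                         # gate_probs: (batch, num_experts)
        self.importance = 0.9 * self.importance + 0.1 * gate_probs.mean(0).cpu()
        wanted = torch.topk(self.importance, self.budget).indices.tolist()
        for i in set(self.resident) - set(wanted):
            self.experts[i].to("cpu")                     # evict a cold expert
        for i in set(wanted) - set(self.resident):
            self.experts[i].to(self.device)               # load a newly hot expert
        self.resident = wanted

    def forward(self, x, gate_probs):
        # route only within the resident experts, renormalizing their gate mass
        sub = gate_probs[:, self.resident]
        sub = sub / sub.sum(-1, keepdim=True).clamp_min(1e-9)
        out = torch.zeros_like(x)
        for slot, idx in enumerate(self.resident):
            out = out + sub[:, slot:slot + 1] * self.experts[idx](x)
        return out
```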

PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification

1 code implementation • 22 Aug 2023 • Yizhen Yuan, Rui Kong, Shenghao Xie, Yuanchun Li, Yunxin Liu

However, most backdoor attacks have to modify the neural network model through training with poisoned data and/or direct model editing, which leads to a common but false belief that backdoor attacks can be easily avoided by properly protecting the model.

Backdoor Attack • Real-World Adversarial Attack
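A minimal sketch of the threat model above: the trigger is a patch composited onto the input, so the network weights are never touched. The helper below only overlays a fixed patch on a batch of images; the function name and layout are made up, and the paper's patch optimization is not shown.

```python
import torch

def apply_patch(images, patch, top=0, left=0):
    """Overlay a fixed trigger patch on a batch of images; the model is untouched.

    images: (B, C, H, W) in [0, 1]; patch: (C, h, w). Illustrative only.
    """
    c, h, w = patch.shape
    out = images.clone()
    out[:, :, top:top + h, left:left + w] = patch         # broadcast over the batch
    return out
```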
