1 code implementation • 2 Feb 2024 • Tiansheng Huang, Sihao Hu, Ling Liu
The new paradigm of finetuning-as-a-service introduces a new attack surface for Large Language Models (LLMs): a few harmful data points uploaded by users can easily trick finetuning into producing an alignment-broken model.
1 code implementation • 2 Feb 2024 • Sihao Hu, Tiansheng Huang, Ling Liu
We introduce PokeLLMon, the first LLM-embodied agent that achieves human-parity performance in tactical battle games, as demonstrated in Pokemon battles.
1 code implementation • 2 Oct 2023 • Sihao Hu, Tiansheng Huang, Fatih İlhan, Selim Furkan Tekin, Ling Liu
The goal of the auditor is to yield a broad spectrum of vulnerabilities in the hope of encompassing the correct answer, whereas the goal of the critic, which evaluates the validity of identified vulnerabilities, is to minimize the number of false positives.
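The recall-oriented auditor and precision-oriented critic described above can be sketched as a simple generate-then-filter loop. This is an illustrative toy, not the paper's actual pipeline: the function names, the string-matching "detectors", and the vulnerability labels are all assumptions standing in for LLM calls.

```python
# Minimal sketch of an auditor-critic pattern: the auditor
# over-generates candidate vulnerabilities (favoring recall), and the
# critic filters them to suppress false positives (favoring precision).
# In the actual system both roles would be played by an LLM.

def auditor(code_snippet):
    """Over-generate candidate vulnerabilities for a code snippet."""
    candidates = []
    if "strcpy" in code_snippet:
        candidates.append("buffer-overflow")
    if "exec(" in code_snippet:
        candidates.append("code-injection")
    if "==" in code_snippet:
        candidates.append("logic-error")  # deliberately noisy guess
    return candidates

def critic(code_snippet, candidate):
    """Accept a candidate only if supporting evidence is present."""
    evidence = {
        "buffer-overflow": "strcpy",
        "code-injection": "exec(",
    }
    token = evidence.get(candidate)
    return token is not None and token in code_snippet

def audit(code_snippet):
    """Broad generation followed by strict validation."""
    return [c for c in auditor(code_snippet) if critic(code_snippet, c)]

result = audit("strcpy(dst, src); if (x == 1) exec(cmd);")
print(result)  # the noisy "logic-error" guess is rejected by the critic
```

The division of labor is the point: the auditor is tuned to miss nothing, and the critic absorbs the resulting noise.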
1 code implementation • 21 Feb 2023 • Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, DaCheng Tao
Federated learning is an emerging distributed machine learning framework that jointly trains a global model across a large number of local devices while preserving data privacy.
1 code implementation • 21 Feb 2023 • Tiansheng Huang, Li Shen, Yan Sun, Weiwei Lin, DaCheng Tao
Personalized federated learning, as a variant of federated learning, trains customized models for clients using their heterogeneously distributed data.
1 code implementation • 15 Jan 2023 • Fatih Ilhan, Ka-Ho Chow, Sihao Hu, Tiansheng Huang, Selim Tekin, Wenqi Wei, Yanzhao Wu, Myungjin Lee, Ramana Kompella, Hugo Latapie, Gaowen Liu, Ling Liu
Instead of having every sample go through all DNN layers during prediction, EENet learns an early exit scheduler, which can intelligently terminate inference early for predictions in which the model already has high confidence.
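The early-exit idea can be illustrated with a toy sketch. Note that EENet learns its exit scheduler, whereas this sketch substitutes a fixed confidence threshold; the three-stage "model" and its probability outputs are hypothetical.

```python
# Toy confidence-based early exit (a stand-in for EENet's learned
# scheduler): each stage refines a class-probability estimate, and
# inference stops as soon as the top probability clears a threshold,
# saving the compute of the remaining layers.

def early_exit_inference(stages, x, threshold=0.9):
    """stages: list of callables, each mapping x to class probabilities."""
    for depth, stage in enumerate(stages, start=1):
        probs = stage(x)
        best = max(probs)
        if best >= threshold:      # confident enough: exit early
            return probs.index(best), depth
    # Fell through every exit: use the final stage's prediction.
    return probs.index(max(probs)), depth

# Hypothetical 3-stage model whose confidence grows with depth.
stages = [
    lambda x: [0.60, 0.40],
    lambda x: [0.95, 0.05],   # confident at stage 2 -> early exit
    lambda x: [0.99, 0.01],
]
print(early_exit_inference(stages, None))  # (0, 2): class 0, exited at depth 2
```

Easy samples exit at shallow depth while hard samples run the full network, which is where the average inference savings come from.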
no code implementations • 27 Jan 2022 • Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, DaCheng Tao
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
no code implementations • 29 Sep 2021 • Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, DaCheng Tao
Federated learning (FL) is particularly vulnerable to heterogeneously distributed data, since a common global model in FL may not adapt to the heterogeneous data distribution of each user.
no code implementations • 10 Feb 2021 • Tiansheng Huang, Weiwei Lin, Xiaobin Hong, Xiumin Wang, Qingbo Wu, Rui Li, Ching-Hsien Hsu, Albert Y. Zomaya
With astonishing speed, bandwidth, and scale, Mobile Edge Computing (MEC) has played an increasingly important role in the next generation of connectivity and service delivery.
no code implementations • 17 Nov 2020 • Tiansheng Huang, Weiwei Lin, Li Shen, Keqin Li, Albert Y. Zomaya
Federated Learning (FL), arising as a privacy-preserving machine learning paradigm, has received notable attention from the public.
no code implementations • 3 Nov 2020 • Tiansheng Huang, Weiwei Lin, Wentai Wu, Ligang He, Keqin Li, Albert Y. Zomaya
The client selection policy is critical to an FL process in terms of training efficiency, final model quality, and fairness.
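A minimal sketch of such a selection policy, trading off per-client utility against fairness, might look as follows. This is an illustrative assumption, not the paper's actual policy: the scoring formula, the `alpha` weight, and the toy client statistics are all made up for the example.

```python
# Illustrative client-selection sketch: pick K clients per round by a
# score that balances estimated utility (e.g., recent local loss)
# against fairness (rounds since the client was last selected).

def select_clients(clients, k, alpha=0.5):
    """clients: dict name -> (utility, rounds_since_selected)."""
    def score(item):
        _, (utility, staleness) = item
        # Higher utility helps efficiency; higher staleness helps fairness.
        return alpha * utility + (1 - alpha) * staleness
    ranked = sorted(clients.items(), key=score, reverse=True)
    return [name for name, _ in ranked[:k]]

clients = {
    "a": (0.9, 0),   # high utility, but selected last round
    "b": (0.2, 5),   # low utility, long neglected
    "c": (0.5, 2),
    "d": (0.1, 1),
}
print(select_clients(clients, k=2))
```

Shifting `alpha` toward 1 recovers a pure utility-greedy policy (fast convergence, poor fairness); shifting it toward 0 approaches round-robin.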