Search Results for author: Tiansheng Huang

Found 11 papers, 6 papers with code

Vaccine: Perturbation-aware Alignment for Large Language Model

1 code implementation • 2 Feb 2024 • Tiansheng Huang, Sihao Hu, Ling Liu

The new paradigm of finetuning-as-a-service introduces a new attack surface for Large Language Models (LLMs): a few harmful data points uploaded by users can easily trick the finetuning process into producing an alignment-broken model.

Language Modelling • Large Language Model

PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models

1 code implementation • 2 Feb 2024 • Sihao Hu, Tiansheng Huang, Ling Liu

We introduce PokeLLMon, the first LLM-embodied agent that achieves human-parity performance in tactical battle games, as demonstrated in Pokemon battles.

Action Generation • Decision Making +1

Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives

1 code implementation • 2 Oct 2023 • Sihao Hu, Tiansheng Huang, Fatih İlhan, Selim Furkan Tekin, Ling Liu

The goal of the auditor is to yield a broad spectrum of vulnerabilities in the hope of encompassing the correct answer, whereas the goal of the critic, which evaluates the validity of identified vulnerabilities, is to minimize the number of false positives.

Language Modelling • Large Language Model +1
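The auditor-critic division of labor described in the abstract can be sketched as a simple two-stage filter (a minimal illustration; the paper backs both roles with LLM prompts, whereas the callables and names here are hypothetical stand-ins):

```python
# Sketch of an auditor-critic pipeline: the auditor over-generates
# candidate vulnerabilities to maximize recall, and the critic prunes
# likely false positives to improve precision.

def audit_then_criticize(contract, auditor, critic):
    """Return only the candidate findings the critic confirms.

    auditor: callable mapping a contract to a broad list of candidate
             vulnerabilities (cast a wide net).
    critic:  callable returning True if a candidate looks valid for
             this contract (minimize false positives).
    """
    candidates = auditor(contract)
    return [c for c in candidates if critic(contract, c)]
```

For example, if the auditor proposes `["reentrancy", "overflow", "spurious"]` and the critic rejects the last candidate, only the first two survive.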

FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy

1 code implementation • 21 Feb 2023 • Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, DaCheng Tao

Federated learning is an emerging distributed machine learning framework which jointly trains a global model via a large number of local devices with data privacy protections.

Federated Learning

Fusion of Global and Local Knowledge for Personalized Federated Learning

1 code implementation • 21 Feb 2023 • Tiansheng Huang, Li Shen, Yan Sun, Weiwei Lin, DaCheng Tao

Personalized federated learning, as a variant of federated learning, trains customized models for clients using their heterogeneously distributed data.

Personalized Federated Learning

Adaptive Deep Neural Network Inference Optimization with EENet

1 code implementation • 15 Jan 2023 • Fatih Ilhan, Ka-Ho Chow, Sihao Hu, Tiansheng Huang, Selim Tekin, Wenqi Wei, Yanzhao Wu, Myungjin Lee, Ramana Kompella, Hugo Latapie, Gaowen Liu, Ling Liu

Instead of having every sample go through all DNN layers during prediction, EENet learns an early exit scheduler, which can intelligently terminate inference early for those predictions in which the model has high confidence.

Inference Optimization • Scheduling +1
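The early-exit idea above can be sketched as follows (a minimal illustration using a fixed confidence threshold; EENet itself *learns* the exit scheduler, and the head/threshold names here are hypothetical):

```python
# Sketch of early-exit inference: attach a classifier head to several
# intermediate DNN layers and stop at the first head whose prediction
# is confident enough, saving the cost of the remaining layers.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_predict(sample, exit_heads, threshold=0.9):
    """Run exit heads in depth order; stop at the first confident one.

    exit_heads: list of callables, each mapping a sample to class logits
                (standing in for classifiers at intermediate layers).
    Returns (predicted_class, exit_index).
    """
    probs = None
    for i, head in enumerate(exit_heads):
        probs = softmax(head(sample))
        if max(probs) >= threshold:  # confident: terminate inference early
            return probs.index(max(probs)), i
    # No early exit fired: use the final head's prediction.
    return probs.index(max(probs)), len(exit_heads) - 1
```

An "easy" sample that the first head classifies confidently exits at depth 0; a harder sample falls through to deeper heads.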

Achieving Personalized Federated Learning with Sparse Local Models

no code implementations • 27 Jan 2022 • Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, DaCheng Tao

To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.

Personalized Federated Learning

On Heterogeneously Distributed Data, Sparsity Matters

no code implementations • 29 Sep 2021 • Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, DaCheng Tao

Federated learning (FL) is particularly vulnerable to heterogeneously distributed data, since a common global model in FL may not adapt to the heterogeneous data distribution of each user.

Personalized Federated Learning

Adaptive Processor Frequency Adjustment for Mobile Edge Computing with Intermittent Energy Supply

no code implementations • 10 Feb 2021 • Tiansheng Huang, Weiwei Lin, Xiaobin Hong, Xiumin Wang, Qingbo Wu, Rui Li, Ching-Hsien Hsu, Albert Y. Zomaya

With astonishing speed, bandwidth, and scale, Mobile Edge Computing (MEC) has played an increasingly important role in the next generation of connectivity and service delivery.

Edge-computing

Stochastic Client Selection for Federated Learning with Volatile Clients

no code implementations • 17 Nov 2020 • Tiansheng Huang, Weiwei Lin, Li Shen, Keqin Li, Albert Y. Zomaya

Federated Learning (FL), arising as a privacy-preserving machine learning paradigm, has received notable attention from the public.

Fairness • Federated Learning +1

An Efficiency-boosting Client Selection Scheme for Federated Learning with Fairness Guarantee

no code implementations • 3 Nov 2020 • Tiansheng Huang, Weiwei Lin, Wentai Wu, Ligang He, Keqin Li, Albert Y. Zomaya

The client selection policy is critical to an FL process in terms of training efficiency, the final model's quality, and fairness.

Distributed Computing • Fairness +1
