1 code implementation • 1 Mar 2025 • Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Zachary Yahn, Yichang Xu, Ling Liu
While safety alignment has been extensively studied for LLMs, there is still a large research gap for Large Reasoning Models (LRMs), which are equipped with improved reasoning capabilities.
1 code implementation • 6 Feb 2025 • Selim Furkan Tekin, Fatih Ilhan, Tiansheng Huang, Sihao Hu, Zachary Yahn, Ling Liu
First, we develop an agent-fusion framework for encouraging multiple LLM-based agents to collaborate in producing the final inference output for each LLM query.
no code implementations • 31 Jan 2025 • Yan Sun, Tiansheng Huang, Liang Ding, Li Shen, DaCheng Tao
Zeroth-order optimization (ZO) has demonstrated remarkable promise in efficient fine-tuning tasks for Large Language Models (LLMs).
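As a point of reference, below is a minimal sketch of the classic two-point zeroth-order gradient estimator that ZO fine-tuning methods build on, using a shared random direction regenerated from a fixed seed; it is illustrative background, not this paper's specific algorithm.

```python
import torch

def zo_sgd_step(model, loss_fn, batch, lr=1e-6, eps=1e-3, seed=0):
    # Two-point zeroth-order estimate: probe the loss at theta + eps*z and
    # theta - eps*z, then update along z. No backward pass is needed.
    params = [p for p in model.parameters() if p.requires_grad]
    with torch.no_grad():
        torch.manual_seed(seed)
        for p in params:
            p.add_(eps * torch.randn_like(p))
        loss_plus = loss_fn(model, batch).item()

        torch.manual_seed(seed)
        for p in params:
            p.sub_(2 * eps * torch.randn_like(p))
        loss_minus = loss_fn(model, batch).item()

        # Scalar projection of the gradient onto z, then restore and update.
        g = (loss_plus - loss_minus) / (2 * eps)
        torch.manual_seed(seed)
        for p in params:
            z = torch.randn_like(p)
            p.add_(eps * z)       # back to the original weights
            p.sub_(lr * g * z)    # ZO-SGD update along the probe direction
```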
1 code implementation • 30 Jan 2025 • Yibo Wang, Tiansheng Huang, Li Shen, Huanjin Yao, Haotian Luo, Rui Liu, Naiqiang Tan, Jiaxing Huang, DaCheng Tao
Mainstream defenses aim to vaccinate the model so that a later harmful fine-tuning attack is less effective.
1 code implementation • 29 Jan 2025 • Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu
By designing a new red-teaming method, we show in this paper that purely relying on a moderation guardrail for data filtration is not reliable.
1 code implementation • 26 Nov 2024 • Selim Furkan Tekin, Fatih Ilhan, Tiansheng Huang, Sihao Hu, Zachary Yahn, Ling Liu
The former penalizes the selection errors of the expert-router, while the latter mediates the drift of expert weights during fine-tuning and dynamically adjusts the fusion behavior of the resulting model by channeling the activations over the experts.
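A hedged sketch of what a router selection-error penalty could look like is given below, assuming the "selection error" is a cross-entropy between the router's expert scores and the expert that actually achieves the lowest loss on each example; the paper's exact loss terms may differ.

```python
import torch
import torch.nn.functional as F

def router_selection_loss(router_logits, expert_losses):
    """router_logits: (batch, n_experts) scores from the expert-router.
    expert_losses: (batch, n_experts) per-example loss of each expert."""
    best_expert = expert_losses.argmin(dim=-1)           # oracle choice per example
    return F.cross_entropy(router_logits, best_expert)   # penalize wrong selections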
1 code implementation • 13 Oct 2024 • Guozhi Liu, Weiwei Lin, Tiansheng Huang, Ruichao Mo, Qi Mu, Li Shen
Second, instead of applying uniform perturbation across all layers, T-Vaccine only applies perturbation to the safety-critical layers while keeping other layers frozen during training.
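The sketch below illustrates the general idea of layer-selective perturbation under stated assumptions: only layers matching a list of "safety-critical" prefixes are trainable and perturbed, while the rest stay frozen. The paper's actual perturbation may target hidden representations rather than weights, and its layer-selection criterion is not reproduced here.

```python
import torch

def selective_perturb(model, loss_fn, batch, critical_layers, rho=0.1):
    # Train only the layers judged safety-critical; freeze everything else.
    for name, p in model.named_parameters():
        p.requires_grad = any(name.startswith(prefix) for prefix in critical_layers)

    loss = loss_fn(model, batch)
    loss.backward()

    # Move only the critical layers' weights in the gradient direction,
    # scaled to a radius rho, before the defense objective is applied.
    with torch.no_grad():
        for p in model.parameters():
            if p.requires_grad and p.grad is not None:
                p.add_(rho * p.grad / (p.grad.norm() + 1e-12))
```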
1 code implementation • 4 Oct 2024 • Selim Furkan Tekin, Fatih Ilhan, Tiansheng Huang, Sihao Hu, Ling Liu
This paper presents LLM-TOPLA, a diversity-optimized LLM ensemble method with three unique properties: (i) We introduce the focal diversity metric to capture the diversity-performance correlation among component LLMs of an ensemble.
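For intuition, here is a simplified, disagreement-based proxy for focal-diversity-style scoring: for each "focal" model, look only at the examples it gets wrong and measure how often the other ensemble members fail on the same examples. This is an illustrative proxy, not LLM-TOPLA's exact focal diversity formula.

```python
import numpy as np

def focal_diversity_proxy(correct):
    """correct: (n_models, n_examples) boolean matrix, True = model answered correctly."""
    n_models = correct.shape[0]
    scores = []
    for focal in range(n_models):
        focal_wrong = ~correct[focal]
        if focal_wrong.sum() == 0:
            continue
        others = np.delete(correct, focal, axis=0)
        # Fraction of the focal model's failures the other members also miss.
        joint_failure = (~others[:, focal_wrong]).mean()
        scores.append(1.0 - joint_failure)
    return float(np.mean(scores)) if scores else 1.0
```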
2 code implementations • 26 Sep 2024 • Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu
To clear up these concerns, this paper provides a comprehensive overview of three aspects of harmful fine-tuning: attack settings, defense designs, and evaluation methodologies.
1 code implementation • 3 Sep 2024 • Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu
For the first time in the literature, we show in this paper that harmful perturbation over the model weights is the root cause of the alignment break caused by harmful fine-tuning.
3 code implementations • 18 Aug 2024 • Tiansheng Huang, Gautam Bhattacharya, Pratik Joshi, Josh Kimball, Ling Liu
To this end, we propose Antidote, a post-fine-tuning-stage solution that remains agnostic to the training hyper-parameters of the fine-tuning stage.
1 code implementation • 19 Jul 2024 • Ka-Ho Chow, Sihao Hu, Tiansheng Huang, Ling Liu
Second, we incorporate a perceptibility optimization to preserve the visual quality of the protected facial images.
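A minimal sketch of a perceptibility term is shown below: alongside whatever protection objective perturbs the face image, visible distortion is penalized so the protected image stays close to the original. The paper's actual perceptual objective may differ; this version uses simple pixel-space proxies.

```python
import torch

def perceptibility_penalty(protected, original, tv_weight=0.1):
    # Keep protected pixels close to the original image.
    pixel_term = torch.mean((protected - original) ** 2)
    # Total variation discourages high-frequency, eye-catching artifacts.
    tv_term = (protected[..., 1:, :] - protected[..., :-1, :]).abs().mean() + \
              (protected[..., :, 1:] - protected[..., :, :-1]).abs().mean()
    return pixel_term + tv_weight * tv_term
```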
1 code implementation • 28 May 2024 • Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu
Recent studies show that Large Language Models (LLMs) with safety alignment can be jail-broken by fine-tuning on a dataset mixed with harmful data.
no code implementations • 5 Apr 2024 • Selim Furkan Tekin, Fatih Ilhan, Tiansheng Huang, Sihao Hu, Ka-Ho Chow, Margaret L. Loper, Ling Liu
This paper presents FusionShot, a focal diversity optimized few-shot ensemble learning approach for boosting the robustness and generalization performance of pre-trained few-shot models.
1 code implementation • 2 Apr 2024 • Sihao Hu, Tiansheng Huang, Gaowen Liu, Ramana Rao Kompella, Fatih Ilhan, Selim Furkan Tekin, Yichang Xu, Zachary Yahn, Ling Liu
The development of game agents plays a critical role in advancing towards Artificial General Intelligence.
1 code implementation • 2 Feb 2024 • Tiansheng Huang, Sihao Hu, Ling Liu
The new paradigm of finetuning-as-a-service introduces a new attack surface for Large Language Models (LLMs): a few harmful data uploaded by users can easily trick the finetuning to produce an alignment-broken model.
1 code implementation • 2 Feb 2024 • Sihao Hu, Tiansheng Huang, Ling Liu
We introduce PokeLLMon, the first LLM-embodied agent that achieves human-parity performance in tactical battle games, as demonstrated in Pokemon battles.
1 code implementation • CVPR 2024 • Fatih Ilhan, Gong Su, Selim Furkan Tekin, Tiansheng Huang, Sihao Hu, Ling Liu
With the recent advances in vision transformers and large language models (LLMs), fine-tuning costly large models on downstream learning tasks poses significant challenges under limited computational resources.
1 code implementation • 2 Oct 2023 • Sihao Hu, Tiansheng Huang, Fatih İlhan, Selim Furkan Tekin, Ling Liu
The goal of the auditor is to yield a broad spectrum of vulnerabilities in the hope of encompassing the correct answer, whereas the goal of the critic, which evaluates the validity of the identified vulnerabilities, is to minimize the number of false positives.
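A hedged sketch of such a two-stage pipeline follows: an auditor model is prompted repeatedly to enumerate candidate vulnerabilities (favoring recall), and a critic model scores each candidate so that low-scoring findings are discarded (favoring precision). The `llm` callable and both prompts are placeholders, not the paper's actual interface or prompt design.

```python
def audit_then_criticize(llm, source_code, n_candidates=5, threshold=0.5):
    # Stage 1: auditor proposes many candidate vulnerabilities.
    candidates = []
    for _ in range(n_candidates):
        candidates.append(llm(
            "You are a security auditor. Describe one plausible vulnerability "
            f"in the following code, with reasoning:\n{source_code}"))

    # Stage 2: critic scores each finding; keep only likely-real ones.
    accepted = []
    for finding in candidates:
        score = float(llm(
            "You are a critic. On a scale of 0 to 1, how likely is this reported "
            f"vulnerability to be real?\nCode:\n{source_code}\nFinding:\n{finding}\n"
            "Answer with a single number."))
        if score >= threshold:
            accepted.append((finding, score))
    return accepted
```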
1 code implementation • 21 Feb 2023 • Tiansheng Huang, Li Shen, Yan Sun, Weiwei Lin, DaCheng Tao
Personalized federated learning, as a variant of federated learning, trains customized models for clients using their heterogeneously distributed data.
2 code implementations • 21 Feb 2023 • Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, DaCheng Tao
Federated learning is an emerging distributed machine learning framework that jointly trains a global model across a large number of local devices while preserving data privacy.
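For context, here is a minimal sketch of the canonical FedAvg round that federated learning frameworks build on: each client fine-tunes a copy of the global model locally, and the server averages the resulting weights. This is the standard baseline, not this paper's algorithm.

```python
import copy
import torch

def fedavg_round(global_model, clients, local_fn, lr=0.01, local_epochs=1):
    """clients: list of data loaders; local_fn(model, batch) returns a loss."""
    client_states = []
    for loader in clients:
        local_model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local_model.parameters(), lr=lr)
        for _ in range(local_epochs):
            for batch in loader:
                opt.zero_grad()
                local_fn(local_model, batch).backward()
                opt.step()
        client_states.append(local_model.state_dict())

    # Average client weights into the new global model (equal weighting here).
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```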
1 code implementation • 15 Jan 2023 • Fatih Ilhan, Ka-Ho Chow, Sihao Hu, Tiansheng Huang, Selim Tekin, Wenqi Wei, Yanzhao Wu, Myungjin Lee, Ramana Kompella, Hugo Latapie, Gaowen Liu, Ling Liu
Instead of having every sample go through all DNN layers during prediction, EENet learns an early-exit scheduler that can intelligently terminate inference earlier for predictions in which the model has high confidence.
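The sketch below shows the basic early-exit mechanism this builds on: intermediate classifiers attached to a backbone emit predictions, and inference stops at the first exit whose confidence clears a threshold (single-sample inference for clarity). EENet's learned scheduler replaces the fixed per-exit thresholds assumed here.

```python
import torch

def early_exit_predict(blocks, exit_heads, x, thresholds):
    """blocks: backbone stages; exit_heads: one classifier per stage."""
    h = x
    for block, head, tau in zip(blocks, exit_heads, thresholds):
        h = block(h)
        probs = torch.softmax(head(h), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= tau:   # confident enough: terminate inference early
            return pred
    return pred                  # fall through to the final exit
```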
no code implementations • 27 Jan 2022 • Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, DaCheng Tao
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
no code implementations • 29 Sep 2021 • Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, DaCheng Tao
Federated learning (FL) is particularly vulnerable to heterogeneously distributed data, since a common global model in FL may not adapt to the heterogeneous data distribution of each user.
no code implementations • 10 Feb 2021 • Tiansheng Huang, Weiwei Lin, Xiaobin Hong, Xiumin Wang, Qingbo Wu, Rui Li, Ching-Hsien Hsu, Albert Y. Zomaya
With astonishing speed, bandwidth, and scale, Mobile Edge Computing (MEC) has played an increasingly important role in the next generation of connectivity and service delivery.
no code implementations • 17 Nov 2020 • Tiansheng Huang, Weiwei Lin, Li Shen, Keqin Li, Albert Y. Zomaya
Federated Learning (FL), arising as a privacy-preserving machine learning paradigm, has received notable attention from the public.
no code implementations • 3 Nov 2020 • Tiansheng Huang, Weiwei Lin, Wentai Wu, Ligang He, Keqin Li, Albert Y. Zomaya
The client selection policy is critical to an FL process in terms of training efficiency, the quality of the final model, and fairness.
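For illustration only, here is one simple selection heuristic: sample a fraction of clients per round with probability proportional to their latest local loss, so slow-to-converge clients are revisited more often. This baseline is an assumption for exposition, not the selection policy proposed in the paper.

```python
import numpy as np

def select_clients(client_losses, fraction=0.1, rng=None):
    # Loss-weighted sampling without replacement over client indices.
    rng = rng or np.random.default_rng()
    n_selected = max(1, int(fraction * len(client_losses)))
    probs = np.asarray(client_losses, dtype=float) + 1e-12
    probs = probs / probs.sum()
    return rng.choice(len(client_losses), size=n_selected, replace=False, p=probs)
```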