no code implementations • 17 Nov 2024 • Yu Jiang, Xindi Tong, Ziyao Liu, Huanyi Ye, Chee Wei Tan, Kwok-Yan Lam
Extensive experimental results demonstrate that FedADP effectively manages the trade-off between unlearning efficiency and privacy protection.
no code implementations • 23 Aug 2024 • Chen Chen, Ziyao Liu, Weifeng Jiang, Si Qi Goh, Kwok-Yan Lam
AI Safety is an emerging area of critical importance to the safe adoption and deployment of AI systems.
no code implementations • 15 May 2024 • Weifei Jin, Yuxin Cao, Junjie Su, Qi Shen, Kai Ye, Derui Wang, Jie Hao, Ziyao Liu
In this paper, we propose an attack on ASR systems based on user-customized style transfer.
no code implementations • 8 Apr 2024 • Yihe Fan, Yuxin Cao, Ziyu Zhao, Ziyao Liu, Shaofeng Li
Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities that increasingly influence various aspects of our daily lives, continually redefining the boundary of Artificial General Intelligence (AGI).
no code implementations • 31 Mar 2024 • Jingyu Wang, Niantai Jing, Ziyao Liu, Jie Nie, Yuxin Qi, Chi-Hung Chi, Kwok-Yan Lam
Additionally, we extract inconsistent regions between coarse similar regions obtained through self-correlation calculations and regions composed of prototypes.
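The self-correlation step mentioned above can be illustrated with a minimal sketch (illustrative only, not the paper's implementation): each feature vector in a feature map is compared with every other via cosine similarity, and high off-diagonal scores flag coarse similar regions, e.g. copy-moved content.

```python
# Hedged sketch of self-correlation over a feature map: pairwise cosine
# similarity between feature vectors. Names and data are illustrative.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def self_correlation(features):
    """n x n similarity map; large off-diagonal entries mark similar regions."""
    n = len(features)
    return [[cosine(features[i], features[j]) for j in range(n)] for i in range(n)]

# Toy feature map: patches 0 and 2 are near-duplicates (a "similar region").
feats = [[1.0, 0.0], [0.0, 1.0], [0.99, 0.01]]
sim = self_correlation(feats)
assert sim[0][2] > 0.9 and sim[0][1] < 0.1
```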
no code implementations • 29 Mar 2024 • Jiani Fan, Minrui Xu, Ziyao Liu, Huanyi Ye, Chaojie Gu, Dusit Niyato, Kwok-Yan Lam
Artificial Intelligence-Generated Content (AIGC) refers to the paradigm of automated content generation utilizing AI models.
no code implementations • 20 Mar 2024 • Ziyao Liu, Huanyi Ye, Chen Chen, Yongsen Zheng, Kwok-Yan Lam
Machine Unlearning (MU) has recently gained considerable attention due to its potential to achieve Safe AI by removing the influence of specific data from trained Machine Learning (ML) models.
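The goal of MU can be made concrete with a toy sketch (my illustration, not the paper's method): "exact" unlearning means the post-unlearning model is identical to one retrained from scratch without the forgotten data. Here the "model" is just the mean of the training data, so this is trivially achievable; practical MU methods approximate it for complex ML models.

```python
# Toy sketch of exact machine unlearning. The "model" is the data mean,
# so unlearning a point reduces to recomputing the mean without it.

def train(data):
    """'Train' a toy model: return the mean of the training data."""
    return sum(data) / len(data)

def unlearn(data, forget):
    """Exactly unlearn `forget` by retraining on the retained data only."""
    retained = [x for x in data if x not in forget]
    return train(retained)

data = [1.0, 2.0, 3.0, 10.0]
model = train(data)                 # influenced by the point 10.0
unlearned = unlearn(data, {10.0})   # influence of 10.0 fully removed
retrained = train([1.0, 2.0, 3.0])
assert unlearned == retrained       # exact unlearning == retraining from scratch
```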
1 code implementation • 8 Mar 2024 • Haoxin Xu, Zezheng Zhao, Yuxin Cao, Chunyu Chen, Hao Ge, Ziyao Liu
To overcome this limitation and enhance the reconstruction of 3D structural features, we propose an innovative approach that integrates existing 2D features with 3D features to guide the model learning process.
no code implementations • 16 Jan 2024 • Yu Jiang, Jiyuan Shen, Ziyao Liu, Chee Wei Tan, Kwok-Yan Lam
Federated learning (FL) is vulnerable to poisoning attacks, where malicious clients manipulate their updates to affect the global model.
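A minimal sketch of the threat model (illustrative only, not the paper's defense): under plain federated averaging, a single malicious client can submit a boosted, sign-flipped update and skew the global model, because the mean is not robust to outliers.

```python
# Hedged sketch of a model-poisoning attack against plain FedAvg.
# All updates and scaling factors are toy values for illustration.

def fed_avg(updates):
    """Average client updates coordinate-wise (plain FedAvg aggregation)."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

honest = [[0.1, -0.2], [0.12, -0.18], [0.09, -0.21]]
global_honest = fed_avg(honest)

# A poisoning client boosts and flips the sign of its update.
malicious = [[-10.0, 20.0]]
global_poisoned = fed_avg(honest + malicious)

# One attacker flips the sign of the aggregated update.
assert global_honest[0] > 0 and global_poisoned[0] < 0
```

Robust aggregators (e.g. a coordinate-wise median instead of the mean) are a common mitigation, since a single outlier cannot move the median arbitrarily far.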
no code implementations • 17 Oct 2020 • Jiale Guo, Ziyao Liu, Kwok-Yan Lam, Jun Zhao, Yiqiang Chen, Chaoping Xing
The situation is exacerbated by the cloud-based implementation of digital services, where user data are captured and stored in distributed locations; aggregating such data for ML could therefore seriously breach privacy regulations.
Cryptography and Security • Distributed, Parallel, and Cluster Computing
no code implementations • 24 Jul 2020 • Ziyao Liu, Ivan Tjuawinata, Chaoping Xing, Kwok-Yan Lam
The application of secure multiparty computation (MPC) in machine learning, especially privacy-preserving neural network training, has attracted tremendous attention from the research community in recent years.
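A core building block behind MPC-based private training is additive secret sharing: a value is split into random shares so that no single party learns anything, yet linear operations can be done locally on the shares. A minimal two-party sketch (illustrative, not the scheme from this paper):

```python
# Hedged sketch of 2-party additive secret sharing over a prime field.
# The modulus and secrets are illustrative choices.
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus

def share(x):
    """Split secret x into two additive shares mod P."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % P

# Secure addition: each party adds its shares locally, with no communication.
a0, a1 = share(12)
b0, b1 = share(30)
assert reconstruct((a0 + b0) % P, (a1 + b1) % P) == 42
```

Multiplication of shared values needs extra machinery (e.g. Beaver triples), which is where most of the protocol cost in privacy-preserving NN training comes from.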