Search Results for author: Cheng-Long Wang

Found 6 papers, 1 paper with code

Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level Approximate Unlearning Completeness

no code implementations · 19 Mar 2024 · Cheng-Long Wang, Qi Li, Zihang Xiang, Yinzhi Cao, Di Wang

Our analysis, conducted across multiple unlearning benchmarks, reveals that these algorithms inconsistently fulfill their unlearning commitments due to two main issues: 1) unlearning new data can significantly affect the unlearning utility of previously requested data, and 2) approximate algorithms fail to ensure equitable unlearning utility across different groups.

Machine Unlearning
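
The paper's sample-level framing suggests auditing each deletion request individually rather than reporting one aggregate number. The sketch below is a hedged illustration of that idea, not the authors' metric: it scores per-sample completeness by comparing the unlearned model's loss on deleted samples against a retrain-from-scratch reference, then reports the worst group mean to surface the inequity issue the abstract raises. All names and the scoring form are assumptions.

    import numpy as np

    def per_sample_completeness(loss_unlearned, loss_retrained, k=4.0):
        """Hypothetical score in (0, 1]: 1.0 when the unlearned model's loss
        on a deleted sample is at least as high as a retrain-from-scratch
        reference; decays toward 0 the more memorized the sample still looks."""
        gap = loss_retrained - loss_unlearned    # > 0 means still memorized
        return np.exp(-k * np.maximum(gap, 0.0))

    def worst_group_completeness(scores, group_ids):
        """Minimum mean completeness over groups; a low value flags
        inequitable unlearning utility across groups."""
        return min(scores[group_ids == g].mean() for g in np.unique(group_ids))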

MoRAL: MoE Augmented LoRA for LLMs' Lifelong Learning

no code implementations · 17 Feb 2024 · Shu Yang, Muhammad Asif Ali, Cheng-Long Wang, Lijie Hu, Di Wang

Adapting large language models (LLMs) to new domains/tasks and enabling them to be efficient lifelong learners is a pivotal challenge.
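
The title points to a mixture-of-experts gate routing among LoRA adapters. Below is a minimal sketch of a generic MoE-over-LoRA layer, assuming a standard token-wise softmax router over frozen base weights; it is not MoRAL's actual architecture, and every name here is hypothetical.

    import torch
    import torch.nn as nn

    class MoELoRALinear(nn.Module):
        """A frozen linear layer augmented with a mixture of LoRA experts
        (generic sketch, not MoRAL's exact design)."""

        def __init__(self, base: nn.Linear, n_experts: int = 4, rank: int = 8):
            super().__init__()
            self.base = base.requires_grad_(False)  # keep pretrained weights frozen
            d_in, d_out = base.in_features, base.out_features
            self.A = nn.Parameter(torch.randn(n_experts, d_in, rank) * 0.01)
            self.B = nn.Parameter(torch.zeros(n_experts, rank, d_out))
            self.router = nn.Linear(d_in, n_experts)  # token-wise gating

        def forward(self, x):                          # x: (..., d_in)
            gates = torch.softmax(self.router(x), dim=-1)             # (..., E)
            delta = torch.einsum('...i,eir,ero->...eo', x, self.A, self.B)
            return self.base(x) + torch.einsum('...e,...eo->...o', gates, delta)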

Communication Efficient and Provable Federated Unlearning

no code implementations · 19 Jan 2024 · Youming Tao, Cheng-Long Wang, Miao Pan, Dongxiao Yu, Xiuzhen Cheng, Di Wang

We start by giving a rigorous definition of exact federated unlearning, which guarantees that the unlearned model is statistically indistinguishable from the one trained without the deleted data.

Federated Learning
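
The quoted definition admits a standard formalization. As a sketch with assumed notation (a learning algorithm A, an unlearning operator U, a dataset D, and a deleted set S ⊆ D), exact unlearning asks that the unlearned model be equal in distribution to retraining from scratch:

    \mathcal{U}\bigl(\mathcal{A}(D),\,D,\,S\bigr) \overset{d}{=} \mathcal{A}(D\setminus S)
    \quad\Longleftrightarrow\quad
    \Pr\bigl[\mathcal{U}(\mathcal{A}(D),D,S)\in T\bigr]
      = \Pr\bigl[\mathcal{A}(D\setminus S)\in T\bigr]
    \ \text{for every measurable set of models } T.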

Differentially Private Non-convex Learning for Multi-layer Neural Networks

no code implementations · 12 Oct 2023 · Hanpu Shen, Cheng-Long Wang, Zihang Xiang, Yiming Ying, Di Wang

This paper focuses on the problem of Differentially Private Stochastic Optimization for (multi-layer) fully connected neural networks with a single output node.

Stochastic Optimization
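
Differentially private analyses of non-convex networks are usually built on DP-SGD-style updates: clip each example's gradient, add Gaussian noise, and average. The sketch below shows only that standard mechanism; it is not the paper's specific algorithm, and the function and parameter names are assumptions.

    import numpy as np

    def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0):
        """One DP-SGD step: clip each per-example gradient to L2 norm `clip`,
        sum, add Gaussian noise with std `noise_mult * clip`, then average."""
        n = len(per_example_grads)
        clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]
        noise = noise_mult * clip * np.random.standard_normal(params.shape)
        return params - lr * (sum(clipped) + noise) / n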

Inductive Graph Unlearning

1 code implementation · 6 Apr 2023 · Cheng-Long Wang, Mengdi Huai, Di Wang

To extend machine unlearning to graph data, GraphEraser has been proposed.

Fairness · Graph Learning +2
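
GraphEraser instantiates the shard-based unlearning recipe: partition the graph, train one sub-model per shard, and serve a deletion by retraining only the shard that held the deleted node. The sketch below shows that general recipe with simplified stand-ins; GraphEraser's balanced partitioning and learned aggregation are abstracted behind the assumed `partition` and `train_fn` interfaces.

    def train_sharded(graph, partition, train_fn):
        """Train one sub-model per shard; sub-model predictions are
        aggregated at inference time (aggregation omitted here)."""
        shards = partition(graph)                # e.g. balanced node clusters
        return shards, [train_fn(s) for s in shards]

    def unlearn_node(node, shards, models, train_fn):
        """Delete `node` and retrain only its shard, not the whole model."""
        for i, shard in enumerate(shards):
            if node in shard.nodes:
                shard.remove_node(node)
                models[i] = train_fn(shard)      # cost: one shard retrain
                break
        return models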

High Dimensional Statistical Estimation under Uniformly Dithered One-bit Quantization

no code implementations · 26 Feb 2022 · Junren Chen, Cheng-Long Wang, Michael K. Ng, Di Wang

In the heavy-tailed regime, while the rates of our estimators become essentially slower, these results are either the first in a 1-bit quantized and heavy-tailed setting or already improve on comparable existing results in some respects.

Low-Rank Matrix Completion · Quantization +1
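
The workhorse fact behind uniform dithering is that a scaled one-bit measurement is unbiased. As a standard sketch with notation assumed here (not necessarily the paper's): for a value x with |x| ≤ λ and dither τ drawn uniformly from [-λ, λ],

    q = \operatorname{sign}(x + \tau), \qquad
    \mathbb{E}[\lambda q]
      = \lambda\bigl(\Pr[\tau > -x] - \Pr[\tau \le -x]\bigr)
      = \lambda \cdot \frac{(\lambda + x) - (\lambda - x)}{2\lambda} = x,

so averaging many independently dithered one-bit samples recovers x while storing only one bit per measurement.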
