Search Results for author: Shengyuan Hu

Found 12 papers, 7 papers with code

Position: LLM Unlearning Benchmarks are Weak Measures of Progress

no code implementations · 3 Oct 2024 · Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith

Unlearning methods have the potential to improve the privacy and safety of large language models (LLMs) by removing sensitive or harmful information post hoc.

Position

Jogging the Memory of Unlearned LLMs Through Targeted Relearning Attacks

no code implementations · 19 Jun 2024 · Shengyuan Hu, Yiwei Fu, Zhiwei Steven Wu, Virginia Smith

Machine unlearning is a promising approach to mitigate undesirable memorization of training data in LLMs.

Machine Unlearning · Memorization

Privacy Amplification for the Gaussian Mechanism via Bounded Support

no code implementations · 7 Mar 2024 · Shengyuan Hu, Saeed Mahloujifar, Virginia Smith, Kamalika Chaudhuri, Chuan Guo

Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
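For reference, the Gaussian mechanism named in the title adds calibrated Gaussian noise to a query answer; a bounded-support variant restricts that noise to a bounded set. A sketch in LaTeX (the truncation shown is one natural construction, assumed here for illustration rather than taken from the paper):

\[
M(x) = f(x) + Z, \qquad Z \sim \mathcal{N}(0, \sigma^2 I),
\]
which classically satisfies $(\varepsilon, \delta)$-DP when $\sigma \ge \Delta_2 \sqrt{2 \ln(1.25/\delta)} / \varepsilon$, with $\Delta_2$ the $\ell_2$ sensitivity of $f$. A bounded-support variant instead samples the noise from a restricted Gaussian, e.g.
\[
Z \sim \mathcal{N}(0, \sigma^2 I) \ \text{conditioned on } \|Z\| \le C,
\]
and the question raised by the title is how such bounded support amplifies fine-grained (pDP/FIL-style) guarantees.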

Guardrail Baselines for Unlearning in LLMs

1 code implementation · 5 Mar 2024 · Pratiksha Thaker, Yash Maurya, Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith

Recent work has demonstrated that finetuning is a promising approach to 'unlearn' concepts from large language models.
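The "guardrail" baselines of the title operate at the prompt/output layer rather than in the model weights. A hypothetical sketch of one such filter-style guardrail; the blocklist contents, the model_generate callable, and the refusal string are all illustrative assumptions, not the paper's exact baseline:

# Minimal filter-style guardrail sketch. BLOCKLIST stands in for terms
# tied to a hypothetical "forget set"; model_generate is any callable
# that maps a prompt string to a reply string.
BLOCKLIST = {"example_secret", "example_person"}

def guarded_generate(model_generate, prompt: str) -> str:
    """Generate a reply, then suppress it if it mentions blocked content."""
    reply = model_generate(prompt)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "Sorry, I can't discuss that topic."
    return reply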

No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices

1 code implementation · 25 Feb 2024 · Qi Pang, Shengyuan Hu, Wenting Zheng, Virginia Smith

Advances in generative models have made it possible for AI-generated text, code, and images to mirror human-generated content in many applications.


Federated Learning as a Network Effects Game

no code implementations · 16 Feb 2023 · Shengyuan Hu, Dung Daniel Ngo, Shuran Zheng, Virginia Smith, Zhiwei Steven Wu

Federated Learning (FL) aims to foster collaboration among a population of clients to improve the accuracy of machine learning without directly sharing local data.

Federated Learning
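As background for the entry above, the core FL step is a weighted average of client model updates. A minimal numpy sketch under assumed inputs; this illustrates plain federated averaging, not the paper's network-effects analysis:

import numpy as np

def fedavg(client_updates, client_sizes):
    # Weighted average of client model vectors: the basic aggregation
    # step that lets clients collaborate without sharing raw data.
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))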

On Privacy and Personalization in Cross-Silo Federated Learning

1 code implementation · 16 Jun 2022 · Ziyu Liu, Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith

While the application of differential privacy (DP) has been well-studied in cross-device federated learning (FL), there is a lack of work considering DP and its implications for cross-silo FL, a setting characterized by a limited number of clients each containing many data subjects.

Federated Learning · Multi-Task Learning
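A common primitive behind DP guarantees in federated training, and a plausible building block for the silo-level setting studied above, is clipping each update and adding Gaussian noise. This is a generic DP-SGD-style sketch with assumed, uncalibrated constants, not the paper's specific mechanism:

import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Bound the update's L2 norm, then add Gaussian noise scaled to the
    # clip bound. Constants are illustrative and not calibrated to any
    # target (epsilon, delta).
    if rng is None:
        rng = np.random.default_rng(0)
    update = np.asarray(update, dtype=float)
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)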

FedSynth: Gradient Compression via Synthetic Data in Federated Learning

1 code implementation · 4 Apr 2022 · Shengyuan Hu, Jack Goetz, Kshitiz Malik, Hongyuan Zhan, Zhe Liu, Yue Liu

Model compression is important in federated learning (FL) with large models to reduce communication cost.

Federated Learning · Model Compression

Fair Federated Learning via Bounded Group Loss

no code implementations · 18 Mar 2022 · Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith

In particular, we explore and extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness.

Fairness · Federated Learning
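For readers unfamiliar with the constraint named in the title, Bounded Group Loss requires that no group's expected loss exceed a fixed threshold. In the usual formulation (stated here as a sketch; notation may differ from the paper's):

\[
L_a(h) \;=\; \mathbb{E}\big[\ell(h(x), y) \,\big|\, A = a\big] \;\le\; \gamma
\quad \text{for every group } a,
\]
so a single tolerance $\gamma$ upper-bounds the loss incurred by each protected group.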

Private Multi-Task Learning: Formulation and Applications to Federated Learning

1 code implementation · 30 Aug 2021 · Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith

Many problems in machine learning rely on multi-task learning (MTL), in which the goal is to solve multiple related machine learning tasks simultaneously.

BIG-bench Machine Learning · Distributed Optimization · +2
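One common way to make "solving related tasks simultaneously" concrete is a regularized objective that couples per-task models. This is a generic MTL formulation, given only as context and not necessarily the one used in the paper:

\[
\min_{w_1,\dots,w_T}\; \sum_{t=1}^{T} L_t(w_t) \;+\; \lambda \sum_{t=1}^{T} \big\| w_t - \bar{w} \big\|_2^2,
\qquad \bar{w} = \tfrac{1}{T}\sum_{t=1}^{T} w_t,
\]
where each task $t$ fits its own model $w_t$ while the penalty pulls all models toward a shared mean.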

A New Defense Against Adversarial Images: Turning a Weakness into a Strength

1 code implementation · NeurIPS 2019 · Tao Yu, Shengyuan Hu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger

Natural images are virtually surrounded by low-density misclassified regions that can be efficiently discovered by gradient-guided search, enabling the generation of adversarial images.

Adversarial Defense
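The "gradient-guided search" mentioned above is, in its simplest form, a single signed-gradient step (FGSM). A minimal PyTorch sketch, shown to illustrate the search the defense exploits rather than the defense itself; the model, inputs, and epsilon budget are assumptions:

import torch
import torch.nn.functional as F

def fgsm_step(model, x, y, eps=8/255):
    # One gradient-guided step: move x in the sign of the loss gradient
    # to reach a nearby misclassified point (an adversarial image).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()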
