Search Results for author: Neil Kale

Found 1 paper, 0 papers with code

Position: LLM Unlearning Benchmarks are Weak Measures of Progress

no code implementations · 3 Oct 2024 · Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith

Unlearning methods have the potential to improve the privacy and safety of large language models (LLMs) by removing sensitive or harmful information post hoc.

