1 code implementation • 21 Feb 2024 • Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein
It has recently been shown that adversarial attacks on large language models (LLMs) can "jailbreak" these models into making harmful statements.
1 code implementation • 28 Feb 2023 • Alex Stein, Avi Schwarzschild, Michael Curry, Tom Goldstein, John Dickerson
It has been shown that neural networks can be used to approximate optimal mechanisms while satisfying the constraints that an auction be strategyproof and individually rational.
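The sentence above refers to differentiable mechanism design in the spirit of RegretNet (Dütting et al.): a network is trained to pick allocations and payments that maximize revenue, with a regret penalty pushing it toward strategyproofness and a payment cap enforcing individual rationality. The sketch below is a minimal, illustrative PyTorch version, not the authors' code; the single-item setting, network sizes, random-misreport regret estimate, and all hyperparameters are assumptions.

```python
# Minimal sketch of a RegretNet-style learned auction (illustrative only).
# Single item, N_BIDDERS bidders with valuations drawn from U[0, 1].
import torch
import torch.nn as nn

N_BIDDERS = 3

class AuctionNet(nn.Module):
    def __init__(self, n=N_BIDDERS, hidden=64):
        super().__init__()
        self.n = n
        self.body = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        # Allocation head: softmax over n bidders plus a "no sale" slot,
        # so allocation probabilities are feasible (sum to at most 1).
        self.alloc = nn.Linear(hidden, n + 1)
        # Payment head: a fraction in [0, 1] of each bidder's reported value
        # times their allocation, which enforces individual rationality.
        self.pay_frac = nn.Linear(hidden, n)

    def forward(self, bids):
        h = self.body(bids)
        alloc = torch.softmax(self.alloc(h), dim=-1)[..., :self.n]
        pay = torch.sigmoid(self.pay_frac(h)) * alloc * bids
        return alloc, pay

def utility(vals, alloc, pay):
    # Quasi-linear utility: value of what you win minus what you pay.
    return alloc * vals - pay

net = AuctionNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 1.0  # weight on the regret (approximate strategyproofness) penalty

for step in range(200):
    vals = torch.rand(128, N_BIDDERS)           # truthful valuations
    alloc, pay = net(vals)
    truthful_u = utility(vals, alloc, pay)
    revenue = pay.sum(-1).mean()

    # Crude regret estimate: try a random misreport for each bidder and
    # measure how much utility they could gain by lying. (RegretNet runs an
    # inner optimization over misreports instead of random sampling.)
    regret = torch.zeros(())
    for i in range(N_BIDDERS):
        mis = vals.clone()
        mis[:, i] = torch.rand(128)
        a2, p2 = net(mis)
        u2 = utility(vals[:, i], a2[:, i], p2[:, i])
        regret = regret + torch.relu(u2 - truthful_u[:, i]).mean()

    loss = -revenue + lam * regret
    opt.zero_grad()
    loss.backward()
    opt.step()
```

With the payment capped by reported value times allocation, no truthful bidder can end up with negative utility, and driving the regret term to zero means no bidder can profit by misreporting, which is the pair of constraints (individual rationality and strategyproofness) named in the abstract.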