no code implementations • 1 Jun 2023 • Natalie Abreu, Nathan Vaska, Victoria Helus
We evaluate whether the method increases semantic alignment by measuring model performance on adversarially perturbed data, on the premise that an adversary should find it easier to flip a class to one with a similar representation.
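The evaluation described above rests on perturbing inputs adversarially and checking which class the model switches to. A minimal sketch of that idea, using a toy linear softmax classifier and a single FGSM-style gradient-sign step (the classifier weights and epsilon here are hypothetical stand-ins, not the paper's setup):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, y, W, b, eps):
    """One gradient-sign (FGSM) step: nudge x in the direction that
    increases the cross-entropy loss of the linear classifier (W, b)."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)  # d(loss)/dx for softmax + cross-entropy
    return x + eps * np.sign(grad_x)

# toy 3-class, 4-feature classifier (hypothetical weights)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x = rng.normal(size=4)
y = int(np.argmax(W @ x + b))  # treat the clean prediction as the label

x_adv = fgsm_perturb(x, y, W, b, eps=0.5)
clean_pred = int(np.argmax(W @ x + b))
adv_pred = int(np.argmax(W @ x_adv + b))
```

Comparing `clean_pred` against `adv_pred` over a dataset, and checking *which* class each input flips to, is the kind of probe the abstract describes: if representations are semantically aligned, flips should land on similar classes.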
no code implementations • 1 Jun 2023 • Nathan Vaska, Victoria Helus
The impressive advances and applications of large language and joint language-and-visual understanding models have led to an increased need for methods of probing their potential reasoning capabilities.
no code implementations • 21 Nov 2022 • Natalie Abreu, Nathan Vaska, Victoria Helus
Most robust training techniques aim to improve model accuracy on perturbed inputs; as an alternative form of robustness, we aim to reduce the severity of mistakes made by neural networks in challenging conditions.
no code implementations • 17 Mar 2022 • Nathan Vaska, Kevin Leahy, Victoria Helus
In this work, we leverage contextual awareness for the anomaly detection problem.
no code implementations • 8 Jul 2020 • Justin Goodwin, Olivia Brown, Victoria Helus
Recent work in adversarial training, a form of robust optimization in which the model is optimized against adversarial examples, demonstrates the ability to reduce sensitivity to perturbations and to yield feature representations that are more interpretable.
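Adversarial training, as described above, alternates between generating adversarial examples and optimizing the model on them. A minimal sketch with a logistic classifier on toy 2-D data, using one FGSM gradient-sign step per batch to generate the adversarial inputs (the data, step sizes, and epsilon are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_train(X, y, eps=0.1, lr=0.5, steps=200):
    """Adversarial training loop: at each step, build FGSM perturbations
    of the batch, then take a gradient step on the loss at those
    worst-case inputs instead of the clean ones."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad_x = np.outer(p - y, w)        # d(loss)/dX, one row per example
        X_adv = X + eps * np.sign(grad_x)  # FGSM adversarial examples
        p_adv = sigmoid(X_adv @ w + b)
        err = p_adv - y
        w -= lr * X_adv.T @ err / len(y)   # optimize against adversarial batch
        b -= lr * err.mean()
    return w, b

# toy linearly separable data (hypothetical)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = adv_train(X, y)
acc = float(((sigmoid(X @ w + b) > 0.5) == y).mean())
```

The inner min-max structure (maximize loss over the perturbation, minimize over the weights) is what distinguishes this from standard training; real implementations typically use stronger multi-step attacks such as PGD in place of the single FGSM step.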