no code implementations • 27 Apr 2023 • Ramneet Kaur, Yiannis Kantaros, Wenwen Si, James Weimer, Insup Lee
Nevertheless, DNN models have proven to be vulnerable to adversarial digital and physical attacks.
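As a concrete illustration of a digital attack, here is a minimal sketch of the classic fast gradient sign method (FGSM); the model, inputs, and eps value are placeholders, not the systems studied in this paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: move each input pixel by eps in the direction
    that most increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```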
no code implementations • 21 Feb 2023 • Ramneet Kaur, Xiayan Ji, Souradeep Dutta, Michele Caprio, Yahan Yang, Elena Bernardis, Oleg Sokolsky, Insup Lee
This can render current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information (e.g., training class labels).
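A toy sketch of the distinction drawn here, under the assumption that "same semantic information" means the input's label is still one of the training classes; the label set and function name are hypothetical.

```python
TRAIN_LABELS = {"cat", "dog", "horse"}  # hypothetical training classes

def is_semantic_ood(label: str) -> bool:
    """Semantic OOD: the input's class was never seen during training.
    An input from a shifted domain with an in-set label (e.g. a sketched
    'cat' when training used photos) is covariate-shifted, not semantic
    OOD, and a detector should ideally let it through."""
    return label not in TRAIN_LABELS
```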
1 code implementation • 24 Jul 2022 • Ramneet Kaur, Kaustubh Sridhar, Sangdon Park, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee
Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution.
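One standard way to make "far from the training distribution" concrete is a feature-space distance score, e.g. a Mahalanobis-style detector; this is a generic sketch, not the detection method of this paper.

```python
import numpy as np

def mahalanobis_ood_score(feats_train: np.ndarray, x_feat: np.ndarray) -> float:
    """Distance of a test feature vector from the training feature
    distribution. Larger scores mean 'farther from training' and hence
    more likely OOD."""
    mu = feats_train.mean(axis=0)
    # Small ridge term keeps the covariance invertible.
    cov = np.cov(feats_train, rowvar=False) + 1e-6 * np.eye(feats_train.shape[1])
    diff = x_feat - mu
    return float(diff @ np.linalg.inv(cov) @ diff)
```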
1 code implementation • 13 Jun 2022 • Kaustubh Sridhar, Souradeep Dutta, Ramneet Kaur, James Weimer, Oleg Sokolsky, Insup Lee
Algorithm design of adversarial training (AT) and its variants is focused on training models at a specified perturbation strength $\epsilon$, using only the feedback from the performance of that $\epsilon$-robust model to improve the algorithm.
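For reference, a minimal sketch of the fixed-$\epsilon$ setting described here: standard adversarial training with a PGD inner loop at a single perturbation strength; the model, optimizer, and hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps=10):
    """Projected gradient ascent inside the L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)  # avoids polluting param grads
        x_adv = x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """One step of standard AT: fit the model on worst-case examples
    generated at a single, fixed perturbation strength eps."""
    x_adv = pgd_attack(model, x, y, eps=eps, alpha=eps / 4)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```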
no code implementations • 7 Jan 2022 • Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Edgar Dobriban, Oleg Sokolsky, Insup Lee
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
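The conformal half of the recipe can be sketched with a standard conformal p-value; the nonconformity scores below stand in for iDECODe's equivariance-based score, which is not reproduced here.

```python
import numpy as np

def conformal_p_value(cal_scores: np.ndarray, test_score: float) -> float:
    """Conformal p-value: fraction of calibration (in-distribution)
    nonconformity scores at least as large as the test score.
    A small p-value means the input does not conform to the
    training distribution."""
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

# Flag as OOD at a chosen false-alarm level, e.g. 0.05:
# is_ood = conformal_p_value(cal_scores, score(x)) < 0.05
```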
no code implementations • 13 Aug 2021 • Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky, Insup Lee
We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach that detects OODs as data points with high epistemic or aleatoric uncertainty.
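A common way to split ensemble uncertainty into the two kinds named here is the entropy decomposition below; a generic sketch, not necessarily the paper's exact estimator.

```python
import numpy as np

def ensemble_uncertainty(probs: np.ndarray):
    """probs: (n_members, n_classes) softmax outputs for one input.
    Total predictive entropy = aleatoric (expected per-member entropy)
    + epistemic (mutual information, i.e. member disagreement)."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum()
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    epistemic = total - aleatoric
    return epistemic, aleatoric

# Flag OOD if either term exceeds its threshold:
# is_ood = epistemic > t_epi or aleatoric > t_ale
```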
no code implementations • 23 Mar 2021 • Ramneet Kaur, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee
Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs.
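A small numpy sketch of why this overconfidence arises: the usual confidence readout is the maximum softmax probability, which approaches 1 whenever one logit dominates, whether or not the input is in-distribution.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def msp_confidence(logits: np.ndarray) -> float:
    """Max softmax probability: the standard 'confidence' score.
    A large logit gap yields confidence near 1 regardless of whether
    the input came from the training distribution."""
    return float(softmax(logits).max())

print(msp_confidence(np.array([9.0, 1.0, 0.5])))  # ~0.999, even on an OOD input
```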