1 code implementation • 12 Jun 2023 • Guneet S. Dhillon, George Deligiannidis, Tom Rainforth
While conformal predictors reap the benefits of rigorous statistical guarantees on their error frequency, the size of their corresponding prediction sets is critical to their practical utility.
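To illustrate the trade-off between coverage guarantees and prediction-set size, here is a minimal split-conformal sketch (a toy setup, not the paper's method; the score and names are assumptions):

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Quantile of calibration nonconformity scores giving 1 - alpha coverage."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def prediction_set(probs, threshold):
    """All classes whose nonconformity score 1 - p falls below the threshold."""
    return [k for k, p in enumerate(probs) if 1 - p <= threshold]

# Toy calibration scores (e.g. 1 - softmax prob of the true class) and one test point.
rng = np.random.default_rng(0)
cal_scores = rng.uniform(0.0, 1.0, size=500)
tau = conformal_threshold(cal_scores, alpha=0.1)
probs = np.array([0.7, 0.2, 0.05, 0.05])
print(prediction_set(probs, tau))
```

Lowering `alpha` tightens the guarantee but raises the threshold, so the sets grow; keeping them small while preserving coverage is exactly the utility question the paper studies.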
2 code implementations • NeurIPS 2021 • Sébastien M. R. Arnold, Guneet S. Dhillon, Avinash Ravichandran, Stefano Soatto
Episodic training is a core ingredient of few-shot learning, used to train models on tasks with limited labelled data.
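A minimal sketch of the episode sampling this refers to, i.e. drawing N-way K-shot tasks with support and query splits (the function and dataset names here are illustrative assumptions):

```python
import random

def sample_episode(data_by_class, n_way=3, k_shot=2, q_queries=2, rng=random):
    """Sample a few-shot episode: an N-way K-shot support set plus a query set."""
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = rng.sample(data_by_class[cls], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]   # K shots per class
        query += [(x, label) for x in examples[k_shot:]]     # held-out queries
    return support, query

# Toy dataset: 5 classes with 10 examples each.
data = {c: [f"{c}_{i}" for i in range(10)] for c in "ABCDE"}
support, query = sample_episode(data)
print(len(support), len(query))  # 6 6
```

Each training step then fits or adapts the model on `support` and computes the loss on `query`, mimicking the evaluation protocol at train time.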
no code implementations • 30 Sep 2020 • Guneet S. Dhillon, Nicholas Carlini
Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a defense to adversarial examples that was attacked and found to be broken by the "Obfuscated Gradients" paper (Athalye et al., 2018).
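For context, a simplified sketch of the SAP idea (an assumed simplification, not the authors' code): sample which activations to keep with probability proportional to their magnitude, then rescale survivors so the layer output is roughly unbiased in expectation.

```python
import numpy as np

def sap(activations, n_samples, rng):
    """Stochastically prune activations, keeping high-magnitude units more often."""
    a = np.asarray(activations, dtype=float)
    p = np.abs(a) / np.abs(a).sum()        # sampling distribution over units
    keep = np.zeros_like(a, dtype=bool)
    idx = rng.choice(a.size, size=n_samples, replace=True, p=p)
    keep[idx] = True
    # Inverse-probability scaling: divide by P(unit survives at least one draw).
    scale = 1.0 / (1.0 - (1.0 - p) ** n_samples)
    return np.where(keep, a * scale, 0.0)

rng = np.random.default_rng(0)
acts = np.array([4.0, -2.0, 1.0, 0.5, 0.25])
print(sap(acts, n_samples=3, rng=rng))
```

The randomness was intended to make gradients noisy for an attacker, which is the property the follow-up analyses re-examined.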
3 code implementations • ICLR 2020 • Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto
When fine-tuned transductively, this outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS and FC-100 with the same hyper-parameters.
1 code implementation • ICLR 2018 • Guneet S. Dhillon, Kamyar Azizzadenesheli, Zachary C. Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, Anima Anandkumar
Neural networks are known to be vulnerable to adversarial examples.