no code implementations • 1 Apr 2024 • Achintya Kundu, Fabian Lim, Aaron Chew, Laura Wynter, Penny Chong, Rhui Dih Lee
Supernet training of LLMs is of great interest in industrial applications, as it confers the ability to produce a palette of smaller models (of different sizes/latencies) at constant cost, regardless of how many models are produced.
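As a rough illustration of the weight-sharing idea behind supernet training (a minimal sketch under generic assumptions, not the authors' implementation): sub-models of different widths reuse slices of the same parameter tensors, so one training run amortizes the cost of all of them.

```python
import torch
import torch.nn as nn

class SliceableLinear(nn.Module):
    """Linear layer whose output width can be shrunk on the fly by slicing
    the shared weight matrix (weight-sharing supernet idea, illustrative only)."""
    def __init__(self, in_features, max_out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out_features))

    def forward(self, x, out_features):
        # Sub-models of different sizes reuse the first `out_features` rows.
        return x @ self.weight[:out_features].T + self.bias[:out_features]

# Each step can sample a different sub-width, so a whole palette of smaller
# models is trained jointly at roughly the cost of training one model.
layer = SliceableLinear(in_features=16, max_out_features=64)
x = torch.randn(8, 16)
for width in (16, 32, 64):
    y = layer(x, out_features=width)
    print(width, tuple(y.shape))
```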
no code implementations • 24 Oct 2021 • Yi Xiang Marcus Tan, Penny Chong, Jiamei Sun, Ngai-Man Cheung, Yuval Elovici, Alexander Binder
In this work, we aim to close this gap by studying a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
no code implementations • 9 Dec 2020 • Yi Xiang Marcus Tan, Penny Chong, Jiamei Sun, Ngai-Man Cheung, Yuval Elovici, Alexander Binder
In this work, we propose a detection strategy to identify adversarial support sets, which aim to destroy a few-shot classifier's understanding of a certain class.
no code implementations • 11 Nov 2020 • Penny Chong, Ngai-Man Cheung, Yuval Elovici, Alexander Binder
We compare the classification performance, explanation quality, and outlier detection of our proposed network with those of other baselines.
no code implementations • 24 Jan 2020 • Penny Chong, Lukas Ruff, Marius Kloft, Alexander Binder
However, Deep SVDD suffers from hypersphere collapse (also known as mode collapse) if the model architecture does not comply with certain constraints, e.g., the removal of bias terms.
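The constraint can be illustrated with a short sketch (assumed layer sizes and data, not the paper's code): the Deep SVDD objective pulls embeddings toward a fixed center c, and layers with bias terms would let the network satisfy it trivially by mapping every input to c, which is why bias-free layers are typically used.

```python
import torch
import torch.nn as nn

# Minimal Deep SVDD-style encoder: note bias=False throughout, one of the
# architectural constraints that helps avoid hypersphere (mode) collapse.
phi = nn.Sequential(
    nn.Linear(32, 64, bias=False),
    nn.ReLU(),
    nn.Linear(64, 16, bias=False),
)

x = torch.randn(128, 32)                  # stand-in for normal training data
with torch.no_grad():
    c = phi(x).mean(dim=0)                # center fixed from an initial pass

# One-class Deep SVDD objective: mean squared distance of embeddings to c.
# With bias terms, phi could collapse to the constant map phi(x) = c and
# drive this loss to zero without learning anything useful.
loss = ((phi(x) - c) ** 2).sum(dim=1).mean()
loss.backward()
print(float(loss))
```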