no code implementations • 15 Feb 2021 • Mingu Kang, Trung Quang Tran, Seungju Cho, Daeyoung Kim
Adversarial attacks aim to fool a target classifier with imperceptible perturbations.
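As an illustration of the idea (not the paper's own method), a fast-gradient-sign-style attack on a toy linear classifier can be sketched as follows; the weights, input, and budget `eps` are all made-up values chosen so the perturbation flips the prediction:

```python
import numpy as np

# Toy linear classifier: score = w @ x, label = sign(score).
w = np.array([1.0, -2.0, 0.5])   # fixed, hypothetical classifier weights
x = np.array([0.3, -0.4, 0.2])   # clean input, true label y = +1

def predict(x):
    return 1 if w @ x > 0 else -1

# For a hinge-style loss L = max(0, 1 - y * (w @ x)), the gradient of the
# loss with respect to the input is -y * w, so the FGSM-style step is
# x' = x + eps * sign(grad), bounded per-coordinate by eps (L_inf budget).
y = 1
grad = -y * w

eps = 0.6                         # perturbation budget (exaggerated for the toy)
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # the prediction flips under the attack
```

In a real attack the gradient comes from backpropagation through the target network and `eps` is kept small enough that the perturbation is visually imperceptible; the closed-form gradient here is only possible because the classifier is linear.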
no code implementations • 12 Feb 2021 • Trung Quang Tran, Mingu Kang, Daeyoung Kim
We obtain promising results (4.21% error rate on CIFAR-10 with 4000 labels, 22.32% error rate on CIFAR-100 with 10000 labels, and 2.19% error rate on SVHN with 1000 labels) when the amount of labeled data is sufficient to learn a semantics-oriented similarity representation.