no code implementations • 23 Jan 2024 • Ee Yeo Keat, Zhang Hao, Alexander Matyasko, Basura Fernando
We introduce VidTFS, a training-free, open-vocabulary video goal and action inference framework that combines a frozen vision foundation model (VFM) and a large language model (LLM) with a novel dynamic Frame Selection module.
no code implementations • 15 Jun 2023 • Ishaan Singh Rawal, Alexander Matyasko, Shantanu Jaiswal, Basura Fernando, Cheston Tan
Consistent with the QUAG results, we find that most of the models achieve near-trivial performance on CLAVI.
3 code implementations • 3 Jun 2021 • Alexander Matyasko, Lap-Pui Chau
In this work, we introduce a fast, general and accurate adversarial attack that optimises the original non-convex constrained minimisation problem.
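For intuition about what "optimising the original constrained minimisation problem" means, consider the simplest possible case. For a linear classifier the smallest perturbation that crosses the decision boundary has a closed form; the sketch below is a toy illustration of that constrained problem only, not the paper's attack (which targets general non-convex models), and all names in it are made up for this example.

```python
import numpy as np

# Toy setup: a linear classifier sign(w @ x + b). The attack seeks the
# smallest perturbation delta such that x + delta crosses the decision
# boundary. (Illustrative only -- not the paper's algorithm.)
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 0.5])     # margin = 2.0 > 0, classified positive

margin = w @ x + b           # signed score of the clean input
tau = 0.1                    # small slack so x_adv lands past the boundary

# Closed-form minimiser of ||delta||_2 subject to w @ (x + delta) + b <= -tau
delta = -(margin + tau) * w / (w @ w)
x_adv = x + delta

assert w @ x_adv + b < 0     # prediction flipped
print(np.linalg.norm(delta)) # minimal L2 perturbation, ~0.939 here
```

For non-linear models no such closed form exists, which is why attacks in this family resort to iterative optimisation of the constrained objective.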
3 code implementations • 16 Jun 2019 • Alex Lamb, Vikas Verma, Kenji Kawaguchi, Alexander Matyasko, Savya Khosla, Juho Kannala, Yoshua Bengio
Adversarial robustness has become a central goal in deep learning, both in theory and in practice.
1 code implementation • NeurIPS 2018 • Alexander Matyasko, Lap-Pui Chau
Our main idea is that adversarial examples for a robust classifier should be indistinguishable from the regular data of the adversarial target.
13 code implementations • 3 Oct 2016 • Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Cihang Xie, Yash Sharma, Tom Brown, Aurko Roy, Alexander Matyasko, Vahid Behzadan, Karen Hambardzumyan, Zhishuai Zhang, Yi-Lin Juang, Zhi Li, Ryan Sheatsley, Abhibhav Garg, Jonathan Uesato, Willi Gierke, Yinpeng Dong, David Berthelot, Paul Hendricks, Jonas Rauber, Rujun Long, Patrick McDaniel
An adversarial example library for constructing attacks, building defenses, and benchmarking both
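As a flavour of the "constructing attacks" part, here is a minimal FGSM-style perturbation written in plain NumPy. This is a generic sketch of the Fast Gradient Sign Method on a toy logistic model, not the CleverHans API itself; the model weights and helper names are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression model p(y=1|x) = sigmoid(w @ x + b)
w = np.array([1.5, -2.0])
b = 0.0

def fgsm(x, y, eps):
    """Fast Gradient Sign Method on the cross-entropy loss.

    For this linear model the input gradient of the loss is (p - y) * w,
    so the attack steps eps in the sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w           # d loss / d x
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.2])           # score = 1.1 -> predicted class 1
x_adv = fgsm(x, y=1.0, eps=0.6)
print(w @ x + b, w @ x_adv + b)    # adversarial score is pushed below 0
```

The library wraps this kind of one-step attack, along with many iterative and optimisation-based ones, behind a common interface so that defenses can be benchmarked against them uniformly.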